#27 |
2022 - 2023 |
|
|
Virtual Reality, XR |
XR, Unity, Oculus, Metaverse |
|
|
Abstract
A virtual reality-based serious application for conducting classes and meetings in the virtual world of the Metaverse. It provides a virtual learning environment where teachers and students can present course materials, interact with other members of the virtual class, and work together in groups.
#26 |
2021 - 2022 |
|
|
Knowledge Graph, Graph Visualization |
Node.js, Angular, Express, MongoDB, JSON, React D3 |
|
|
Abstract
Citation-Rope is a visualization of citation graphs: a unique, visual tool that helps researchers and applied scientists find and explore papers relevant to their field of work. With Citation-Rope you can easily build force-directed graphs that pull out papers on similar topics. It is a handy tool that complements traditional database discovery using keywords and citation links, and it is especially helpful when you are building your understanding for a literature review. You enter an origin paper, and Citation-Rope generates a force-directed graph that shows that section of paper-space and its interconnections.
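The project's stack is Node.js/D3, but the core idea (a citation graph laid out with a force simulation) can be sketched in a few lines of Python with networkx; the paper IDs below are invented placeholders.

```python
import networkx as nx

# Toy citation graph: nodes are hypothetical paper IDs, and each edge
# points from a citing paper to the paper it cites.
G = nx.DiGraph()
G.add_edges_from([
    ("origin", "paperA"), ("origin", "paperB"),
    ("paperA", "paperC"), ("paperB", "paperC"),
])

# A force-directed (spring) layout places strongly connected papers
# close together, which is what makes clusters of similar topics visible.
pos = nx.spring_layout(G, seed=42)  # maps each node to an (x, y) position
```

In the real tool, the same layout is computed client-side by D3's force simulation and rendered interactively.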
#25 |
2021 - 2022 |
|
|
Computer Networks, Protocol Analysis |
Wireshark, PHP/MySQL, HTML/CSS, Bootstrap |
|
|
Abstract
This project concerns what happens in a network when communication is done through packets. In order to analyze a packet, you first need to capture it. This web application only analyzes packets; it does not capture them. Packets can be captured with Wireshark, and queries are then applied to analyze the captured packets. Information about different protocols, e.g., HTTP, DNS, Telnet, ARP, and FTP, is analyzed, and malicious packets are detected. The user provides the input file to be analyzed. The website is hosted through a third party. The product is built using Visual Studio Code and uses a MySQL database; the user needs Internet access and a web browser to use the application. The purpose of this web application is to let a user know what kind of activity is taking place on a network.
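The project itself is PHP/MySQL, but the analysis step (querying an exported capture rather than live traffic) can be illustrated in Python; the CSV layout and the "Telnet is suspicious" rule below are illustrative assumptions, not the project's actual schema.

```python
import io
import pandas as pd

# Tiny stand-in for a Wireshark CSV export (File > Export Packet
# Dissections > As CSV); the column names here are illustrative.
csv = io.StringIO("""no,source,destination,protocol,info
1,10.0.0.2,10.0.0.1,HTTP,GET /index.html
2,10.0.0.2,8.8.8.8,DNS,Standard query example.com
3,10.0.0.3,10.0.0.1,TELNET,Telnet Data
4,10.0.0.2,10.0.0.1,ARP,Who has 10.0.0.1?
""")
packets = pd.read_csv(csv)

# "Queries" over the capture, like the web app's analysis step:
# how much traffic each protocol accounts for.
per_protocol = packets["protocol"].value_counts()

# Flag cleartext remote-login traffic as suspicious (a simple example rule).
suspicious = packets[packets["protocol"] == "TELNET"]
```

In the deployed app the equivalent queries would run as SQL against the MySQL tables the capture file is loaded into.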
#24 |
2020 - 2021 |
|
|
Knowledge Graph, Semantic Web, Health Informatics |
Google Colab, Python, TensorFlow, Spacy |
|
|
Abstract
A knowledge graph acquires and integrates information into an ontology and applies a reasoner to derive new knowledge. In other words, a knowledge graph is a programmatic way to model a knowledge domain with the help of subject-matter experts, data interlinking, and machine learning algorithms. It is a powerful way of representing data because it can be built automatically and can then be explored to reveal new insights about the domain, while making data retrieval fast.
The characteristics of a knowledge graph are that it mainly describes real-world entities and their interrelations, organized in a graph; defines possible classes and relations of entities in a schema; allows for potentially interrelating arbitrary entities with each other; and covers various topical domains.
Knowledge graphs on the Semantic Web are usually provided as Linked Data. Linked Open Data allows the extension of data models and easy updates, and makes integrating and browsing complex data easier and much more efficient. We construct our knowledge graph from Linked Open Data because our problem needs live, up-to-date information about drugs and their relations, as new drugs are discovered frequently.
Drug-Drug Interaction (DDI) is defined as a change in the effects of one drug caused by the presence of another drug. Drug interactions can be dangerous and even fatal, and they need to be detected. The problem is that one patient may see many different doctors who are likely unaware of possible interactions among the drugs prescribed to the patient. Patient groups such as elderly patients and cancer patients are also more likely to take multiple drugs at the same time, which increases their risk of DDIs.
#23 |
2020 - 2021 |
|
|
Machine Learning, NLP, Sentiment Analysis, Linked Open Data |
Google Colab, Python, Tweepy, Spacy, NLTK |
|
|
Abstract
Our project is mainly based on semantic algorithms and machine learning principles to find the correlation between public mood and a stock market index. According to most research, the stock market is difficult to predict; since the index depends in part on public mood, we perform public mood analysis with the help of Twitter tweets.
The Efficient Market Hypothesis states that stock market indexes are largely driven by new information and follow a random walk.
The technique we use in this project is based on Bollen et al. The raw stock market data is fed into a pre-processor to obtain processed values. At the same time, we use semantic algorithms for public mood analysis, and the resulting values are then fed into the prediction model.
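A minimal sketch of the mood-vs-index correlation step, assuming a hand-rolled word lexicon in place of the project's semantic algorithms; the tweets, word lists, and index values below are all invented for illustration.

```python
import numpy as np

# Illustrative mood lexicon (a stand-in for the semantic analysis).
POSITIVE = {"happy", "calm", "great"}
NEGATIVE = {"worried", "sad", "fear"}

def mood_score(tweet: str) -> int:
    # Positive-word count minus negative-word count.
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets_per_day = [
    ["feeling happy and calm", "great day"],
    ["worried about news", "so sad"],
    ["calm markets, happy traders"],
]
daily_mood = np.array([sum(map(mood_score, day)) for day in tweets_per_day])

# Hypothetical index closes for the same days; correlating mood with
# the index is a crude stand-in for the full Bollen-style pipeline.
index_close = np.array([102.0, 99.0, 103.0])
corr = np.corrcoef(daily_mood, index_close)[0, 1]
```

In the real pipeline the mood time series would be lagged against index returns and fed into a predictive model rather than a single correlation coefficient.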
#22 |
2020 - 2021 |
|
|
Machine Learning, Churn Prediction, Data Science |
Google Colab, Python |
|
|
Abstract
Customer churn happens in the Software-as-a-Service business just as it does in subscription-based industries such as telecommunications, but companies lack knowledge about the factors that lead customers to churn and are unable to react to it in time. Companies therefore need to research customer churn prediction in order to react to churn in time. Customer churn occurs in almost any business area, and organizations must be able to handle it properly. Advances in information technology have brought massive opportunities in customer churn research, both in prediction and in analysis. Predicting customer churn has become a very relevant topic for many large companies: especially in subscription-based business models such as Software-as-a-Service, knowing the reason for customer churn, and when it is about to happen, is essential for competing in the ecosystem.
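A churn predictor of the kind described can be sketched with scikit-learn; the usage features (logins, support tickets) and the churn rule generating the synthetic labels below are invented for illustration, not the project's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic SaaS usage data: monthly logins and support tickets;
# customers with few logins and many tickets are labeled as churned.
n = 400
logins = rng.poisson(12, n)
tickets = rng.poisson(2, n)
churned = ((logins < 11) & (tickets > 1)).astype(int)

X = np.column_stack([logins, tickets])
X_train, X_test, y_train, y_test = train_test_split(
    X, churned, test_size=0.25, random_state=0)

# A simple baseline classifier; real systems would compare several
# models and inspect which features drive the churn prediction.
model = LogisticRegression().fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

The point of such a model in practice is less the accuracy number than the ability to flag at-risk customers early enough to react.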
#21 |
2020 - 2021 |
|
|
Web Application, Responsive, Project Management |
HTML, CSS, Bootstrap, AngularJS, MongoDB, Full Stack |
|
|
Abstract
The Final Year Project is an important aspect of higher-education degrees: it enables students to apply a broad range of knowledge to develop a project that brings together the knowledge and competences acquired throughout their studies while developing other useful skills. To make this opportunity more accessible, we aim to develop a complete FYP management system for faculty and students.
The name of this project is AichQ - Project Management System for BS FYPs. The main purpose of this project is to develop an online system that reduces the workload of the FYP Committee in managing the workflow of FYPs. Besides reducing the workload, it also speeds things up for the students by making the data organized and accessible. Implementing this system would provide a lot of convenience to the BS students and the FYP Committee of Air University Islamabad while saving time and cost.
AichQ is a complete project management system for FYP students and faculty. The system itself will be a full-stack web application. All three phases of the FYP and its components, from idea to evaluation, will be handled by the system. Students will be able to select a project topic and then propose their idea to the respective supervisor. The system will provide the ability to submit documents and other deliverables, with notifications for upcoming deadlines. Supervisors will be able to evaluate the work submitted by students through this system.
#20 |
2019 - 2020 |
|
|
Machine Learning, Speech Analysis, IMCC |
PyCharm, TensorFlow, Librosa, Keras, Scipy, Sk-learn, Matplotlib, Numpy, Pandas |
|
|
Abstract
This thesis topic is about the detection of emotions through speech patterns. The main concept behind our project is Speech Emotion Recognition (SER), the detection of human emotions from speech. It builds on the observation that the human voice often reflects the emotion behind it. The main classes of emotions we worked on are happiness, sadness, fear, neutral, anger, boredom, and disgust.
One of the main objectives of our project was the collection of audio data. With the help of different databases, we obtained the required datasets, and features were then extracted from the audio files. Every audio file has various underlying features which form the basis of differentiation among the audios, such as pitch, frequency, MFCCs, and spectral roll-off. For the implementation, we worked with Chroma Short-Time Fourier Transform, RMSE, spectral roll-off, spectral centroid, spectral flux, spectral bandwidth, Zero Crossing Rate (ZCR), and 20 sets of MFCCs.
The basic problem we solved was maximizing the accuracy of the detected emotion. A single feature applied to an algorithm generates an accuracy value that might be totally different from the value generated by a combination of two features applied to the same algorithm.
We chose three different datasets: RAVDESS, SAVEE, and EMODB Berlin. For each of these datasets we extracted the features and combined them into feature vectors. Once that was done, three different machine learning models (CNN, MLP, and SVM) were introduced. We applied these algorithms one by one on each dataset to find the accuracy values. All three datasets generated different accuracy values for each of the models. We found that the EMODB Berlin dataset worked best for us: when SVM was applied to it, it gave an accuracy of 82.00%, which is higher than the research paper we referred to for this experimentation.
We worked with deep learning as well and chose to apply CNN to one of the datasets. WEKA does not support CNN, so we implemented it in Python. The accuracy we obtained was 72.00% on EMODB Berlin.
After these two, the same dataset was used with MLP, which gave the highest accuracy of 85.35%.
The implicit aim was the processing of the extracted feature set by machine learning. We applied different algorithms to detect the emotions and measured their accuracy. Since CNN could not be tested on WEKA, a Python implementation was used. This way we had accuracy values for each of the three datasets under three different machine learning algorithms.
We analyzed and compared the values from all of these approaches. Our goal was to find the best-fit algorithm for a specific feature set, which turned out to be an achievement for us since we got 82.00% and 85.35% with SVM and MLP, respectively.
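The feature-extraction-then-SVM pipeline above can be sketched end to end. This sketch hand-rolls two of the named features (ZCR and spectral centroid) instead of using the project's Librosa pipeline, and uses synthetic low- vs high-pitched clips as stand-ins for two emotion classes.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
SR = 16000  # sample rate in Hz

def zcr(x):
    # Zero Crossing Rate: fraction of adjacent samples with a sign change.
    return float(np.mean(np.abs(np.diff(np.sign(x))) > 0))

def spectral_centroid(x):
    # Magnitude-weighted mean frequency of the spectrum.
    mags = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, 1 / SR)
    return float((freqs * mags).sum() / mags.sum())

def make_clip(f0, noise=0.05):
    # Half a second of a pure tone plus a little noise.
    t = np.arange(0, 0.5, 1 / SR)
    return np.sin(2 * np.pi * f0 * t) + noise * rng.standard_normal(t.size)

def features(x):
    return [zcr(x), spectral_centroid(x)]

# Two invented "emotion" classes: low-pitched vs high-pitched clips.
X = [features(make_clip(150)) for _ in range(20)] + \
    [features(make_clip(900)) for _ in range(20)]
y = [0] * 20 + [1] * 20

clf = SVC(kernel="rbf").fit(X[::2], y[::2])   # even indices: train
accuracy = clf.score(X[1::2], y[1::2])        # odd indices: test
```

In the real experiments the feature vectors combine many more descriptors (MFCCs, spectral flux, bandwidth, RMSE), and the same train/test protocol is repeated per dataset and per model.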
#19 |
2019 - 2020 |
Funded By: IGNITE NGIRI (PKR 58,000)
|
|
Machine Learning, Health Informatics, Gait Analysis, Wrist Wearable, IOT |
Pandas, Matplotlib, Sklearn, Glob, TensorFlow, Apache POI, Google API, SQLite, Android Studio, Fitbit API |
|
|
Abstract
Fall-related injuries are the most common cause of accidental death in those over the age of 60, resulting in approximately 41 fall-related deaths per 100,000 people per year. Billions of dollars are spent treating these fall injuries. With this in mind, fall detection has become a major challenge in the public healthcare domain, especially for the elderly as their physical fitness declines; timely and reliable surveillance is necessary to mitigate the negative effects of falls. An effective fall detection system is needed to lessen this immense cost, as well as the pain and agony the elderly bear due to fall injuries. We are contributing our part by creating a handy and efficient software-driven hardware system that intends to give timely services to our subjects. Our project is a Smart Environment concept that aims at providing ease to its users. In this paper, we have designed a model to predict falls using the Naïve Bayes classifier. Three datasets are used for the research: SmartFall, FARSEEING, and a mobile dataset. Two of these were secondary datasets, whereas one dataset was collected by our own team member by attaching an Android mobile device to the wrist. The data we require for detecting falls is accelerometer sensor data, so a 3-axial accelerometer is required for this research. However, because a cost-effective and handy system is needed, we propose the use of a wrist wearable device as the medium to get data from. A prototype Android application was also created to make clear how our fall detector is supposed to work in a real-time situation.
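The Naïve Bayes fall model above can be sketched as follows: window the 3-axis accelerometer stream, compute magnitude features per window, and classify. The synthetic windows (quiet activity vs an impact spike) and the two features are illustrative assumptions, not the project's exact feature set.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)

def window_features(acc):
    # acc: (n_samples, 3) accelerometer window; features from the
    # acceleration magnitude, which spikes sharply during a fall.
    mag = np.linalg.norm(acc, axis=1)
    return [mag.max(), mag.std()]

def adl_window():
    # Activity of daily living: about 1 g with small wobble (synthetic).
    return rng.normal([0.0, 0.0, 1.0], 0.1, size=(50, 3))

def fall_window():
    # Fall: a large acceleration spike on top of normal movement.
    w = rng.normal([0.0, 0.0, 1.0], 0.1, size=(50, 3))
    w[25] += [0.0, 0.0, 3.0]  # impact spike
    return w

X = [window_features(adl_window()) for _ in range(30)] + \
    [window_features(fall_window()) for _ in range(30)]
y = [0] * 30 + [1] * 30

clf = GaussianNB().fit(X[::2], y[::2])   # even indices: train
accuracy = clf.score(X[1::2], y[1::2])   # odd indices: test
```

On a wrist wearable the same windowing would run continuously, with a detected fall triggering the alert flow in the companion Android app.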
#18 |
2019 - 2020 |
Funded By: IGNITE NGIRI (PKR 55,000)
|
|
Machine Learning, Health Informatics, Supervised Learning |
Pandas, Numpy, Matplotlib, Scikit-Learn, Google Colab |
|
|
Abstract
Diabetes mellitus has affected 382 million people worldwide, and the number of people with type 2 diabetes is increasing in every region. Diabetes can cause many complications if left untreated. By combining large datasets with human intelligence, machine learning has made it possible for medical professionals to improve disease diagnostics. We can apply machine learning strategies for classification to a dataset that represents a community at high risk of developing diabetes.
The sample for this research was the PIMA Indian population. Since 1965, this population has been under continuous study by the National Institute of Diabetes and Digestive and Kidney Diseases due to its high prevalence of diabetes. With the help of this patient dataset, we can make accurate predictions of how likely an individual is to develop diabetes and then take appropriate action.
The goal of this project is to predict type 2 diabetes from this dataset, using a machine learning model trained to predict diabetes mellitus before it hits. Multiple machine learning algorithms are compared to select the best one.
Our study begins with a thorough look at how researchers who used the same dataset tackled the same question. This allowed us to develop an understanding of the data and prepare the way for our report, especially as the authors proposed alternative approaches worth studying.
Four machine learning classifiers are used to train multiple models that predict positive or negative outcomes. BMI (Body Mass Index), Age, and Glucose Level are the given set of inputs. Based on those features, we predict whether or not a patient has diabetes.
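The compare-several-classifiers step can be sketched with scikit-learn. Synthetic data stands in for the PIMA dataset here (no real patient data), and the four candidate models are reasonable examples rather than the project's confirmed choices.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the PIMA dataset: three features playing the
# role of Glucose, BMI, and Age.
X, y = make_classification(n_samples=300, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)

# Compare four classifiers by cross-validated accuracy, then keep the best.
candidates = {
    "logreg": LogisticRegression(),
    "nb": GaussianNB(),
    "knn": KNeighborsClassifier(),
    "tree": DecisionTreeClassifier(random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
```

Cross-validation rather than a single train/test split matters here because the PIMA dataset is small, so a single split can over- or under-state a model's accuracy.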
#17 |
2019 - 2020 |
Best Project Award - Pakistan Science Fair 2019
|
|
Ontology, Sign Language, Bilingual, NLP |
NLTK, POS Tagger, Protege, RDF/RDFs, OWL |
|
|
Abstract
With an estimated nine million people with impaired hearing living in Pakistan, there is a growing need to integrate them into mainstream society through the use of efficient sign language processing technologies. The native language of the Pakistani Deaf community is Pakistan Sign Language. Pakistan Sign Language has an official website, which provides a dictionary composed of words and their associated signs. In this research, we propose a Pakistan Sign Language Ontology (PSLO) that provides the abstract conceptualization required to organize Pakistan Sign Language for both English and Urdu (the language used in Pakistan), and its instantiation on 5,500 individual terms. PSLO has been verified, validated, and assessed using well-defined procedures and tools proposed in the literature for ontology evaluation. The results show that the proposed ontology is concise, complete, and consistent. With the help of our proposed ontology, individuals with hearing disabilities can actively engage in communication with people who do not speak sign language.
#16 |
2019 - 2020 |
|
|
Knowledge Graph, Scientometrics, Citation Analysis, NLP, Semantic Web |
Spacy, Ampligraph, Tensorflow, Numpy, Pandas, Matplotlib, Seaborn, Scikit learn, Xgboost |
|
|
Abstract
Knowledge graphs on the web have become the backbone of many information retrieval systems that require access to structural knowledge, whether domain-specific or domain-independent. As the knowledge ingested increases, so does the size of the Knowledge Graph, and it becomes infeasible for large-scale Knowledge Graphs to provide the desired results due to computational inefficiency and data sparsity. To tackle this issue, different knowledge graph embedding models have been introduced. We implement Knowledge Graph embedding on the ACL Anthology Network (AAN) dataset and classify the relations between papers, based on citations' text, into the classes of the Citations' Context and Reasons Ontology (CCRO), where each class has its distinct context. For this purpose we apply Natural Language Processing (NLP) to the citations' text, convert the results into triples, apply Knowledge Graph embedding, and then classify the triples into classes.
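The embedding idea can be illustrated with a toy TransE-style score in NumPy (the project uses AmpliGraph's trained models; this hand-rolled sketch only shows the scoring principle, and the paper names and relation are invented).

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 16

# TransE intuition: a triple (head, relation, tail) is plausible when
# head + relation is close to tail in the embedding space.
entities = {e: rng.normal(size=dim) for e in ["paperA", "paperB"]}
relations = {"extends": rng.normal(size=dim)}

# Make the true triple consistent by construction for this demo.
entities["paperB"] = entities["paperA"] + relations["extends"]

def score(h, r, t):
    # Lower distance = more plausible triple (the TransE energy).
    return float(np.linalg.norm(entities[h] + relations[r] - entities[t]))

good = score("paperA", "extends", "paperB")
bad = score("paperB", "extends", "paperA")
```

In training, embeddings are adjusted so true triples from the corpus score low and corrupted triples score high; the learned vectors then feed the downstream CCRO relation classifier.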
#15 |
2019 - 2020 |
|
|
Machine Learning, Audio Pattern Recognition, Regression |
Pandas, SKlearn, pyAudioAnalysis, LibROSA, Matplotlib, Joblib, Transformer API, Numpy, Pickle |
|
|
Abstract
The originality of the proposed approach lies in its ability to automatically detect and analyze distress signals, which recurrently affect 20 to 25% of newborns. Different types of cries have different time-domain, frequency-domain, and energy-spectrum characteristics, and the differences in these acoustic parameters provide the basis for sound classification. In our project, the Mel-frequency cepstral coefficients of the voiced segment are used as the primary characteristics of the signal. Classification is performed using ensemble learning methods after a feature selection stage. The use of pre-crying signals to improve recognition quality is another important aspect of the proposed approach, optimizing the accuracy of the learning step, as shown by the results obtained on the collected dataset. This result opens the opportunity to develop new baby monitors able to predict an infant's needs.
In addition, because the cry of the baby may con
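The "feature selection, then ensemble" pipeline described above can be sketched with scikit-learn; the synthetic 13-dimensional "MFCC" vectors and the two cry types below are invented stand-ins for the collected dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)

# Synthetic "MFCC" vectors: 13 coefficients per cry, of which only the
# first two actually differ between two invented cry types.
def cry(kind, n=40):
    x = rng.normal(size=(n, 13))
    if kind == "pain":
        x[:, :2] += 2.0
    return x

X = np.vstack([cry("hunger"), cry("pain")])
y = np.array([0] * 40 + [1] * 40)

# Feature selection followed by an ensemble classifier, mirroring the
# two-stage approach described in the abstract.
clf = make_pipeline(SelectKBest(f_classif, k=4),
                    RandomForestClassifier(n_estimators=50, random_state=0))
clf.fit(X[::2], y[::2])
accuracy = clf.score(X[1::2], y[1::2])
```

The selection stage matters because most cepstral coefficients carry little class signal; pruning them before the ensemble reduces noise and training cost.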
#14 |
2019 - 2020 |
|
|
Deep Learning, Health Informatics, CNN, Supervised Learning |
Tensorflow, Github, Keras, Inception V3, Sigmoid, Max Pooling, ReLU Activation Function, Matplotlib |
|
|
Abstract
Breast cancer is the world's second most frequently occurring cancer among all types of cancer, and the most common cancer among women worldwide. There is always a need for advancement when it comes to medical imaging. Early detection of cancer, followed by proper treatment, can reduce the risk of death. Machine learning can help medical professionals diagnose the disease with more accuracy, and deep learning (neural networks) is one technique that can be used for the classification of normal and abnormal breast scans.
Deep learning is a machine learning technique in which a computer model performs classification tasks by learning directly from text, images, or sound. Models are trained on large datasets using CNN architectures containing many layers. In medical imaging, deep learning is used to detect cancer cells automatically. Training a deep convolutional network from scratch is difficult because it needs a large amount of training data.
CNN can be used for t
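Since training from scratch needs more data than is usually available, the standard workaround (and the reason Inception V3 appears in the stack) is transfer learning: reuse a pretrained backbone and train only a small classification head. A minimal Keras sketch, with `weights=None` so it builds offline (in practice you would load `weights="imagenet"`); the input size and head are assumptions.

```python
import tensorflow as tf

# Reuse Inception V3 as a feature extractor and add a sigmoid head
# for a normal-vs-abnormal (binary) output.
base = tf.keras.applications.InceptionV3(
    weights=None, include_top=False, input_shape=(128, 128, 3))
base.trainable = False  # freeze the convolutional layers

x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(base.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```

Only the final Dense layer is trained on the (small) medical imaging dataset, which is what makes the approach feasible without a huge corpus.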
#13 |
2019 - 2020 |
|
|
BlockChain, Ethereum, Cyber Security |
|
|
|
Abstract
Project DEVA leverages the open-source Ethereum blockchain to propose a design for a new electronic voting system that could be used in local or national elections, as well as in voting schemes at any level. The blockchain-based system will be secure, reliable, and anonymous, and will help increase both the number of voters and people's trust in their governments.
Project DEVA has been designed to adhere to the fundamental e-voting properties, namely verifiability, authentication, anonymity, and privacy, and to implement a secure and safe electoral voting process.
#12 |
2019 - 2020 |
|
|
Ontology, Semantic Web, NLP |
RDF, RDFs, OWL, SWRL Rules, Protégé |
|
|
Abstract
Levin verb ontology (LVO) contains explicit syntactic and semantic information for classes of verbs as defined by Beth Levin. Levin starts with the hypothesis that a verb’s meaning influences its syntactic behavior and develops it into a powerful tool for studying the English verb lexicon.
Levin’s verb classes are based on the ability of a verb to occur, or not occur, in pairs of syntactic frames that are in some sense meaning-preserving (diathesis alternations). Levin’s verb inventory covers 3,024 verbs. Although only 784 verbs are polysemous, the total frequency of polysemous verbs in the British National Corpus (BNC) is comparable to the total frequency of monosemous verbs (48.4% : 51.6%).
The focus is on verbs for which distribution of syntactic frames is a useful indicator of class membership, and, correspondingly, on classes which are relevant for such verbs. By using Levin’s classification, we obtain a window on some (but not all) of the potentially useful semantic propert
#11 |
2018 - 2019 |
Funded By: Air University (PKR. 150,000)
|
|
Virtual Reality, 360 Experience |
3D Vista, Adobe Photoshop, CorelDraw |
|
|
Abstract
The aim is to promote Air University nationally and internationally, to provide a 360-degree experience to prospective students, and to provide a virtual tour for parents.
#10 |
2018 - 2019 |
Best Project Award - ICTMPCL 2019
|
|
Semantic Web, Linked Open Data, Faceted Search, Bio2RDF |
Microsoft .NET Framework, Android Studio, RDF/RDFa, SPARQL, Python, NLP, Bio2RDF |
|
|
Abstract
Biomedical information is growing at an unbelievable pace and requires considerable aptitude to organize in a way that makes it effortlessly accessible, open, interoperable, and reusable. In the pharmaceutical field, because of the enormous number and extensive variety of medications available, an effective semantic retrieval framework can bring great convenience to doctors and pharmacists. PHASSE is a semantic, AI-based, user-friendly interface for querying Bio2RDF linked open data using natural language processing.
#9 |
2018 - 2019 |
|
|
Gaming, Machine Learning, Deep Learning, TensorFlow |
Unity3D, Tensor-Flow Hub |
|
|
Abstract
Machine Learning is changing the way we expect to get intelligent behaviour out of autonomous agents. Whereas in the past the behaviour was coded by hand, it is increasingly taught to the agent (either a robot or virtual avatar) through interaction in a training environment. This method is used to learn behaviour for everything from industrial robots, drones, and autonomous vehicles, to game characters and opponents. The quality of this training environment is critical to the kinds of behaviours that can be learned, and there are often trade-offs of one kind or another that need to be made. The typical scenario for training agents in virtual environments is to have a single environment and agent which are tightly coupled.
In this FYP, using Unity, we want to design a system that provides greater flexibility and ease of use to the growing groups interested in applying machine learning to developing intelligent agents. Moreover, we want to do this while taking advantage of the high-qu
#8 |
2018 - 2019 |
|
|
Semantic Web, Ontology, LATEX |
Microsoft .NET MVC, Microsoft SQL Server, RDF/RDFa, SPARQL, GraphViz, OWL |
|
|
Abstract
Semantic Authoring and Publishing is an ecosystem to alleviate the information overload problem. Our solution relies on enriching scientific publications with explicit rhetorical and argumentation discourse structures, using the Citations Context and Reasons Ontology - CCRO (I. Ihsan & M. A. Qadir, 2018), by identifying and classifying citation texts in LaTeX files. Embedding these structures within an RDF data store enables the creation of semantic publications and visualizations, and becomes a foundation artifact for the Semantic Publishing Ecosystem and the linked resources that are part of the current Web of Data.
#7 |
2018 - 2019 |
|
|
Scientometrics, Data Science, NLP |
Microsoft .NET Framework, Microsoft SQL Server, Python NLTK, SPYDER, D3 JS Visualization |
|
|
Abstract
Scientific research is being carried out across the globe in numerous fields. Evaluation of scientific research in scholarly big data relies on bibliometric indicators, i.e., citations. For these indicators to work, the research has to be published and indexed. Conferences and journals have no such bibliometric indicators to measure the quality of a submitted research paper; therefore, there is a need for an analytic system that can easily and quickly measure the quality of an unpublished research paper before it enters the review process. This research paper defines an analytic scheme that measures the quality of a research paper by measuring the quality of the references used within the paper. References are a list of sources representing the best documents selected by the author to lay out the foundation of his/her research paper. Thus, an initial check on the quality of the references selected by the author can provide a valid indication about the submitted paper. The
#6 |
2018 - 2019 |
|
|
eLearning, Collaborative Learning |
Microsoft .NET MVC, Microsoft SQL Server, Node JS, HTML5, Web Sockets, WebRTC |
|
|
Abstract
An e-learning platform that enables teachers to teach with real-time communication and assists students in their collaborative learning.
#5 |
2017 - 2018 |
Microsoft Imagine Cup 2018 - Regional Winner
|
|
AI, Machine Learning, Bio Tech, Wearable |
Microsoft .NET Framework, ASP.NET MVC, ARDUINO, Bayesian Network, SQL Server, Microsoft Azure |
|
|
Abstract
Health Hub is an artificial intelligence-based network whose primary objective is to monitor and predict heart attacks well before they happen. It provides a platform where doctors can monitor their patients in real time from all around the world and stay informed about them through health summaries and alerts in case of any abnormality. It also gives users a platform for an instant health check-up anywhere and anytime, with a single click.
#4 |
2016 - 2017 |
First Prize Winner - AirTech 2016
|
|
Semantic Web, NLP, Sentiment Analysis |
Microsoft .NET Framework, C#, Python, GraphViz, TikkaDotNet, Stanford Core NLP, Lemma Sharp, NLTK, Metro Framework |
|
|
Abstract
CSAT is a Citation Sentiment Analysis Tool that takes a two-way path to find the sentiment polarity of a citation. In one direction it uses standard n-gram sentiment analysis algorithms along with the Porter Stemmer and TextBlob. In the second direction it uses an ontology-based approach: by extracting verbs from a citation after applying stemming, lemmatization, and a POS tagger, the system creates a weighted match with the CCRO ontology to determine the citation's sentiment polarity. CSAT uses the ACL Anthology Network (AAN) citation dataset to create a visualization of both techniques using an RDF-based citation graph.
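The two-way idea can be sketched in a few lines. The tiny lexicons below are invented stand-ins: CSAT's first direction uses real n-gram models with the Porter Stemmer and TextBlob, and its second matches verbs against the actual CCRO ontology.

```python
import re

# Illustrative lexicons (invented), standing in for the real models.
POLARITY = {"outperforms": 1, "improves": 1, "fails": -1, "ignores": -1}
CCRO_LIKE = {"extends": "positive", "contradicts": "negative"}

def ngram_polarity(citation: str) -> int:
    # Direction 1: sum word-level polarities (stand-in for n-gram scoring).
    return sum(POLARITY.get(w, 0) for w in re.findall(r"[a-z]+", citation.lower()))

def verb_class(citation: str) -> str:
    # Direction 2: match verbs against an ontology-like verb mapping.
    for w in re.findall(r"[a-z]+", citation.lower()):
        if w in CCRO_LIKE:
            return CCRO_LIKE[w]
    return "neutral"

s = "This work extends [5] and improves its accuracy."
both = (ngram_polarity(s), verb_class(s))
```

When the two directions agree, confidence in the assigned polarity is higher; disagreements are the interesting cases the visualization surfaces.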
#3 |
2016 - 2017 |
|
|
eLearning, Semantic Web, Noorani Qaida |
ASP.NET, C#, SPARQL, RDF/RDFs, dotNetRDF, QuickGraph, Graphviz, HTML5, CSS, Bootstrap, JQuery |
|
|
Abstract
The desire to recite the Quran will continue to exist for generations to come. The Nourani Qaida plays a pivotal role in laying the foundation for proper Tajweed. Teaching Tajweed is a manual process performed by a Qari. Students come from different backgrounds and have different learning curves, and an individual's progress can be monitored and tracked using computer-assisted learning. Teachers check each student on certain key points, such as the ability to recognize and pronounce Arabic characters, to distinguish between individual characters, and to realize the correct origin of each sound.
SABAQ is a semantic-based e-learning and experience-tracking application that a teacher can use to provide a systematic view of the Nourani Qaida for e-learning while tracking each student against the activities within it. Using semantic technologies, SABAQ creates an RDF-based graph for each student that can be visualized to make intelligent decisions about a particular student and their learning curve.
#2 |
2015 - 2016 |
|
|
Citation, Data Mining, Web Services |
ASP.NET MVC, C#, HTML 5, Bootstrap, JavaScript, Microsoft SQL Server |
|
|
Abstract
A web-based application that helps researchers create and manage their publications with ease, plus a plugin that allows researchers to integrate their publications into their respective websites, saving time and money. OBE searches for and inserts publications both automatically, using DOIs and BibTeX files, and manually.
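The automatic-import step hinges on parsing BibTeX records. The project is ASP.NET/C#, but the idea can be sketched in Python; this minimal regex extractor handles only simple `field = {value}` entries (a real system would use a full BibTeX parser), and the entry below is invented.

```python
import re

ENTRY = """@article{doe2018,
  title  = {An Example Title},
  author = {Doe, Jane},
  year   = {2018}
}"""

def parse_bibtex(entry: str) -> dict:
    # Entry type and citation key from the header, e.g. "@article{doe2018,".
    kind, key = re.match(r"@(\w+)\{([^,]+),", entry).groups()
    # Simple `field = {value}` pairs (no nested braces handled).
    fields = dict(re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", entry))
    return {"type": kind, "key": key, **fields}

record = parse_bibtex(ENTRY)
```

Each parsed record maps directly onto a row in the publications table, after which the plugin can render it on the researcher's website.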
#1 |
2014 - 2015 |
|
|
eLearning, MOOCs, Semantic Web |
Microsoft .NET Framework, Semantic UI, HTML 5, CSS 3, JavaScript |
|
|
Abstract
Microsoft PowerPoint is the most popular e-learning content development tool. The effectiveness and ease of use of PowerPoint, together with third-party tools, seem to make developing e-learning content with PowerPoint workable, but PowerPoint cannot be considered a premier e-learning tool. The reason lies in PowerPoint's drawbacks in terms of file size, interoperability, and openness. This research describes the architecture of a new e-learning content development tool that not only has the look and feel of PowerPoint but also overcomes its drawbacks. We call this tool S-Point, a semantic-based e-learning content development tool based on the Open Reusability Benchmark. With the help of this tool, teachers can develop learning objects that are more semantically meaningful and semantically searchable.