Call for Students for Working in AI-based Projects

The Amity Centre for Artificial Intelligence (E3G – R.N.16) invites students of AUUP across all domains to join AI-based research projects.

Highlights

  • World-Class Computing Infrastructure: The Centre is powered by one of the world's best supercomputing facilities, including NVIDIA DGX-2 and A100 systems with 10 petaFLOPS of computing power.
  • Diverse Project Portfolio: A list of 100+ AI-based projects is available, with applications in various domains.
  • Open to All AUUP Institutions: Students from various institutions of Amity University, Uttar Pradesh can apply to join the research projects.
  • Limited Seats Available: Early application is encouraged.

Requirements

  • Knowledge of Python coding is preferred.
  • Once a project is allocated, a mentor will be assigned.

Benefits and Outcomes

  • Academic Integration: Students can choose these projects as their Minor Project / Major Project / Dissertation.
  • Research Opportunities: All projects are designed so that students can publish research papers or file patent applications.
  • International Exposure: Opportunities to collaborate with top international scientists.
  • Career Advancement: These projects will add great value for placements and university admissions in India and abroad.

How to Apply

For more information, contact:

  • Email: ai@amity.edu
  • Meet the Director, Centre for Artificial Intelligence, in Room No. E3G-12B

AI-based Projects for B.Tech / M.Tech Project Work / Dissertations.

The Amity Centre for Artificial Intelligence (Location: E3G, R.N. 16) invites students of Amity across all domains to join AI-based projects.

Selected Research Problems: Set-2

  1. Fine-tuning Language Models for Chatbot Development for Engineering Colleges Across India This proposal outlines the development of a chatbot tailored for engineering colleges across India by fine-tuning a pre-existing language model using a domain-specific dataset. By leveraging several gigabytes of text data sourced from educational and technical domains, this intelligent conversational agent aims to assist students, faculty, and staff in addressing academic queries and navigating resources. The project focuses on optimizing the chatbot's relevance and responsiveness to enhance the support systems available within the academic environment.
  2. Development of a State-of-the-Art Automatic Fact-Checking System for India This proposal outlines the development of a state-of-the-art automatic fact-checking system tailored for the Indian context. By scraping and compiling data from diverse credible sources, we aim to train and evaluate a fact-checking model that leverages advanced machine learning techniques. This system will analyze claims and verify their accuracy against a curated dataset, addressing the critical challenge of misinformation and disinformation in India.
  3. Development of Mini Language Models for Low-Resource Languages: Focus on Nepali This proposal outlines the development of compact (mini) language models tailored for the Nepali language, a low-resource language. By leveraging transfer learning, multilingual embeddings, and curated datasets, we aim to design, train, and deploy efficient models that address the unique challenges of Nepali morphology and syntax. This initiative will empower NLP applications such as translation, summarization, and sentiment analysis, advancing linguistic inclusivity and technology accessibility for Nepali speakers.
  4. Mental Health Analysis. This project focuses on developing systems to detect signs of mental health conditions (e.g., depression, anxiety) from textual inputs such as social media posts or chat transcripts. By employing a range of classification algorithms and text vectorization techniques, it aims to develop advanced deep-learning models that effectively identify addiction, alcoholism, anxiety, depression, and suicidal thoughts within text-based discussions (a text-classification sketch follows this list).
  5. Sentiment Analysis in Social Media Visual Question Answering (VQA). Sentiment-aware Visual Question Answering combines sentiment analysis and visual understanding, aiming to answer questions about images while considering emotional and affective cues. The project focuses on developing an advanced deep learning-based model to analyse sentiment in social media posts by generating emotion-driven answers to questions about images. This enables nuanced emotion recognition in images and text, leading to more accurate and empathetic answers.
  6. Dense Radiology Report Generation Framework for Medical Images. Medical image captioning is a challenging yet impactful task in healthcare and artificial intelligence. Generating long, coherent reports that highlight the correct abnormalities is difficult; in this direction, the project focuses on developing a deep-learning-based radiology report generation framework. The proposed framework extracts intricate features from medical images, enabling precise and detailed reports that support accurate diagnoses.
  7. Context-Aware Multimodal Video Captioning with Temporal Dynamics. Video captioning is a rich research area, blending computer vision and natural language processing (NLP) to describe events in videos. This project focuses on developing a deep-learning model capable of generating contextually rich, temporally coherent captions for videos by integrating visual, audio, and temporal cues. It combines video frames, audio features, and potential textual overlays (e.g., subtitles) to capture evolving actions, events, and transitions in the video and generate coherent captions.
  8. Deep learning approach for change detection of infrastructure and its prediction using satellite images. This project focuses on detecting changes in infrastructure over time from multi-temporal satellite imagery using transformer models. The goal is to develop a transformer-based model that can automatically extract features, identify structural modifications, and assess urban development. The project may integrate fusion techniques and multimodal datasets to explore different urban feature characteristics. Additionally, predictive models can be integrated to forecast future infrastructure changes based on historical data and spatial trends. The project aims to provide high accuracy in monitoring urban growth, disaster recovery, and sustainable infrastructure planning, making it invaluable for urban management and policy development.
  9. Designing a graph-based neural network to automate road extraction and maintain connectivity. This project will leverage a graph-based neural network (GNN) to automate road extraction from geospatial imagery and ensure road connectivity. The pipeline includes constructing a graph where nodes represent key features such as road intersections or segments and edges capture the spatial relationships between them. The project incorporates spatial and topological features, along with attention mechanisms, to handle noise and occlusions effectively. The expected outcome is a GNN that ensures both precise road segmentation and connectivity optimization, crucial for applications like urban planning, navigation systems, and disaster response (a graph-network sketch follows this list).
  10. Development of unsupervised domain adaptive deep learning models for aerial and satellite imagery. The project focuses on exploring adversarial learning and self-supervised learning techniques that can generalize across diverse domains of aerial and satellite imagery without requiring labeled data from the target domain. The model framework will be trained to adapt to variations in resolution, spectral properties, and geographical contexts. The framework also explores extracting invariant features from source and target domains, ensuring high performance on tasks such as image segmentation, object detection, and land cover classification. This project will particularly aid scenarios where labeled data for target domains is scarce or unavailable, thereby enhancing model scalability and usability across varied geographic regions.
  11. LLM Quantization for Efficient Deployment of Language Models: This project focuses on the quantization of large language models (LLMs) to make them more efficient for deployment on devices with limited resources, such as mobile phones and embedded systems. The goal is to explore various techniques of quantization (e.g., weight pruning, activation quantization) and evaluate the trade-off between model size, inference speed, and accuracy. Students will experiment with different quantization levels and investigate how these impact the performance of LLMs in various real-world applications like chatbots, sentiment analysis, and recommendation systems. The project can also include creating a small-scale application for testing the quantized model on a local device (a quantization sketch follows this list).
  12. Visual Question Answering (VQA) Using Deep Learning Models: The Visual Question Answering (VQA) project involves developing a deep learning system that can answer natural language questions related to images. The project will integrate computer vision and natural language processing (NLP) to understand the content of an image and generate appropriate answers to user queries. The students will work on designing and training a VQA model using popular architectures such as CNNs (for image feature extraction) and LSTMs or transformers (for question understanding). The dataset could include images with questions related to objects, relationships, and scene context. Additionally, students can experiment with improving the accuracy of the VQA model by fine-tuning it with various pre-trained models like BERT or GPT, and test it on multiple domains (e.g., general knowledge, medical images, or fashion). A VQA architecture sketch follows this list.
  13. Unusual Activity and Sound Detection for Surveillance in Defense: This project focuses on developing an intelligent surveillance system for defense applications that detects unusual activities and sounds using machine learning. Students will train models to analyze video footage and audio data from surveillance devices to identify suspicious behavior, such as unauthorized movement or combat-related sounds (e.g., gunshots, explosions). The project will involve using computer vision techniques (CNNs) for activity recognition and audio classification models (e.g., CNNs or LSTMs) to detect specific sounds. The models will be integrated to provide real-time alerts for security applications in defense and border surveillance.
  14. Advanced driver-assistance systems: (Lead from Amity Noida: Dr. Sneha Sharma, Other Lead: Prof Alfredo Rosado from University of Valencia, Spain) AI is revolutionizing driver behavior analysis and optimization by leveraging real-time data and machine learning algorithms. Advanced driver-assistance systems (ADAS) can monitor various factors such as speed, acceleration, braking, lane keeping, and proximity to other vehicles. By analyzing this data, AI can identify risky behaviors like harsh braking, aggressive acceleration, and distracted driving. This information can be used to provide personalized feedback to drivers, encourage safer habits, and even predict potential accidents before they occur. Ultimately, AI-powered solutions aim to improve road safety, reduce accidents, and promote fuel efficiency by fostering optimal driving behavior.
  15. Wearable sensor-based multimodal system for detection and feedback generation of physiotherapy exercises This study presents a wearable sensor-based multimodal system designed to enhance the accuracy and efficiency of physiotherapy exercises. The proposed system integrates multiple sensors, including accelerometers, gyroscopes, and electromyography (EMG) devices, to capture detailed motion and muscle activity data during exercises. Advanced signal processing and machine learning algorithms are employed to analyze the data, ensuring precise detection of exercise patterns and deviations. Additionally, real-time feedback generation provides users with corrective guidance, enabling proper execution of exercises and reducing the risk of injury. The system aims to assist both patients and physiotherapists by improving exercise adherence, monitoring progress, and delivering personalized rehabilitation solutions.
  16. Underwater Object Detection Using Deep Learning Techniques. The project focuses on developing a deep learning-based underwater object detection system to address challenges like poor visibility, light distortion, and noise in underwater environments. It aims to develop state-of-the-art deep learning models like YOLO or Faster R-CNN to detect and classify underwater objects accurately in real-time. The system will be designed for applications in marine research, submarine navigation, underwater exploration, and security.
  17. 3D Object Detection Using Deep Learning Techniques For Self-Driving Cars. This project aims to develop a 3D object detection system using deep learning techniques to enhance the perception capabilities of self-driving cars. By leveraging advanced models like PointNet++, PV RCNN, or 3D SSD, the system will accurately identify and localize objects such as vehicles, pedestrians, and obstacles in 3D space. It will process data from LiDAR, cameras, and radar sensors, combining them to create a comprehensive environmental map. The project focuses on optimizing detection for real-time performance, ensuring safety and efficiency in autonomous driving.
  18. Camouflage Object Detection Using Deep Learning Techniques For UAV data. This project focuses on developing a camouflage object detection system using deep learning techniques for UAV (Unmanned Aerial Vehicle) data. Camouflaged objects, such as wildlife, military equipment, or concealed threats, are difficult to detect due to their ability to blend into surroundings. The proposed system will use advanced deep learning models, like Mask R-CNN or Transformer-based architectures, combined with image enhancement techniques to improve detection accuracy in complex backgrounds.
  19. Decoding Cognitive Distortions: Advancing Mental Health Assessment This project explores innovative approaches to identify and analyze cognitive distortions using advanced artificial intelligence techniques. It aims to enhance the accuracy and efficiency of mental health diagnostics for better therapeutic outcomes.
  20. Empowering Women and Children: AI Solutions for Combating Harassment and Supporting Mental Health This project will leverage advanced deep-learning techniques to combat harassment and promote mental health among women and children. By utilizing data from wearable devices, it aims to develop an automated system for stress detection and emotional well-being support.
  21. Brain-Computer Interface (BCI) for Motor Imagery Tasks Using Deep Learning This project aims to advance Brain-Computer Interface (BCI) technology by leveraging deep learning to classify motor imagery patterns from EEG data. By training neural networks to recognize distinct EEG features associated with imagined movements (e.g., hand or foot movements), this project will enable accurate, real-time classification essential for BCI systems. The AI-driven approach enhances traditional methods by automating feature extraction and adaptation to unique EEG patterns, improving accuracy and reducing calibration time. This work holds promise for empowering motor-impaired individuals with control capabilities, fostering the development of assistive technologies.
  22. AI-Enhanced Epileptic Seizure Prediction Using Sequential Deep Learning Models This project seeks to develop an advanced AI framework for real-time seizure prediction, utilizing EEG data and deep learning architectures such as Long Short-Term Memory (LSTM) networks. The primary goal is to detect subtle, pre-seizure EEG patterns by training models to identify complex temporal dependencies, offering more accurate and timely warnings. By harnessing AI’s ability to handle high-dimensional, sequential data, this model aims to surpass traditional methods in prediction accuracy, ultimately providing epileptic patients with critical alerts. This innovation could transform epilepsy management, leading to safer, more independent living for patients (an LSTM sketch follows this list).
  23. Cross-Modal AI for Enhanced Stress Detection Using EEG and Physiological Data This project proposes an AI-driven, multimodal stress detection framework that integrates EEG with additional physiological signals (such as heart rate and skin conductance). Using a combination of neural networks for feature fusion, the system aims to capture a comprehensive understanding of stress biomarkers. By merging modalities, the AI framework will increase classification accuracy, offering a robust tool for real-time stress monitoring. This cross-modal approach has potential applications in wearable technology for workplace well-being, mental health assessment, and biofeedback-based stress management solutions, providing a holistic view of stress in naturalistic settings.
  24. AI and Wearable-Based Stress Recognition This project will utilize a deep learning approach to develop a system for stress recognition incorporating data from wearable devices. The models will utilize advanced neural networks to analyze physiological data, such as skin conductance, and body temperature, to accurately recognize stress and enhance mental well-being. This AI-driven solution aims to improve the accuracy and reliability of stress recognition, reduce the dependence on subjective self-reports, and offer a continuous, automated approach for stress recognition.
  25. Explainable Affective State Recognition with AI This project proposes to develop an AI-driven system for recognising affective states, including stress and amusement, with a significant emphasis on explainability. The project uses explainable AI techniques to enhance the transparency and interpretability of the system's predictions. This will assist users and healthcare experts in understanding the factors influencing each prediction, hence augmenting trust in AI decisions. This model aims to develop a system that accurately recognizes affective states and explains the physiological factors that drive them.
  26. RAGNet: Advancing RAG-based Conversational AI. This project is about reinventing conversational AI systems through the integration of advanced Retrieval-Augmented Generation (RAG) question-answering capabilities with modern language models like GPT (Generative Pre-trained Transformer). With the help of such generative models, RAGNet will ensure dialogue responses that are coherent and nuanced. In the initial phase, the focus will be on refining the retrieval and generation components to improve conversational AI’s natural language understanding and performance in the healthcare and legal-financial services domains (a retrieval-and-prompting sketch follows this list).
  27. DiffusionNet: Semantic Image Segmentation with Diffusion Models This project will explore semantic image segmentation through the adoption of cutting-edge diffusion models. By employing diffusion-based algorithms, the system will seamlessly propagate information across image pixels, enabling the creation of precise segmentation masks. Utilizing self-attention mechanisms and multi-scale processing capabilities, DiffusionNet can achieve good accuracy and efficiency in segmenting objects and semantic regions within complex images. This project will explore diverse architectures and training methodologies to optimize the performance and efficacy of diffusion-based segmentation networks.
  28. DeepMediShield : DeepFakes Detection in Medical Informatics The objective of the project is to address the emerging threat of DeepFakes in medical informatics by developing robust detection and mitigation strategies. Through the utilization of advanced deep learning techniques and image forensics algorithms, the system will identify and flag manipulated medical images and videos. By raising awareness and fostering resilience against DeepFakes, the project will aim to safeguard the integrity and trustworthiness of medical data and imagery in clinical settings.
  29. Artificial Intelligence -driven Smart Multi-Functional Blind Assistive Device [AI+Hardware] This project will aim to enhance accessibility through AI-driven advancements in blind assistive devices. By integrating state-of-the-art computer vision and natural language processing techniques, the system will provide real-time assistance to visually impaired individuals. Advanced object recognition and spatial awareness algorithms will empower the device to offer enhanced navigation and interaction capabilities, fostering greater independence and autonomy for visually impaired users.
  30. Smart Wearable: Real-Time Patient Monitoring with Generative AI in Medical Informatics [AI+Hardware] The primary goal of the project is the development of a smart wearable device for real-time patient monitoring using generative AI in medical informatics. Using the power of deep learning algorithms, the wearable will continuously analyze physiological data streams to detect anomalies and predict potential health risks. By incorporating generative AI models, the device will generate personalized health insights and recommendations, enabling proactive healthcare interventions and improving patient outcomes.
  31. A GPT-powered speech therapy device for rehabilitation of post-stroke patients. [AI+Hardware] The GPT-powered speech therapy device can emerge as an effective tool for aiding stroke survivors in their journey to regain communication skills. Utilizing advanced natural language processing capabilities, this device will harness the power of AI to create personalized therapy sessions, tailored to the unique challenges faced by each patient. It will be designed to provide interactive, engaging exercises which will not only facilitate the improvement of speech and cognitive functions but will also offer emotional support, adapting to the patient's progress. This AI companion will be engineered to work alongside healthcare professionals, offering a flexible and accessible approach to speech therapy that will empower patients in the comfort of their own homes, ensuring continuity and consistency in their rehabilitation efforts.
  32. AI-powered real-time haptic feedback system to ensure the correctness of the workout sessions. [AI+Hardware] The AI-powered real-time haptic feedback system will be engineered to maximize the efficiency of workout sessions. By integrating artificial intelligence with tactile response technology, this innovative system will offer immediate corrections on posture and form, ensuring exercises are performed with precision. It can act as a virtual coach, not just guiding users through their routines but also minimizing the risk of injury due to improper technique. This intelligent fitness companion will adapt in real-time, providing personalized adjustments that are felt rather than seen, catering to the nuances of individual body mechanics. Whether in a high-powered gym session or a focused home workout, this system will promise to enhance the effectiveness of physical training by seamlessly blending technology with human movement.
  33. BlindVision: Enabling Blind Users to Interpret Scenes using Large Language Models BlindVision will aim to design a system to empower individuals with visual impairment through real-time scene interpretation. Using state-of-the-art language models, BlindVision will provide users with comprehensive verbal descriptions of their surroundings, enabling independent navigation and interaction. This innovative solution will aim to enhance spatial awareness and foster autonomy for the visually impaired, ultimately improving their quality of life and facilitating greater inclusion in society. Through rigorous development and testing, BlindVision will emerge as a promising tool for transforming the everyday experiences of individuals with visual impairment.
  34. Development of a Web Application for Mental Health Counselling using Large Language Models This project will focus on the creation of a web application tailored for mental health counseling, harnessing the capabilities of open-source Large Language Models (LLMs). The primary goal is to develop a user-friendly platform that will utilize the advanced natural language processing abilities of LLMs to offer effective and accessible mental health support to individuals seeking assistance. Through the integration of cutting-edge technology and empathetic design, the web application will aim to provide a safe and supportive environment for users to express their thoughts and emotions while receiving personalized guidance and counseling. This innovative approach has the potential to significantly enhance the accessibility and effectiveness of mental health services, ultimately contributing to improved well-being and resilience in the community.
  35. DiseaseGPT: Explaining Symptoms of Diseases by Deploying Web Apps with Large Language Models This project will leverage the capabilities of Large Language Models (LLMs) to elaborate on symptoms associated with a wide range of diseases. The primary objective of the project is to develop a sophisticated system capable of providing accurate and comprehensive explanations of symptoms, facilitating improved understanding and communication between healthcare professionals and patients. By harnessing the power of LLMs, DiseaseGPT seeks to address the complexity and variability inherent in disease presentations, offering personalized insights tailored to individual cases. Through rigorous development and validation, DiseaseGPT aims to become a valuable tool for enhancing medical education, diagnostic accuracy, and patient care in both clinical and educational settings.
  36. SecureFace: Design a Tool to Prevent Adversarial Attacks on Face Authentication Systems SecureFace is aimed at fortifying face authentication systems against adversarial attacks, a critical security concern in the realm of deep learning. Adversarial attacks exploit vulnerabilities in deep neural networks by introducing imperceptible perturbations to input data, aiming to deceive the authentication system. This extends to the domain of face authentication, where adversaries manipulate facial images to gain unauthorized access. Utilizing advanced techniques such as adversarial training, SecureFace will systematically identify and neutralize these vulnerabilities, offering proactive defense measures to safeguard sensitive systems and data. By enhancing the robustness of face authentication mechanisms against adversarial threats, SecureFace will significantly contribute to bolstering security and trust in these systems across various applications.
  37. AI-SurveillanceNet: Intelligent Occlusion Handling in Video Surveillance This project will use advanced AI techniques to develop a robust and intelligent video surveillance system for vehicle tracking in the presence of occlusions. Utilizing deep learning methods, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), AI-SurveillanceNet aims to improve the accuracy and efficiency of occlusion handling. The system will integrate real-time object detection and tracking algorithms to retain features during occlusions and ensure continuous tracking. By employing self-attention mechanisms and predictive modeling, the project will enhance the ability to anticipate and manage complete occlusions. AI-SurveillanceNet will be tested in diverse real-world scenarios to validate its performance under various noise levels and illumination conditions, ultimately aiming to provide a more reliable and computationally efficient solution for smart video surveillance systems.
  38. PainBuddy: AI-Driven Multimodal Pain Management App This project aims to develop PainBuddy, a multimodal app designed to enhance pain management through advanced AI techniques. In the first stage, users will input various pain parameters through a user-friendly interface, which the AI/ML model will process to recommend optimal diagnostic tests, such as MRI or CT scans, based on the collected data. In the second stage, the app will integrate advanced AI/DL models to analyze image data (e.g., MRI, CT scans) and diagnose possible categories of abnormalities. Additionally, PainBuddy will feature a Retrieval-Augmented Generation (RAG) model integrated with a Large Language Model (LLM) trained on pain-related literature, enabling users to interact with the app through natural language to receive diagnostic suggestions and other pain management advice.
  39. BioMarkerAI: Predictive Modeling with Multi-Omics Biomarker Data This project will focus on developing advanced predictive models for disease prognosis using multi-omics biomarker data. By integrating genomic, proteomic, and metabolomic data, BioMarkerAI aims to create comprehensive models that can predict disease outcomes with high accuracy. Utilizing cutting-edge machine learning techniques, such as ensemble learning and deep neural networks, the system will identify key biomarkers and their interactions, providing insights into disease mechanisms and potential therapeutic targets. The project will also explore the implementation of interpretability methods to enhance the clinical relevance and trustworthiness of the predictive models.
  40. MedImageNet: Enhanced Diagnostic Imaging with AI MedImageNet aims to revolutionize medical imaging diagnostics through the application of advanced artificial intelligence techniques. By utilising convolutional neural networks (CNNs) and transformer architectures, the project will develop models capable of accurately detecting and classifying various medical conditions from imaging data such as MRI, CT scans, and X-rays. The system will integrate multi-scale feature extraction and attention mechanisms to improve diagnostic precision and speed. This project will explore deep learning and data augmentation strategies to enhance model robustness and generalizability, ultimately aiming to assist radiologists in making more informed and timely diagnoses.
  41. DeepLandNet: Advanced Deep Learning for Accurate Land Cover Classification. This project will focus on developing a deep learning-based system for land cover classification, aiming to accurately identify and categorize different types of land covers such as urban areas, agricultural fields, rangelands, forests, and water bodies. Utilizing high-resolution satellite imagery and advanced convolutional neural networks (CNNs), the system will be trained to recognize and differentiate various land cover types with high precision. The proposed approach will incorporate multi-scale feature extraction and attention mechanisms to enhance the model's ability to capture spatial hierarchies and intricate patterns in the data. By implementing and comparing various deep learning architectures, this project seeks to improve the accuracy and efficiency of land cover classification, providing valuable insights for urban planning, environmental monitoring, and resource management.
  42. AI-powered Forest Fire Detection and Prediction using Satellite Imagery This project aims to harness the power of artificial intelligence to enhance the detection and prediction of forest fires using satellite imagery. By integrating advanced deep learning algorithms and computer vision techniques, the system will analyse vast amounts of satellite data to identify early signs of forest fires, such as smoke and thermal anomalies. The AI model will be trained to recognize patterns and predict potential fire outbreaks, enabling proactive measures to prevent large-scale destruction. This project will focus on developing a robust and efficient AI-based monitoring system that can provide real-time alerts and accurate predictions, ultimately helping to mitigate the devastating impact of forest fires on ecosystems and communities.
  43. AI-driven Quality Control and Defect Detection in Manufacturing of Printed Circuit Boards for Improved Product Quality. The objective of this project is to enhance the quality control processes in the manufacturing of printed circuit boards (PCBs) through the implementation of AI-driven defect detection systems. Utilizing state-of-the-art machine learning algorithms and computer vision technology, the system will automatically inspect PCBs for defects such as soldering errors, component misplacements, and surface anomalies. By integrating AI into the quality control workflow, the project aims to achieve higher accuracy and efficiency in defect detection, reducing the need for manual inspection and minimizing the risk of faulty products. This innovative approach will lead to improved product quality, reduced production costs, and increased customer satisfaction in the electronics manufacturing industry.
  44. AI-powered Solar Panel Condition Monitoring and Cleaning Optimization. This project aims to develop an AI-based system for monitoring the condition of solar panels and optimizing cleaning schedules to enhance their efficiency and longevity. By utilizing a comprehensive dataset containing images of solar panels with various conditions such as clean, dusty, bird drops, electrical damage, physical damage, and snow-covered surfaces, the project will employ machine learning classifiers to accurately detect and categorize the state of solar panels.
  45. MedAI Assist: AI-powered Medical Information and Query Response System Using the open-source LLM model Llama2, this project aims to develop an advanced AI-driven tool that provides accurate and reliable medical information by answering user queries. Leveraging state-of-the-art language models and vector stores, this system will revolutionize the way patients, healthcare professionals, and researchers access medical knowledge.
  46. AIPMS: Design and Development of an AI-enabled Physiotherapy Monitoring System In this project, the students will collect data from wearable sensors and video cameras during physiotherapy exercises. Tasks such as identifying the type of exercise, measuring its duration, counting the number of repetitions, and comparing the extent of limb motion with previously performed exercises will be carried out using multi-modal deep learning architectures. Multi-head deep convolutional neural networks with attention will be designed, and their performance will be evaluated deterministically.
  47. CUDeepNet: Deep Learning framework for Cricket Umpire Gesture Recognition In this project, the students will work on data from wearable sensors and video cameras capturing cricket umpire hand gestures. Tasks such as identifying the type of hand gesture will be carried out using multi-modal deep learning architectures consisting of convolutional neural networks with attention. A transfer learning approach will be utilized to improve model performance. Additionally, deep neural networks may be employed to compare a performed hand gesture with an accurately performed one, to design an AI-based cricket umpire training mechanism.
  48. Cancer Diagnosis/Screening/Detection Using Deep Learning In this project, deep learning-based methods will be used in the diagnosis of different types of cancer. By utilising advanced neural networks, deep learning models can analyze complex patterns in medical images with high precision and accuracy. These models are capable of detecting anomalies and distinguishing between benign and malignant tissues, facilitating early diagnosis and improving treatment outcomes. Deep learning-based methods offer the potential to enhance diagnostic processes, reduce human error, and provide reliable, automated cancer screening solutions, ultimately contributing to better patient care and management.
  49. AI-enabled Mobile Application Development for Identification of Different Species of Sugarcane Develop an AI-enabled mobile application that accurately identifies and distinguishes between a hundred different species of sugarcane based on various distinguishing features. Utilizing advanced image recognition and deep learning algorithms, the app can analyze leaf patterns, stem color, and texture, along with other morphological characteristics. This tool will provide users, such as farmers and botanists, with instant, reliable species identification, aiding in research, cultivation, and management of sugarcane varieties.
  50. Pathological Image Assessment using Artificial Intelligence The accurate and timely assessment of pathological images is essential for effective disease diagnosis and treatment planning. However, manual analysis of these images by human pathologists is time-consuming, subjective, and prone to inter-observer variability, leading to potential diagnostic errors and delays in patient care. To address these challenges, there is a pressing need to develop an AI-powered system capable of automating the analysis of pathological images, providing accurate and reliable diagnostic insights to healthcare professionals.
  51. Multi-Signal Diagnostic System: AI-Driven Screening Using Physiological Characteristics The objective of this project is to develop an advanced diagnostic and screening system utilizing multiple physiological signals such as EEG, EMG, ECG, and PCG. By integrating deep learning models, this system will analyze complex patterns within these signals to accurately diagnose various medical conditions. The project will focus on creating a robust framework that can handle the heterogeneity and variability of different physiological data, ensuring high accuracy and reliability in screening and diagnostics. This AI-driven approach aims to enhance early detection and improve patient outcomes by providing comprehensive and timely insights based on multi-signal analysis.
  52. AI-Powered Agricultural Diagnostic System: Multi-Characteristic and Image-Based Crop Disease Prediction The goal of this project is to develop an AI-driven diagnostic system for agriculture that explores multiple characteristics or image data to classify crops and predict diseases. By utilizing advanced deep learning techniques, the system will analyze diverse data inputs, including soil properties, weather conditions, and high-resolution images of crops. The project will focus on creating a comprehensive model that can accurately identify crop types and detect early signs of diseases, enabling timely interventions. This innovative approach aims to improve agricultural productivity and crop health by providing precise and actionable insights to farmers and agricultural experts.
  53. UrbanNet: Semantic Segmentation of Urban Scenes using High-Resolution Aerial Imagery The increasing use of autonomous drones for various applications, including surveillance, delivery, and inspection, necessitates advanced systems for safe and efficient navigation in urban environments. Traditional methods often struggle with the complexity and variability of urban landscapes, leading to potential safety hazards. This project, titled "UrbanNet," aims to address these challenges by leveraging high-resolution aerial imagery and advanced artificial intelligence models, including Generative Adversarial Networks (GANs) and attention mechanisms. The project will utilize the Semantic Drone Dataset, comprising high-resolution images to train and evaluate the proposed models. The ultimate goal is to develop a robust framework that significantly improves the semantic understanding capabilities of autonomous drones, thereby enhancing their safety and operational efficiency in urban environments.

  54. SatNet: Deep Learning based, Automated Airplane Detection using Satellite Imagery The "SatNet" project addresses the complex task of detecting airplanes in satellite images, which has significant applications such as monitoring airport activity, analyzing traffic patterns, and enhancing defense intelligence. The dataset comprises 32,000 20x20 RGB images, labeled as "plane" or "no-plane," extracted from PlanetScope imagery over airports. To solve this problem, we will develop and train deep learning models to accurately classify these images, leveraging the structured dataset and its detailed metadata. By automating airplane detection, SatNet aims to streamline the processing of satellite imagery, thereby enhancing operational efficiency and providing timely, actionable insights across various domains.
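
Illustrative code sketches for a few of the projects above are given below. Each is a minimal, hedged starting point under stated assumptions, not a prescribed implementation, and is written in Python as per the Requirements section.

For the mental health analysis project (item 4), the sketch below shows a classical text-vectorization baseline using scikit-learn; the example texts and the label set are hypothetical placeholders, and a fine-tuned deep model could later replace the linear classifier.

    # Minimal text-classification baseline for detecting mental-health signals in text (item 4).
    # The texts and labels below are hypothetical placeholders, not real data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    texts = [
        "I have not slept properly in weeks and nothing feels worth doing",
        "Had a great run this morning, feeling energetic and calm",
        "I keep worrying about everything and my heart races at night",
        "Looking forward to the weekend trip with friends",
    ]
    labels = ["depression", "none", "anxiety", "none"]   # hypothetical label set

    # TF-IDF vectorization feeding a linear classifier; a fine-tuned transformer can
    # later replace this pipeline once a labelled corpus is available.
    clf = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("model", LogisticRegression(max_iter=1000)),
    ])
    clf.fit(texts, labels)
    print(clf.predict(["I cannot stop worrying about everything lately"]))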
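
For the road-extraction project (item 9), the following sketch outlines a two-layer graph convolutional network over a toy graph of road segments, assuming PyTorch Geometric is installed; the node features, adjacency structure, and binary road/not-road labelling are illustrative assumptions.

    # Sketch of a graph neural network for road-segment classification and connectivity (item 9).
    # Assumes PyTorch Geometric (torch_geometric) is available; the toy graph is hypothetical.
    import torch
    from torch_geometric.nn import GCNConv

    class RoadGNN(torch.nn.Module):
        def __init__(self, in_dim=16, hidden=32, n_classes=2):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden)      # aggregate features from neighbouring segments
            self.conv2 = GCNConv(hidden, n_classes)   # per-node road / not-road logits

        def forward(self, x, edge_index):
            h = torch.relu(self.conv1(x, edge_index))
            return self.conv2(h, edge_index)

    # Toy graph: 4 candidate road segments (nodes) with image-derived features,
    # edges encoding spatial adjacency between segments.
    x = torch.randn(4, 16)
    edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                               [1, 0, 2, 1, 3, 2]], dtype=torch.long)
    model = RoadGNN()
    print(model(x, edge_index).shape)   # torch.Size([4, 2])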
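
For the LLM quantization project (item 11), the sketch below applies PyTorch post-training dynamic quantization to a small stand-in network built only from torch.nn layers; a real experiment would load an actual pre-trained language model, and the size comparison is meant only to show the workflow.

    # Post-training dynamic quantization sketch (item 11). A toy embedding + feed-forward
    # stack stands in for an LLM purely to demonstrate the quantization call.
    import io
    import torch
    import torch.nn as nn

    model = nn.Sequential(                      # hypothetical stand-in "language model"
        nn.Embedding(32000, 512),
        nn.Linear(512, 2048), nn.ReLU(),
        nn.Linear(2048, 512), nn.ReLU(),
        nn.Linear(512, 32000),
    )

    # Convert Linear layers to int8 kernels evaluated dynamically at inference time.
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

    def size_mb(m: nn.Module) -> float:
        buf = io.BytesIO()
        torch.save(m.state_dict(), buf)         # serialize weights to estimate storage size
        return buf.getbuffer().nbytes / 1e6

    print(f"fp32 model:     {size_mb(model):.1f} MB")
    print(f"int8 quantized: {size_mb(quantized):.1f} MB")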
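
For the VQA project (item 12), this sketch wires the CNN image encoder and the LSTM question encoder described above into a single PyTorch module with simple concatenation fusion; the vocabulary size, answer-set size, and random inputs are placeholders.

    # Architectural sketch of a CNN + LSTM Visual Question Answering model (item 12).
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    class SimpleVQA(nn.Module):
        def __init__(self, vocab_size=10000, embed_dim=300, hidden=512, n_answers=1000):
            super().__init__()
            cnn = resnet18(weights=None)                           # image feature extractor (no pretrained weights here)
            self.cnn = nn.Sequential(*list(cnn.children())[:-1])   # drop the classification head
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)   # question encoder
            self.classifier = nn.Sequential(
                nn.Linear(512 + hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, n_answers),                      # answer picked from a fixed answer set
            )

        def forward(self, image, question_tokens):
            img_feat = self.cnn(image).flatten(1)                  # (batch, 512)
            _, (h, _) = self.lstm(self.embed(question_tokens))
            fused = torch.cat([img_feat, h[-1]], dim=1)            # simple concatenation fusion
            return self.classifier(fused)

    model = SimpleVQA()
    logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 12)))
    print(logits.shape)   # torch.Size([2, 1000])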
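
For the seizure-prediction project (item 22), the sketch below defines a small LSTM sequence classifier over EEG windows; the channel count, window length, and synthetic batch are assumptions for illustration.

    # LSTM-based EEG window classifier for pre-seizure vs. normal states (item 22).
    import torch
    import torch.nn as nn

    class SeizureLSTM(nn.Module):
        def __init__(self, n_channels=23, hidden=64, n_layers=2):
            super().__init__()
            self.lstm = nn.LSTM(n_channels, hidden, n_layers, batch_first=True, dropout=0.2)
            self.head = nn.Linear(hidden, 2)        # pre-seizure vs. normal

        def forward(self, x):                        # x: (batch, time, channels)
            _, (h, _) = self.lstm(x)
            return self.head(h[-1])                  # classify from the final hidden state

    model = SeizureLSTM()
    eeg_windows = torch.randn(8, 256, 23)            # 8 windows of 256 samples, 23 EEG channels (assumed)
    print(model(eeg_windows).shape)                  # torch.Size([8, 2])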
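
For RAGNet (item 26), the sketch below shows the retrieval half of a retrieval-augmented pipeline: embed a tiny document store with sentence-transformers, select the passage closest to the query, and assemble a grounded prompt that a GPT-style generator would then complete. The documents, query, and embedding model name are assumptions.

    # Retrieval step of a minimal RAG pipeline (item 26); assumes sentence-transformers is installed.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    docs = [   # hypothetical document store
        "Patients with type 2 diabetes should monitor fasting glucose regularly.",
        "A power of attorney authorises one person to act on behalf of another.",
        "Regular aerobic exercise lowers resting blood pressure over time.",
    ]
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = encoder.encode(docs, normalize_embeddings=True)

    query = "How often should blood sugar be checked in type 2 diabetes?"
    q_vec = encoder.encode([query], normalize_embeddings=True)[0]

    best = int(np.argmax(doc_vecs @ q_vec))           # cosine similarity via dot product
    prompt = f"Context: {docs[best]}\n\nQuestion: {query}\nAnswer:"
    print(prompt)                                      # this prompt is then passed to a GPT-style generator
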
SNo. Subject
1 Artificial Intelligence based diagnosis of Cardiac Health using Machine Learning and Deep Learning Methods.
3 Artificial Intelligence based Brain-Computer Interface for Diagnosis of Alzheimer's Disease.
4 To develop AI models based on DL and ML for Brain-Computer Interface for Diagnosis of Epilepsy Disease.
5 AI Based Virtual Gaming Application
6 Artificial Intelligence-based Brain-Computer Interface for Diagnosis of Parkinson's Disease using Machine Learning and Deep Learning Methods.
7 Automated Plant Leaf Disease Identification Using Artificial Intelligence using Machine Learning and Deep Learning Methods.
8 Generation of Super-Resolved Aerial Images for Military Applications
9 Deep Learning on Edge device with TinyML
10 Artificial Intelligence based Fake Currency Detection.
11 To develop AI Models to detect Skin Disease using DL and ML.
12 Artificial Intelligence-enabled Wearable Helmet for Driver Safety and Security
13 Medical Imaging: Breast Cancer identification from USG images using AI models.
14 Artificial Intelligence for automatic Tomato/Bajra/Cotton Leaf Disease Detection (Viral / Fungal / Bacterial diseases).
15 Design methods of Fake Image Generation using Generative Artificial Intelligence
16 Enabling Profound Emotional Insight using AI based Enhanced Deep Learning for Emotion Classification from EEG Brain Signals
17 Deep Learning models for ECG based Heartbeat Analysis - low frequency time series signal analysis using AI models.
18 Enhancing Bladder Cancer Management through Deep Learning Techniques for Predicting Recurrence and Optimizing Treatment Strategies
19 Metamorphic Insights into Sleep Disorder Forecasting and Categorization using Deep Learning based AI Model
20 Acute Myeloid Leukemia Subtype Classification through Deep Learning and Single-Cell Images
21 Unveiling Novel Deep Learning Approach for Accurate Lung Cancer Subtype Classification
22 Emerging Insights in Panic Disorder: A Sophisticated AI Framework for Enhanced Classification and Prognostication
23 Innovative Artificial Intelligence powered Approach to Classify Erythemato-Squamous Disease
24 Revolutionizing Lending Decisions by Unveiling an AI-Powered Paradigm for Loan Approval Prediction
25 Harnessing Deep Learning AI for Brain Tumor Classification using MRI Images
26 Facial Expression Recognition of Pets using Deep Learning Based AI Model
27 Leveraging Deep Learning Based AI Model for Forecasting Agricultural CO2 emissions
28 Elevating Industry Excellence by Pioneering Deep Learning based AI Model for Precise Induction Motor Fault Classification
29 Innovative AI Paradigm for Pioneering Kidney Disease Classification through Advanced Deep Learning Model
30 Cutting-Edge Deep Learning Approach for Precise Detection and Classification of Oral Cancer
31 Empowering Experts with Deep Learning for Mango Leaf Disease Detection and Classification
32 Advancing Jackfruit Leaf Disease Detection and Classification through Deep Learning based AI Model
33 Deep Learning based AI Model for Rose Leaf Disease Identification and Classification
34 Advanced Deep Learning based AI Model for Precise Soya Bean Disease Detection
35 A Sophisticated Deep Learning based AI Framework for Enhanced Road Detection Leveraging Remote Sensing Technology
36 Innovative Deep Learning based AI-driven Framework for Enhanced Lung Cancer Detection and Categorization
37 Cutting-edge AI Approach for Accurate Skin Cancer Classification Utilizing Deep Learning Models
38 Unveiling a Revolutionary Deep Learning Approach for Precise Breast Cancer Identification and Classification
39 Advanced Multi-Class Sports Image Classification Utilizing State-of-the-Art Deep Learning AI Models
40 Utilizing Advanced Deep Learning Techniques for Taxonomic Classification of Avian Species
41 Utilizing Deep Learning based AI Model for Enhanced Military Aircraft Detection and Identification
42 Taxonomic Categorization of Butterfly Varieties through Advanced Deep Learning Techniques
43 Advancing Scientific Inquiry by Employing Deep Learning for Accurate Identification and Classification of Human Activities
44 Pioneering Pneumonia Detection and Classification Utilizing Advanced Deep Learning AI Framework
45 Deep Learning based AI Approach for the Precise Identification and Classification of Mpox Disease
46 Enhancing Financial Predictions by Leveraging Deep Learning Techniques for Google Stock Price Projection
47 Automated Tuberculosis Detection from Chest X-rays using Deep Learning based AI Model
48 A Deep Learning based AI Approach to Human Face Emotion Classification
49 Driver Drowsiness Detection using Deep Learning based AI Model
50 A Novel Deep Learning Approach for Grape Disease Identification and Classification
51 Identification and Classification of Nitrogen Deficiency in Rice Crop using Deep Learning based AI Model
52 A Deep Learning Model for the Identification and Classification of Pseudopapilledema
53 Cardiomegaly Disease Identification with Deep Learning-based Artificial Intelligence
54 A Deep Learning AI Model for Efficient and Accurate Biodegradable and Non-Biodegradable Material Classification
55 Utilizing Deep Learning AI Model for Accurate Identification and Classification of Glaucoma
56 Deep Learning based Artificial Intelligence Model for Precise Categorization of Bone Marrow Cells
57 Application of Deep Learning AI Model for the Identification of Medicinal Plant Leaves
58 Categorization of Oral Conditions through an AI Model Grounded in Deep Learning
59 Deep Learning-Enabled AI Model for Precise Segmentation of Dental X-rays
60 Categorization of Cardiac Rhythm Sounds through an AI Model Grounded in Deep Learning
61 Application of Advanced Deep Learning based AI Techniques for Satellite Image Classification
62 Forecasting Diabetes through a State-of-the-Art Artificial Intelligence Model Driven by Deep Learning
63 Image based Classification of Hazardous Agricultural Insects through an AI Model Driven by Deep Learning
64 Enhancing Elderly Activity Detection and Recognition through a Deep Learning-Based Artificial Intelligence Framework
65 Application of Deep Learning AI Models for the Identification and Categorization of Plant Pathogens
66 Contemporary Categorization of Fruits and Vegetables as Fresh or Spoiled via an AI Model Built on Deep Learning
67 Image based Categorization of Marine Fauna through an AI Model Driven by Deep Learning
68 Deep Learning-Based AI Model for Categorizing Sea Corals through Image Classification
69 Detection and Segmentation of Ships/Vessels in Aerial Images using Deep Learning based AI Model
70 Categorization of Facial Appeal through an AI Model Employing Deep Learning Techniques
71 Innovative Application of Deep Learning AI for Microscopic Fungi Categorization through Image-Based Classification
72 Pest Detection and Categorization in Coconut Foliage through an AI Model Driven by Deep Learning Techniques
73 Utilizing Advanced AI Techniques for Enhanced Detection of Credit Card Fraud through Deep Learning Models
74 Contemporary Categorization of Architectural Styles through AI-Driven Deep Learning Model
75 Utilizing Advanced AI Technology for the Identification of Down Syndrome in Pediatric Population
76 Diverse Airborne Object Detection via an AI Model Built on Deep Learning Techniques
77 Advanced Application of AI-Driven Deep Learning Model for Kidney Stone Detection and Prognosis
78 Deep Learning-Powered AI Model for Classifying Yoga Pose Images
79 Utilizing Advanced AI-driven Deep Learning for Enhanced Food Categorization
80 Application of Deep Learning AI Model for Categorization of Mechanical Tools
81 Analyzing the Impact of Social Media on Mental Health using Deep Learning based AI Model
82 X-ray-based AI Framework for Enhanced Bone Fracture Identification: A Paradigm Shift in Diagnostic Excellence
83 Metamorphosing Human Persona through Facial Imagery using Advanced AI Modelling
84 Predicting Delivery Time for Logistics Organization using Deep Learning based AI Model
85 Envisioning a Paradigm Shift by using a Sophisticated AI Approach for Accurate Groundwater Level Prediction
86 Enhancing Agricultural Productivity through Advanced AI-based Crop Yield Forecasting
87 Elevating Air Quality Prediction: A Paradigm Shift in Time Series Forecasting using Advanced AI Models
88 Envisioning a New Era by using AI-Infused Marketing Strategies for Financial Excellence
89 Reimagining Blood Cell Cancer Diagnosis and Categorization through Advanced AI Modelling
90 Enabling Precise Categorization and Diagnosis of Structural Anomalies through Deep Learning-Powered AI Framework
91 Reimagining Track Classification for Bullet Cartridges through Advanced Deep Learning Techniques
92 Next-Generation Deep Learning based AI Model for Precise Indian Sign Language Gesture Recognition
93 Forecasting Vehicle Carbon Emissions with an Advanced Deep Learning AI Paradigm
94 Revitalizing Environmental Preservation: A Deep Learning Approach for Accurate Plastic Waste Detection and Classification
95 An AI-Enhanced Approach for Precise Identification and Categorization of Scoliosis and Spondylolisthesis from Vertebrae X-ray Imagery
96 Propelling QSAR Molecular Descriptor Prediction through State-of-the-Art Deep Learning AI Modeling
97 Transformative AI Approach for Enhancing QSAR Bioconcentration Classification
98 Elevating Traditional Attire Identification through State-of-the-Art Deep Learning AI Model
99 Enhancing Diabetic Foot Classification through Thermography Image Analysis using Deep Learning
100 Elevating Cataract Detection and Identification through Cutting-Edge Deep Learning AI Models
101 Advanced AI-driven Model for Accurate Detection and Categorization of Cow Lumpy Disease
102 Diatom Detection, Segmentation and Classification for Forensic Science using Deep Learning based AI Model
103 AI-based Model Development for Brain Stroke Detection using CT Scan Images
104 Brain Haemorrhage Detection using CT Scan / MRI Images
105 Deep Learning based Lung Cancer nodules detection using radiographs
106 Deep Learning based Kidney diseases prediction using radiographs
107 Grading the severity level of osteoarthritis using normal radiographs based on Deep Learning