At the Amity Centre for Artificial Intelligence, we are at the forefront of pioneering research that pushes the boundaries of AI and its real-world applications.

This section highlights some of our latest AI-driven innovations, showcasing groundbreaking research, novel methodologies, and impactful applications that are shaping the future.

Title of the work: BMFCNet: Blended Multilevel Features With Constraint Fusion Network for Depression Detection From EEG Signals

Research Area: Deep Learning and Brain-Computer Interfaces

 

Gautam Verma

B.Tech CSE Student (2020-24) Amity School of Engineering & Technology

Prof. M.K. Dutta

Amity Centre for Artificial Intelligence.

  • AI for Mental Health: This research presents an advanced deep learning model, BMFCNet, designed for the accurate identification of Major Depressive Disorder (MDD) using EEG signals.
  • Multi-Level Feature Extraction: The model integrates high-level (HL) and low-level (LL) EEG features through a Constraint Fusion Network, improving classification accuracy.
  • Innovative Processing: EEG signals are analyzed using a Residual-Inception module that captures essential discriminative characteristics for effective depression detection.
  • Practical Application: The model was tested on two benchmark EEG datasets, demonstrating superior accuracy compared to 16 state-of-the-art methodologies.
  • Real-World Impact: This approach enhances the potential for AI-driven mental health diagnostics, offering a scalable and cost-effective solution for early depression detection.
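As a side illustration of what "low-level" EEG descriptors can look like (the paper's own features are learned by its Residual-Inception module; the Hjorth parameters below are a classic hand-crafted alternative, not BMFCNet's code):

```python
import math

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def hjorth_parameters(signal):
    """Activity, mobility, complexity of a 1-D signal (classic EEG descriptors)."""
    d1 = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]  # first difference
    d2 = [d1[i + 1] - d1[i] for i in range(len(d1) - 1)]              # second difference
    activity = variance(signal)                    # signal power
    mobility = math.sqrt(variance(d1) / activity)  # dominant-frequency proxy
    complexity = math.sqrt(variance(d2) / variance(d1)) / mobility    # bandwidth proxy
    return activity, mobility, complexity
```

For a pure sinusoid the complexity is close to 1, and it grows as the signal becomes less regular, which is why these descriptors are often used as cheap baselines for EEG classification.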

  • M. Karnati, G. Sahu, G. Verma, A. Seal, M. Kishore Dutta and J. Jaworek-Korjakowska, "BMFCNet: Blended Multilevel Features With Constraint Fusion Network for Depression Detection From EEG Signals," in IEEE Transactions on Instrumentation and Measurement, vol. 74, pp. 1-14, 2025, Art no. 2511414, doi: 10.1109/TIM.2025.3545204. (Impact Factor: 5.6)

Title of the work: Optimized Inverse Kinematics Modelling and Joint Angle Prediction for Six-Degree-of-Freedom Anthropomorphic Robots with Explainable AI

Research Area: Machine Learning and Robotics

 

Rakesh Chandra Joshi

Amity Centre for Artificial Intelligence

Prof. M.K. Dutta

Amity Centre for Artificial Intelligence.

  • Machine Learning for Inverse Kinematics: The paper introduces a novel machine learning-based approach to solve the inverse kinematics problem for six-degree-of-freedom (six-DoF) anthropomorphic robots. This data-driven method simplifies complex mathematical formulations traditionally required.
  • Bayesian Optimization for Model Tuning: The study employs Bayesian optimization to fine-tune the hyperparameters of machine learning models, enhancing their accuracy and computational efficiency.
  • High Performance and Efficiency: The selected regression model achieves exceptional performance, with an average mean squared error (MSE) of 1.934 × 10⁻³ to 3.522 × 10⁻³ and prediction times of about 1.25 ms per sample. This makes the approach suitable for real-time applications.
  • Explainable AI with SHAP: The research incorporates Explainable AI techniques using SHAP (SHapley Additive exPlanations) analysis, providing insights into feature importance, improving model interpretability, and reinforcing trust in the solutions.
  • Advancement in Anthropomorphic Robotics: By balancing computational efficiency and accuracy, the proposed model advances state-of-the-art solutions for robotic kinematics, offering practical automation applications in fields such as manufacturing, space exploration, and medical robotics.
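The data-driven idea of learning the inverse-kinematics mapping from sampled forward kinematics can be sketched with a toy two-link planar arm and nearest-neighbour lookup (a hypothetical baseline, far simpler than the paper's Bayesian-optimized regression models; link lengths are arbitrary example values):

```python
import math

L1, L2 = 1.0, 0.8  # link lengths (example values)

def forward(t1, t2):
    """Forward kinematics of a planar 2-link arm: joint angles -> end-effector (x, y)."""
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def build_table(n=60):
    """Sample the joint space and record (end-effector, joint angles) pairs."""
    table = []
    for i in range(n):
        for j in range(n):
            t1 = -math.pi + 2 * math.pi * i / n
            t2 = -math.pi + 2 * math.pi * j / n
            table.append((forward(t1, t2), (t1, t2)))
    return table

def ik_nearest(table, target):
    """Inverse kinematics by nearest neighbour over the sampled data."""
    tx, ty = target
    best = min(table, key=lambda e: (e[0][0] - tx) ** 2 + (e[0][1] - ty) ** 2)
    return best[1]
```

A learned regressor replaces the lookup table with a smooth model, which is what makes millisecond-scale prediction on six-DoF arms feasible.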

  • Rakesh Chandra Joshi, J. K. Rai, Radim Burget & Malay Kishore Dutta, "Optimized Inverse Kinematics Modelling and Joint Angle Prediction for Six-Degree-of-Freedom Anthropomorphic Robots with Explainable AI," ISA Transactions, Elsevier Publications, DOI: 10.1016/j.isatra.2024.12.008, 2024, SCI Indexed Impact Factor – 6.4.

Title of the work: Breaking Barriers in Cancer Diagnosis: Super-Light Compact Convolution Transformer for Colon and Lung Cancer Detection

Research Area: Deep Learning, Computer Vision.

 

Dr. Ritesh Maurya

Amity Centre for Artificial Intelligence

  • Developed a novel, compact, and efficient convolution-transformer architecture named 'C3-Transformer' for the diagnosis of colon and lung cancers using histopathology images.
  • Introduced a convolutional tokenization and sequence pooling approach to reduce the number of parameters, addressing the challenge of limited data availability for these cancer types.
  • Combined the strengths of Convolutional Neural Networks (CNNs) for local feature extraction and transformers for global context understanding, enhancing classification performance.
  • Achieved impressive results on the ‘LC25000’ dataset with an average classification accuracy of 99.30%, precision of 0.9941, and recall of 0.9950 for classifying five different classes of colon and lung cancer.
  • Demonstrated the potential of the proposed C3-Transformer as an effective computer-aided detection system for early and accurate diagnosis of lung and colon cancers with minimal parameters (0.0316 million).
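Sequence pooling, as used in compact convolutional transformers, replaces the class token with an attention-weighted average of the output tokens, which is one way parameter count is kept low. A minimal pure-Python sketch (illustrative names, not the paper's implementation):

```python
import math

def sequence_pool(tokens, w):
    """Attention-weighted average of token embeddings (sequence pooling).

    tokens: list of d-dimensional vectors (transformer outputs).
    w: d-dimensional scoring vector (learned in a real model; fixed here).
    """
    scores = [sum(t[i] * w[i] for i in range(len(w))) for t in tokens]
    m = max(scores)                      # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    attn = [e / z for e in exps]
    d = len(tokens[0])
    return [sum(attn[k] * tokens[k][i] for k in range(len(tokens))) for i in range(d)]
```

With a zero scoring vector this reduces to plain mean pooling; a trained scoring vector lets the model emphasize the most informative image patches.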

  • Ritesh Maurya, Nageshwar Nath Pandey, Mohan Karnati, Geet Sahu, "Breaking Barriers in Cancer Diagnosis: Super-light Compact Convolution Transformer for Colon and Lung Cancer Detection," International Journal of Imaging Systems and Technology, DOI: https://doi.org/10.1002/ima.23154 [SCIE] (Impact Factor: 3.0).

Title of the work: Biomarker Profiling and Integrating Heterogeneous Models for Enhanced Multi-Grade Breast Cancer Prognostication

Research Area: Machine Learning/Computer-aided Diagnosis

 

Rakesh Chandra Joshi

Amity Centre for Artificial Intelligence

Prof. M.K. Dutta

Amity Centre for Artificial Intelligence.

  • This research marks the first exploration of an AI-based breast cancer prediction model integrating three critical biomarkers—β-hCG, PD-L1, and AFP.
  • This study utilizes multi-stage heterogeneous ensemble learning techniques, including hyperparameter tuning with particle swarm optimization, for multi-grade breast cancer diagnosis.

  • Rakesh Chandra Joshi, P. Srivastava, R. Mishra, R. Burget, and M. K. Dutta, “Biomarker Profiling and Integrating Heterogeneous Models for Enhanced Multi-Grade Breast Cancer Prognostication,” Computer Methods and Programs in Biomedicine, p. 108349, Jul. 2024, doi: 10.1016/j.cmpb.2024.108349. (Impact Factor:4.9)

Title of the work: DriSm_YNet: a breakthrough in real-time recognition of driver smoking behaviour using YOLO-NAS

Research Area: Computer Vision, Deep Learning

 

Ritesh Maurya

Amity Centre for Artificial Intelligence

  • Utilized the HMDB5 video dataset with image enhancement techniques like histogram equalization and gamma correction.
  • Employed Haar Cascade and YOLO-NAS for detecting face, mouth, and eye regions of interest.
  • Used TransGAN-augmentation to mitigate underfitting due to occluded frame removal.
  • Achieved 96.5% accuracy in classifying smoking vs. non-smoking drivers with InceptionV3 and LSTM, based on AUC-ROC and confusion-matrix analysis.

  • Nageshwar Nath Pandey, Avadh Pati & Ritesh Maurya, "DriSm_YNet: a breakthrough in real-time recognition of driver smoking behavior using YOLO-NAS," Neural Computing and Applications (2024), https://doi.org/10.1007/s00521-024-10162-w (Impact Factor: 4.9).

Title of the work: AI-SenseVision: A Low-cost Artificial Intelligence-based Robust and Real-time Assistance for Visually Impaired People

Research Area: Deep Learning / Assistive Device

 

Rakesh Chandra Joshi

Amity Centre for Artificial Intelligence.

Prof. M.K. Dutta

Amity Centre for Artificial Intelligence.

  • Developed a compact, handheld AI device for visually impaired individuals, alerting them to obstacles and identifying common objects in real-time using auditory cues.
  • Integration of deep learning object detection enables the device to identify objects and provide audio prompts, enhancing robustness in diverse conditions.
  • Emphasizing portability and affordability, the device features customized models on a lightweight framework with a user-friendly interface for easy operation by visually impaired users.

  • Rakesh Chandra Joshi, N. Singh, A. K. Sharma, R. Burget and M. K. Dutta, "AI-SenseVision: A Low-Cost Artificial-Intelligence-Based Robust and Real-Time Assistance for Visually Impaired People," in IEEE Transactions on Human-Machine Systems, 2024, doi: 10.1109/THMS.2024.3375655.

Title of the work: FCCS-Net: Breast cancer classification using Multi-Level fully Convolutional-Channel and spatial attention-based transfer learning approach

Research Area: Computer Vision / Deep Learning / Machine Learning

 

Dr. Ritesh Maurya

Amity Centre for Artificial Intelligence.

Prof. M.K. Dutta

Amity Centre for Artificial Intelligence.

  • FCCS-Net: Multi-level attention-based transfer learning approach for breast cancer classification.
  • Achieved high accuracy on diverse datasets such as BreakHis, IDC, and BACH.
  • Visual explanation of the attention and other layers using t-SNE plots.
  • Computationally efficient and lightweight with 0.023 GigaFLOPS and 2.22 million parameters.

  • Maurya, R., Pandey, N. N., Dutta, M. K., Karnati, M., FCCS-Net: Breast cancer classification using Multi-Level fully Convolutional-Channel and spatial attention-based transfer learning approach, Biomedical Signal Processing and Control, Elsevier Publishers, March 2024, DOI: https://doi.org/10.1016/j.bspc.2024.106258, SCIE indexed Impact Factor: 5.076.

Title of the work: VisionDeep-AI: Deep Learning-based Retinal Blood Vessels Segmentation and Multi-class Classification Framework for Eye Diagnosis

Research Area: Computer Vision / Deep Learning / Machine Learning

 

Rakesh Chandra Joshi

Amity Centre for Artificial Intelligence.

Prof. M.K. Dutta

Amity Centre for Artificial Intelligence.

  • A bi-directional feature pyramid network with a U-Net backbone is used to develop a segmentation model that accurately segments blood vessels from fundus images.
  • Using weighted feature fusion and bidirectional cross-scale linkages, multi-scale feature fusion merges features at several levels, enriching feature vectors and improving the efficiency of feature extraction.
Inference: The integration of segmented vessel images with raw fundus images in a proposed multi-modal deep feature fusion network provides a more comprehensive understanding of potential abnormalities.

  • Rakesh Chandra Joshi, A.K.Sharma, M.K.Dutta, “VisionDeep-AI: Deep Learning-based Retinal Blood Vessels Segmentation and Multi-class Classification Framework for Eye Diagnosis”- Biomedical Signal Processing and Control, Elsevier Publishers, Accepted for Publication, 2024, SCI indexed Impact Factor - 5.1.

Title of the work: A Lightweight Meta-Ensemble Approach for Plant Disease Detection Suitable for IoT-Based Environments

Research Area: Computer Vision / Deep Learning / Machine Learning

 

Dr. Ritesh Maurya

Amity Centre for Artificial Intelligence.

  • The research introduces a novel methodology for plant disease diagnosis by combining MLP-Mixer and LSTM models with deep features into a two-tier meta-ensemble. This innovative approach enhances classification performance while remaining lightweight, addressing the need for efficient solutions in resource-constrained environments.
  • Recognizing the limitations of existing algorithms in resource-constrained settings, the proposed meta-ensemble is specifically designed for deployment on IoT devices. Its lightweight nature ensures efficient utilization of memory and computation power, making it ideal for automated plant disease diagnosis in agricultural settings with limited resources.
  • By integrating MLP-Mixer and LSTM models, the research harnesses their complementary capabilities, leading to improved classification performance. This integration enhances the model's ability to accurately diagnose plant diseases while maintaining efficiency, making it a valuable tool for precision agriculture and crop management.

  • R. Maurya, S. Mahapatra and L. Rajput, "A Lightweight Meta-Ensemble Approach for Plant Disease Detection Suitable for IoT-Based Environments," in IEEE Access, vol. 12, pp. 28096-28108, 2024, doi: 10.1109/ACCESS.2024.3367443.

Title of the work: DeepRespNet: A Deep Neural Network for Classification of Respiratory Sounds

Research Area: Time series analysis / Deep Learning / Machine Learning

 

Dr. Rinki Gupta

Amity Centre for Artificial Intelligence.

Prof. M. K. Dutta

Amity Centre for Artificial Intelligence.

  • Novel 1D DeepRespNet and 2D DeepRespNet models are proposed to process time-series and spectrogram representations of pulmonary signals, for identification of lung diseases.
  • Recorded a dataset consisting of six categories of lung sounds namely, normal, aortic, wheezing, bronchial, crepitation and rhonchi.
  • Data was recorded using an electronic stethoscope and then annotated by skilled doctors.
Inference: The proposed 2D DeepRespNet model achieved an accuracy of 95.2%, which is higher than that achieved by state-of-the-art (SOTA) approaches.

  • Rinki Gupta, Rashmi Singh, Carlos M. Travieso-González, Radim Burget, Malay Kishore Dutta, “DeepRespNet: A deep neural network for classification of respiratory sounds,” Elsevier's Biomedical Signal Processing and Control, Vol. 93, 2024, 106191, ISSN 1746-8094, DOI:10.1016/j.bspc.2024.106191.

Title of the work: Few-shot transfer learning for wearable IMU-based human activity recognition

Research Area: Time series analysis / Deep Learning /Machine Learning

 

Dr. Rinki Gupta

Amity Centre for Artificial Intelligence.

  • The deep learning model is fine-tuned on a few shots of self-recorded IMU data using the Reptile algorithm.
  • The performance of the deep learning model fine-tuned using the Reptile algorithm for HAR is analysed with and without transfer learning.
  • The proposed FSTL approach yields an average classification accuracy of 74.86±0.71% and 79.20±1.05% for 3-way, 5-shot classification of new activities performed by a single user and same set of activities performed by a new user, respectively.
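The Reptile meta-update itself is simple: after adapting a copy of the model to one task, the meta-parameters move a fraction of the way toward the adapted parameters. A one-parameter sketch with toy regression tasks (not the paper's IMU model; names and values are illustrative):

```python
def sgd_task(theta, data, lr=0.02, steps=20):
    """Inner loop: fit y = theta * x on one task's few-shot data by SGD."""
    for _ in range(steps):
        for x, y in data:
            theta -= lr * 2 * (theta * x - y) * x  # gradient of squared error
    return theta

def reptile(theta, tasks, meta_lr=0.5, rounds=50):
    """Reptile meta-update: move theta a fraction toward each task-adapted solution."""
    for _ in range(rounds):
        for data in tasks:
            adapted = sgd_task(theta, data)
            theta += meta_lr * (adapted - theta)
    return theta
```

The meta-initialization ends up between the task optima, so adapting to a new task from it takes fewer gradient steps than training from scratch, which is the point of few-shot transfer.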

  • H. S. Ganesha, Rinki Gupta, Sindhu Hak Gupta, Sreeraman Rajan, "Few-shot transfer learning for wearable IMU-based human activity recognition", Neural Computing and Applications, Mar 2024, https://doi.org/10.1007/s00521-024-09645-7.

Title of the work: A TinyML solution for an IoT-based Communication Device for Hearing Impaired

Research Area: Deep learning/ Human machine interaction

 

Dr. Sneha Sharma

Amity Centre for Artificial Intelligence.

Dr. Rinki Gupta

Amity Centre for Artificial Intelligence.

  • A tiny machine learning (TinyML) solution is proposed for sign language recognition using a low-cost, wearable, internet-of-things (IoT) device.
  • A lightweight deep neural network is deployed on the edge device to interpret isolated signs from the Indian sign language using the time-series data.
  • The recognized sign is transmitted to a cloud platform in real-time.
  • A mobile application, SignTalk, is also developed; text-to-speech conversion is provided on SignTalk to vocalize the predicted sign for better communication.

  • Sharma, S., Gupta, R., & Kumar, A. (2024). A TinyML solution for an IoT-based Communication Device for Hearing Impaired. Expert Systems with Applications, 123147. https://doi.org/10.1016/j.eswa.2024.123147 SCI Indexed Impact factor 8.5, Accepted 2-Jan 2024

Title of the work: Improved content-based brain tumor retrieval for magnetic resonance images using weight initialization framework with densely connected deep neural network

Research Area: Computer Vision / Deep Learning / Machine Learning

 

Dr. Ritesh Maurya

Amity Centre for Artificial Intelligence.

  • Retrieving similar brain tumor MRI slices is hindered by the absence of class-specific features and the complexity introduced by multiple views, presenting a diagnostic challenge for CBMIR systems.
  • Innovative Weight Initialization Framework (WIF): The proposed WIF utilizes transfer learning for DenseNet models, strategically freezing initial layers. This ensures the preservation of rich low-level features, especially vital for challenging classes like Meningioma.
  • DenseNet Integration for Multi-Scale Learning: DenseNet models are integrated to enhance feature extraction, leveraging multi-scale learning. This contributes significantly to the overall improvement of the CBMIR system.
  • Performance Gains and Generalizability: The introduced approach outperforms state-of-the-art methods, exhibiting notable improvements. Joint application of DenseNet and WIF demonstrates increased performance, particularly benefiting challenging classes like Meningioma, showcasing the framework's generalizability.

  • Singh, V.P., Verma, A., Singh, D.K., Maurya, R. Improved content-based brain tumor retrieval for magnetic resonance images using weight initialization framework with densely connected deep neural network. Neural Comput & Applic (2023). https://doi.org/10.1007/s00521-023-09149-w . [SCIE] (IF. 6.00)

Title of the work: EMViT-Net: A novel transformer-based network for the classification of environmental microorganisms using microscopic images

Research Area: Computer Vision / Deep Learning / Machine Learning

 

Prof. M. K. Dutta

Amity Centre for Artificial Intelligence

  • The study introduces EMViT-Net, a vision transformer-based deep neural network that combines transformer and CNN architectures.
  • It extracts multi-scale features from microscopic images and introduces a separable convolutional parameter-sharing attention block for robustness and efficiency.
  • Extensive experiments show EMViT-Net outperforms existing methods for environmental microbe classification, with an accuracy of 71.17%.
Inference: The EMViT-Net enhances the ability to capture local and global features, making it more robust in classifying environmental microbes.

  • Karnika Dwivedi, Malay Kishore Dutta & Jay Prakash Pandey, "EMViT-Net: A novel transformer-based network utilizing CNN and multilayer perceptron for the classification of environmental microorganisms using microscopic images," Ecological Informatics, Elsevier Publishers, DOI: https://doi.org/10.1016/j.ecoinf.2023.102451, December 2023, SCI indexed Impact Factor: 5.1.

Title of the work: Harnessing the Power of AI: Human-Computer Interaction Based System for Facial Expression Recognition in-the-wild

Research Area: Computer Vision / Deep Learning

 

Karnati Mohan

AI Scientist Amity Centre for Artificial Intelligence

  • A deep learning-based AI system has been developed for human-computer interaction.
  • The most relevant features are extracted from five local regions with the help of residual and attention modules.
  • Ensemble learning is applied using Choquet fuzzy integral.
  • Classify seven emotions using five real-world datasets.
Inference: This research is based on affective computing which is a branch of computer science that aims to create instruments/ devices and systems that can detect, analyze, process, and imitate human emotions.
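The discrete Choquet integral used for the ensemble step aggregates classifier scores with respect to a fuzzy measure over classifier subsets; a minimal sketch (the measures here are placeholders, not the paper's learned one):

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of classifier scores w.r.t. a fuzzy measure.

    scores: one confidence value per base classifier.
    mu: maps a frozenset of classifier indices to a weight in [0, 1],
        with mu(empty set) = 0 and mu(all classifiers) = 1.
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i])  # ascending scores
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])  # classifiers scoring at least scores[i]
        total += (scores[i] - prev) * mu(coalition)
        prev = scores[i]
    return total
```

With an additive measure this reduces to a weighted mean; non-additive measures let the ensemble reward or penalize specific coalitions of classifiers, which plain averaging cannot.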

  • Karnati Mohan et al., "Facial Expression Recognition in-the-wild using Blended Feature Attention Network," IEEE Transactions on Instrumentation & Measurement, DOI: 10.1109/TIM.2023.3314815, 2023, Impact Factor – 5.6.

Title of the work: Nocturnal Sleep Sounds Classification with Artificial Neural Network for Sleep Monitoring

Research Area: Time series analysis / Deep Learning

 

Dr. Rinki Gupta

Amity Centre for Artificial Intelligence

Prof. M. K. Dutta

Amity Centre for Artificial Intelligence

  • Develop a personal sleep monitoring system.
  • Classify seven categories of nocturnal human sounds from time series data.
  • Most relevant features extracted from spectrograms of sleep sounds and given to fully-connected Artificial Neural Network (ANN) to classify the sleep sounds.
Inference: The proposed ANN classifies the considered seven categories of sleep sounds, including coughing, laughing, screaming, sneezing, snoring, sniffling, and farting, with an average accuracy of 97.4%.

  • Pandey, C., Baghel, N., Gupta, R., & Dutta, M. K., "Nocturnal sleep sounds classification with artificial neural network for sleep monitoring," Multimedia Tools and Applications, pp. 1-17, 2023, Impact Factor – 3.6.

Title of the work: MacD-Net: An automatic guided-ensemble approach for macular pathology detection using optical coherence tomography images

Research Area: Artificial Intelligence/Deep Learning / Machine Learning

 

Dr. Ritesh Maurya

Amity Centre for Artificial Intelligence

Prof. M. K. Dutta

Amity Centre for Artificial Intelligence

  • Different types of retinal disorders were detected using Artificial Intelligence.
  • A multi-modal approach has been applied for better performance.
  • The method was tested on a vast dataset of optical coherence tomography images.
  • The proposed method would be helpful in automatic detection of retinal disorders.

  • Maurya, R., Pandey, N. N., Joshi, R. C., & Dutta, M. K., MacD-Net: An automatic guided-ensemble approach for macular pathology detection using optical coherence tomography images, International Journal of Imaging Systems and Technology, https://doi.org/10.1002/ima.22954, Impact Factor – 3.3.

Title of the work: Automatic Diagnosis of Neurological Disorders from Brain Waves Using Artificial Intelligence

Research Area: Brain-computer Interface / Deep Learning.

 

Karnati Mohan

Amity Centre for Artificial Intelligence.

Geet Sahu

Amity Centre for Artificial Intelligence.

  • Brain-wave data, which records the variations in the neural dynamics of human memory, is utilized to analyze neurological disorders.
  • An AI algorithm is employed to identify patterns and correlations in the data that may not be immediately apparent to human experts.
  • The prominent discriminatory features (continuous wavelet transform, CWT) were fed to Convolutional Neural Networks for identification of neurological disorders.

  • Geet Sahu, Karnati Mohan et al., "SCZ-SCAN: An automated Schizophrenia detection system from electroencephalogram signals," Biomedical Signal Processing and Control, DOI: https://doi.org/10.1016/j.bspc.2023.105206, 2023, Impact Factor – 5.1.

Title of the work: Visibility Restoration of Hazy Images using Computer Vision and Artificial Intelligence

Research Area: Computer Vision / Deep Learning

 

Geet Sahu

Amity Centre for Artificial Intelligence.

  • Image dehazing is crucial in computer vision-based applications such as surveillance, object detection, etc.
  • A unique deep neural network-based attention model is designed to restore clear images from its counterpart hazy image.
Inference: The proposed DL model restores more details and reduces artifacts in dehazed images.

  • Geet Sahu et al., "Single Image Dehazing via Fusion of Multi-level Attention Network for Vision-Based Measurement Applications," IEEE Transactions on Instrumentation and Measurement, DOI: 10.1109/TIM.2023.3271753, 2023, Impact Factor – 5.6.

Title of the work: Study on Mobile Phone EMF Radiation Effects on Brain using Artificial Intelligence

Research Area: Computer Vision / Deep Learning / Machine Learning

 

M.K. Dutta

Amity Centre for Artificial Intelligence

Tanu Jindal

Amity Institute of Environmental Toxicology, Safety and Management

  • A novel pilot study using computer vision and artificial intelligence to identify changes in brain morphology under EMF exposure, considering Drosophila melanogaster as a specimen.
  • The prominent discriminatory features were fed to deep neural networks and also to different machine learning classifiers: SVM, Naïve Bayes, Artificial Neural Network and Random Forest.
  • Experimental results indicate good classification accuracy, up to 94.66%, using discriminatory features selected by a feature selection method, indicating changes in the brains of the exposed specimens.
Inference: Marked changes were observed in the brain images of the exposed Drosophila.

  • A. Singh, Tanu Jindal, M. K. Dutta et al., "A Novel Pilot Study of Automatic Identification of EMF Radiation Effect on Brain using Computer Vision and Machine Learning," Biomedical Signal Processing and Control, DOI: 10.1016/j.bspc.2019.101821, 2020, Elsevier Publishers, Impact Factor – 5.076.
  • R. Maurya, Tanu Jindal, M. K. Dutta et al., "Machine Learning based Identification of Radiofrequency Electromagnetic Radiation (RF-EMR) effect on Brain Morphology: A Preliminary Study," Medical and Biological Engineering & Computing, Springer Nature Publishers, DOI: 10.1007/s11517-020-02198-6, 58(8), pp. 1751-1765, 2020, Impact Factor – 3.097.
  • Ritesh Maurya, Neha Singh, Tanu Jindal, Vinay K Pathak, Malay Kishore Dutta, "Computer-Aided Automatic Transfer Learning based Approach for Analysing the Effect of High-Frequency EMF Radiation on Brain," Multimedia Tools and Applications, Springer Nature Publishers, DOI: 10.1007/s11042-020-10204-0, 2020, SCI indexed Impact Factor – 2.577.

Title of the work: Automatic Diagnosis of Schizophrenia from Electroencephalography Signals Using Artificial Intelligence

Research Area: Time series analysis / Deep Learning

 

Karnati Mohan

AI Scientist Amity Centre for Artificial Intelligence

Geet Sahu

AI Scientist Amity Centre for Artificial Intelligence

  • Develop a neurological disease detection system.
  • Classify Schizophrenia patients from Healthy controls using EEG signals.
  • The most relevant features are extracted from the images generated from the schizophrenia (SZ) EEG signals and given to a CNN network.
  • The network robustly classified schizophrenia patients from healthy controls.
Inference: Brain-computer Interface and Deep Learning based method for diagnosis of Schizophrenia from electroencephalography signals.

  • Geet Sahu, K. Mohan et al., "A Pyramidal Spatial-based Feature Attention Network for Schizophrenia Detection using Electroencephalography Signals," IEEE Transactions on Cognitive and Developmental Systems, DOI: 10.1109/TCDS.2023.3314639, 2023, Impact Factor – 5.

Title of the work: Transfer Learning techniques for practical sign language recognition (SLR) system

Research Area: Deep Learning / Time Series Analysis/ Transfer Learning

 

Dr. Rinki Gupta

Amity School of Engineering and Technology Amity University Uttar Pradesh

Sneha Sharma, JRF

Amity School of Engineering and Technology Amity University Uttar Pradesh

  • A novel ensemble of machine learning models, TrBaggBoost for handling subject variability in SLR systems
  • A novel convolutional neural network, long short-term memory and connectionist temporal classification based continuous time-series data analysis of end-to-end classification
  • Novel Multi-label classification approach for lexicon-based SLR
  • Experimental results are shown with a self-recorded sensor database of 100 signs from the Indian sign language.

  • Sneha Sharma, Rinki Gupta, Arun Kumar, "Trbaggboost: An ensemble-based transfer learning method applied to Indian Sign Language recognition", Journal of Ambient Intelligence and Humanized Computing, Springer, 13, 3527–3537 (2022)., pp. 1-11, 27 May 2020, DOI: 10.1007/s12652-020-01979-z, SCI Indexed, Impact Factor 4.764
  • Sneha Sharma, Rinki Gupta, Arun Kumar, "Continuous Sign Language Recognition using Isolated Signs data and Deep Transfer Learning", Journal of Ambient Intelligence and Humanized Computing, Springer, pp. 1-12, Online First, 2021, https://doi.org/10.1007/s12652-021-03418-z, SCI Indexed, Impact Factor 4.764
  • Rinki Gupta, Arun Kumar, "Indian Sign Language Recognition using Wearable Sensors and Multi-label Classification", Computers & Electrical Engineering, Elsevier, vol. 90, p. 106898., 2021, https://doi.org/10.1016/j.compeleceng.2020.106898, SCI Indexed, Impact Factor 4.586

Title of the work: Segmentation and Quantification of Cardiac MRI for diagnosis of CVDs.

Research Area: Computer Vision / Deep Learning / Machine Learning

 

Anupma Bhan

Amity School of Engineering and Technology

Dr. Parthasarathi Mangipudi

Amity School of Engineering and Technology

  • An integrated approach of deep learning and an adaptive deformable flow model with no user interaction for LV initialization. CNN and U-Net are used for LV detection and shape inference, which makes the approach fully automatic and reduces time consumption.
  • Shape inference is achieved using an Auto-Encoder and Deep Neural Networks for Left Ventricle and Right Ventricle wall segmentation.
  • Experimental results have shown that the automatic methods are validated against ground truth using the percentage of good contours, Dice Metric, average perpendicular distance (APD), and conformity coefficient.
Inference: Accuracy of 98.23% is achieved with respect to ground truth, and promising clinical inference is calculated in terms of Pearson's coefficient for EDV, ESV and EF.
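The Dice metric cited for validation measures overlap between a predicted segmentation and the ground truth; a minimal sketch over pixel coordinate sets (illustrative, not the paper's validation code):

```python
def dice_coefficient(pred, truth):
    """Dice similarity 2|A∩B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # two empty masks agree trivially
    return 2 * len(pred & truth) / (len(pred) + len(truth))
```

In practice the sets hold the pixel coordinates inside the automatic and manual ventricle contours, so a Dice value near 1 means the automatic contour closely matches the expert one.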

  • Anupama Bhan, Parthasarathi Mangipudi "Integrated approach for fully automatic left ventricle segmentation using adaptive iteration based parametric model with deep learning in short axis cardiac MRI" Journal of Ambient Intelligence and Humanized Computing. Springer. 2021. SCI Indexed. https://doi.org/10.1007/s12652-022-04389-5 . Impact Factor 3.718.
  • Anupama Bhan, Parthasarathi Mangipudi, Ayush Goyal, "Deep Learning Approach for Automatic Segmentation and Functional Assessment of LV in Cardiac MRI," Electronics, Special Issue: Medical Image Processing Using AI, MDPI, SCI-Indexed, https://doi.org/10.3390/electronics11213594, Impact Factor 2.64.

Title of the work: Fish Freshness and Quality Assessment like exposure to Pesticides / Heavy metals using Artificial Intelligence

Research Area: Computer Vision / Deep Learning / Machine Learning

 

M.K. Dutta

Amity Centre for Artificial Intelligence

Ashutosh Srivastava

Amity Institute of BioTechnology

Arti Srivastava

Amity Institute of BioTechnology

  • Method: Focal tissues such as the eyes and gills are segmented from the image, and AI-based computer vision methods are used to detect the freshness coefficient and the presence of toxic substances.
  • Features are extracted in the spatial domain and via the Discrete Wavelet Transform, and the discriminatory features are fed to AI models.
  • Framework is developed to label the freshness ranges of fish and achieved an accuracy of 94%.
  • Accuracy of 96.87% is achieved for identification and detection of pesticide (cypermethrin) contamination and for differentiating between control and heavy-metal-exposed fishes.
Inference: Different supervised classification techniques are used and good accuracy is achieved in each proposed method.

  • M. Arora, Parthasarathi, M. K. Dutta, "A low-cost imaging framework for freshness evaluation from multifocal fish tissues," Journal of Food Engineering, Elsevier Publishers, DOI: 10.1016/j.jfoodeng.2021.110777, 2022, Impact Factor – 6.203.
  • Anamika, Rakesh Joshi, M. K. Dutta, "Computer vision technique for freshness estimation from segmented eye of fish image," Ecological Informatics, Elsevier Publishers, DOI: 10.1016/j.ecoinf.2022.101602, 2022, SCI indexed Impact Factor – 4.498.
  • Ashish Issac, Ashutosh Srivastava & M. K. Dutta, "An automated computer vision based preliminary study for the identification of a heavy metal (Hg) exposed fish - Channa punctatus," Computers in Biology & Medicine, DOI: 10.1016/j.compbiomed.2019.103326, 2019, Elsevier Publishers, Impact Factor – 6.698.
  • Anushikha Singh, Ashutosh Srivastava, Rakesh Chandra Joshi & M. K. Dutta, "A Novel Pilot Study on Imaging based Identification of Fish Exposed to Heavy Metal (Hg) Contamination," Journal of Food Processing and Preservation, Wiley Publishers, DOI: 10.1111/jfpp.15571, 2021, SCI Indexed Impact Factor – 2.609.

Title of the work: Automatic Disease Diagnosis Using Artificial Intelligence based methods in Plants

Research Area: Deep Learning / Computer Vision / Machine Learning

 

M.K. Dutta

Amity Centre for Artificial Intelligence

Nandlal Choudhary

Amity Institute of Virology & Immunology

Ashish Srivastava

Amity Institute of Virology & Immunology

  • Computer vision and artificial intelligence methods are developed for the automatic diagnosis of viral infections in crops such as Vigna mungo and in medicinal plants, using deep-learning-based convolutional neural networks (CNNs).
  • The proposed approach is non-destructive, automatic, fast, and has a low computational cost.
  • Machine learning models and a CNN are trained on images of healthy leaves and leaves with different types of infection.
Inference: Integrating drones with the proposed technology can make the plant disease detection system more robust, allowing large crop areas to be monitored in less time.
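The convolution at the heart of such a CNN can be sketched in a few lines. This is a minimal NumPy illustration of how one feature map is produced from a leaf image, not the published VirLeafNet architecture; real networks stack many learned kernels and layers.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation, as in most deep
    learning frameworks): slide the kernel over the image to build one
    feature map."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Non-linearity applied after each convolution."""
    return np.maximum(x, 0.0)

# Toy 'leaf patch': a bright region on a dark background. A hand-written
# vertical-edge kernel responds strongly at the boundary; a trained CNN
# learns many such kernels from labelled leaf images.
patch = np.zeros((6, 6))
patch[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
feature_map = relu(conv2d_valid(patch, edge_kernel))
```

Stacking such convolution + ReLU stages, followed by pooling and a classification head, gives the kind of network trained on infected and healthy leaf images.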

  • Rakesh Chandra Joshi, Manoj Kaushik, M. K. Dutta, Ashish Srivastava, Nandlal Choudhary, "VirLeafNet: Automatic Analysis and Viral Disease Diagnosis Using Deep-Learning in Vigna Mungo Plant," Ecological Informatics, Elsevier, DOI: 10.1016/j.ecoinf.2020.101197, 2020, SCI-indexed, Impact Factor: 4.498.
  • Chandrasen Pandey, Neeraj Baghel, M. K. Dutta, Ashish Srivastava, Nandlal Choudhary, "Machine Learning Approach for Automatic Diagnosis of Chlorosis in Vigna Mungo Leaves," Multimedia Tools and Applications, Springer Nature, DOI: 10.1007/s11042-020-10309-6, 2020, SCI-indexed, Impact Factor: 2.577.
  • Vaibhav Tiwari, Rakesh Chandra Joshi, Malay Kishore Dutta, "Deep Neural Network for Multi-class Classification of Medicinal Plant Leaves," Expert Systems, Wiley, DOI: 10.1111/exsy.13041, 2022, Impact Factor: 2.812.

Title of the work: AI and Computer Vision based Technique for Identification of Acrylamide (a Cancer-causing Toxic Substance) in Potato Chips

Research Area: Deep Learning / Computer Vision / Machine Learning

 

M.K. Dutta

Amity Centre for Artificial Intelligence

Shabari Ghoshal

Amity Institute of BioTechnology

  • When carbohydrate-rich fast foods such as potato chips and French fries are fried, baked, or roasted at high temperatures (above 120 ºC), a harmful carcinogenic chemical known as acrylamide is formed. AI-based machine learning and deep learning methods were developed to detect acrylamide from images of fried carbohydrate-rich food items such as potato chips.
  • The developed models achieve more than 95% accuracy in detecting acrylamide from images.
Inference: The process works in real time, making it suitable for practical applications.

  • Ritesh Maurya, M. K. Dutta et al., "Computer-Aided Automatic Detection of Acrylamide in Deep-Fried Carbohydrate-Rich Food Items using Deep Learning," Machine Vision and Applications, Springer Nature, 32, 79 (2021), DOI: 10.1007/s00138-021-01204-7, SCI-indexed, Impact Factor: 2.012.
  • Monika Arora, M. Parthasarathi, Malay Kishore Dutta, "Deep learning neural networks for acrylamide identification in potato chips using transfer learning approach," Journal of Ambient Intelligence and Humanized Computing, Springer Nature, DOI: 10.1007/s12652-020-02867-2, 2021, SCI-indexed, Impact Factor: 3.662.
  • M. K. Dutta, Shabari Ghoshal et al., "An Imaging Technique for Acrylamide Identification in Potato Chips in Wavelet Domain," LWT - Food Science and Technology, Elsevier, Vol. 65, pp. 987-998, DOI: 10.1016/j.lwt.2015.09.035, SCI-indexed, Impact Factor: 6.056.

Title of the work: An Automatic Computer Vision-based Oil Content Estimation Technique using Microscopic Algae Images

Research Area: Computer Vision/ Machine Learning

 

M.K. Dutta

Amity Centre for Artificial Intelligence

Nutan Kaushik

Amity Food and Agriculture Foundation

  • A novel computer-vision based method for the automatic estimation of oil content in the microalgae from the microscopic images has been developed.
  • Oil-content particles are extracted after a rigorous statistical examination of treated cells to determine the lipid content in all three groups of microalgae.
  • The cells and oil content particles are segmented from the microscopic images of microalgae.
  • The proposed algorithm uses resolution invariant features for the estimation of the oil content.
Inference: The developed framework is computationally efficient and has low time complexity, making it suitable for use in real-world applications.
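The idea of a resolution-invariant estimate can be sketched as below: because the estimate is a ratio of segmented pixel counts, it does not change with image resolution. The fixed grayscale thresholds here are hypothetical stand-ins for the statistical segmentation used in the published method.

```python
import numpy as np

def oil_content_ratio(gray, cell_thresh=0.2, oil_thresh=0.8):
    """Illustrative oil estimate: ratio of oil-droplet pixels to total
    cell pixels in a grayscale microscopic image (thresholds are
    hypothetical placeholders)."""
    cell_mask = gray > cell_thresh           # pixels belonging to cells
    oil_mask = gray > oil_thresh             # brighter droplets within cells
    n_cell = int(cell_mask.sum())
    return float(oil_mask.sum()) / n_cell if n_cell else 0.0

# Synthetic frame: a 10x10 cell region containing a 5x5 bright droplet.
frame = np.zeros((20, 20))
frame[5:15, 5:15] = 0.5
frame[8:13, 8:13] = 1.0
ratio = oil_content_ratio(frame)             # 25 droplet px / 100 cell px
```

Upsampling the same scene to twice the resolution leaves the ratio unchanged, which is the sense in which such features are resolution invariant.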

  • Rakesh Chandra Joshi, Saumya Dhup, Nutan Kaushik, Malay Kishore Dutta, "An efficient oil content estimation technique using microscopic microalgae images," Ecological Informatics, Elsevier, DOI: 10.1016/j.ecoinf.2021.101468, 2021, SCI-indexed, Impact Factor: 4.498.

Title of the work: Computer Vision and Artificial Intelligence based Detection of Retinal Diseases

Research Area: Deep Learning / Artificial Intelligence / Computer Vision

 

M. K. Dutta

Amity Centre for Artificial Intelligence

M.Parthasarathi

Amity School of Engineering and Technology

  • If retinal diseases are detected at an early stage, appropriate treatment can be prescribed, which goes a long way toward reducing levels of visual impairment.
  • Computer-aided detection of diseases can bridge the gap and assist medical experts at primary care centres in making fast and accurate decisions.
  • Deep learning, machine learning, and computer vision based frameworks are proposed to classify fundus images and automatically diagnose different eye diseases. The proposed AI methods extract features and are fine-tuned with multiple hyper-parameters for the detection of multiple retinal diseases.
Inference: A diagnosis report with a detailed description of the clinical findings is generated instantly, and the results can be correlated with an ophthalmologist's assessment via tele-ophthalmology.
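The final multi-class step of such a network can be sketched as a softmax head over extracted features. The class list, weights, and feature dimensions below are hypothetical placeholders, not the actual EyeDeep-Net configuration.

```python
import numpy as np

CLASSES = ["normal", "glaucoma", "diabetic_retinopathy", "cataract"]  # illustrative labels

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def diagnose(features, W, b):
    """Multi-class head: a feature vector extracted from a fundus image
    is mapped to per-disease probabilities. W and b stand in for
    parameters learned during training."""
    probs = softmax(features @ W + b)
    return CLASSES[int(np.argmax(probs))], probs

# Toy usage with fixed 'learned' parameters.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, len(CLASSES)))
b = np.zeros(len(CLASSES))
label, probs = diagnose(rng.normal(size=8), W, b)
```

In a deployed system, the predicted label and probabilities would populate the instantly generated diagnosis report described above.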

  • Neha Sengar, Rakesh Chandra Joshi, Malay Kishore Dutta, Radim Burget, "EyeDeep-Net: a multi-class diagnosis of retinal diseases using deep neural network," Neural Computing and Applications, Springer, DOI: 10.1007/s00521-023-08249-x, January 2023, Impact Factor: 5.102.
  • M. Soorya, Ashish Issac, M. K. Dutta, "Automated Framework for Screening of Glaucoma through Cloud Computing," Journal of Medical Systems, Springer, DOI: 10.1007/s10916-019-1260-2, 2019, SCI-indexed, Impact Factor: 4.920.
  • Anushikha Singh, Malay Kishore Dutta, M. ParthaSarathi, Vaclav Uher, Radim Burget, "Image Processing Based Automatic Diagnosis of Glaucoma using Wavelet Features of Segmented Optic Disc from Fundus Image," Computer Methods and Programs in Biomedicine, Elsevier, Vol. 124, pp. 108-120, February 2016, DOI: 10.1016/j.cmpb.2015.10.010, SCI-indexed, Impact Factor: 7.027.
  • M. ParthaSarathi, Malay Kishore Dutta, Anushikha Singh, Carlos Travieso, "Blood Vessel Inpainting based Technique for Efficient Localization and Segmentation of Optic Disc in Digital Fundus Images," Biomedical Signal Processing and Control, Elsevier, Vol. 25, pp. 108-117, March 2016, DOI: 10.1016/j.bspc.2015.10.012, SCI-indexed, Impact Factor: 5.076.

Title of the work: Continuous Sign Language Recognition from Wearable IMUs using Deep Capsule Networks and Game Theory

Research Area: Deep Learning / Capsule Networks/ AI-based device

 

Dr. Rinki Gupta

Amity School of Engineering and Technology, Amity University Uttar Pradesh

Karush Suri

B.Tech ECE Student, Amity School of Engineering and Technology, Amity University Uttar Pradesh

  • A novel 1-dimensional deep capsule network (CapsNet) architecture is proposed for continuous Indian Sign Language recognition.
  • Recognition of sentences is performed using signals recorded from a custom-designed wearable IMU device.
  • The proposed CapsNet yields improved accuracy values of 94% for 3 routings and 92.50% for 5 routings, compared with a convolutional neural network (CNN) that yields an accuracy of 87.99%.
  • Finally, a novel non-cooperative pick-and-predict competition is designed between the CapsNet and the CNN. The higher value of the Nash equilibrium for the CapsNet as compared to the CNN indicates the suitability of the proposed approach.
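The "routings" compared above refer to the iterations of routing-by-agreement between capsule layers (Sabour et al., 2017). The NumPy sketch below shows that routine in its generic form; the published 1-D CapsNet's layer sizes and training details differ, and the dimensions here are illustrative.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule non-linearity: preserves direction, squashes the norm
    into [0, 1) so vector length can encode probability."""
    n2 = (s ** 2).sum(axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, n_iters=3):
    """Routing-by-agreement between capsule layers.
    u_hat: (n_in, n_out, dim) prediction vectors from lower capsules;
    n_iters is the 'routings' count compared in the paper (3 vs 5)."""
    b = np.zeros(u_hat.shape[:2])                       # routing logits
    for _ in range(n_iters):
        e = np.exp(b - b.max(axis=1, keepdims=True))
        c = e / e.sum(axis=1, keepdims=True)            # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)          # weighted vote sum
        v = squash(s)                                   # output capsules
        b = b + (u_hat * v[None]).sum(axis=-1)          # agreement update
    return v

# 32 lower-level capsules voting for 10 class capsules of dimension 16.
rng = np.random.default_rng(0)
v = dynamic_routing(rng.normal(size=(32, 10, 16)), n_iters=3)
```

Each iteration sharpens the coupling coefficients toward the output capsules that agree with the lower capsules' predictions, which is why the routing count is a tunable hyper-parameter.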

  • Karush Suri, Rinki Gupta, "Continuous Sign Language Recognition from Wearable IMUs using Deep Capsule Networks and Game Theory", Computers and Electrical Engineering, Elsevier, vol. 78, pp. 493-503, 2019, https://doi.org/10.1016/j.compeleceng.2019.08.006, SCI Indexed, Impact Factor 4.586.
  • Karush Suri, Rinki Gupta, "Convolutional Neural Network Array for Sign Language Recognition using Wearable IMUs", IEEE 6th International Conference on Signal Processing & Integrated Networks (SPIN2019), 7-8 March 2019, pp. 1-5, DOI: 10.1109/SPIN.2019.8711745.
  • Karush Suri, Rinki Gupta, "Classification of Hand Gestures from Wearable IMUs using Deep Neural Network", IEEE 2nd International Conference on Inventive Communication and Computational Technologies (ICICCT 2018), India, 20-21 Apr 2018, pp. 45-50, DOI: 10.1109/ICICCT.2018.8473301.