Research

Research Areas

The field of artificial intelligence is rapidly evolving, with researchers exploring a diverse range of cutting-edge topics and applications. Here are some of the key research areas at ACAI:

Machine Learning

Machine learning is a dynamic field empowering computers to learn from data and improve their performance. It encompasses paradigms such as supervised, unsupervised, and reinforcement learning, along with powerful techniques such as deep learning and neural networks.

Research Focus:

  • Feature Engineering: Preparing data for optimal learning by extracting relevant features.
  • Model Development & Selection: Designing and choosing the best models for specific tasks.
  • Hyperparameter Tuning & Regularization: Optimizing model parameters and preventing overfitting.
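
As a small illustration of the hyperparameter tuning and regularization item above, here is a hedged sketch (assuming scikit-learn and purely synthetic data) in which a cross-validated grid search selects the regularization strength of a ridge regression model:

    from sklearn.datasets import make_regression
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import GridSearchCV

    # Synthetic regression data stands in for a real dataset.
    X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)

    # Grid search over the ridge penalty (alpha); stronger regularization
    # shrinks coefficients and helps guard against overfitting.
    search = GridSearchCV(Ridge(), param_grid={"alpha": [0.01, 0.1, 1.0, 10.0]}, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)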

Deep Learning

Deep learning, a subset of machine learning, utilizes multi-layered neural networks to tackle complex tasks. Constantly evolving, it excels in processing large volumes of unstructured data like images, audio, and text.

Research Focus:

  • Architecture Design: Crafting neural network structures for specific tasks.
  • Specialized Networks: Utilizing Convolutional Neural Networks (CNNs) for image analysis and Recurrent Neural Networks (RNNs) for sequential data like language.
  • Transfer Learning: Leveraging pre-trained models for faster training on new tasks (see the sketch after this list).
  • Model Compression: Reducing model size for deployment on resource-constrained devices.
  • Beyond Classification: Exploring deep learning for tasks such as unsupervised learning and data generation using autoencoders.
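
The transfer-learning item above can be made concrete with a short sketch. Assuming PyTorch and torchvision, and a hypothetical 10-class target task, an ImageNet-pre-trained network is frozen and only a newly added classification head is trained:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Load a ResNet-18 pre-trained on ImageNet (torchvision >= 0.13 weights API).
    model = models.resnet18(weights="IMAGENET1K_V1")

    # Freeze the pre-trained feature extractor.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the classification head for the hypothetical 10-class task.
    model.fc = nn.Linear(model.fc.in_features, 10)

    # Only the new head is updated, so training converges much faster
    # than learning the whole network from scratch.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)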

Computer Vision

Computer vision enables machines to interpret and understand visual information, similar to human vision. It combines advanced algorithms and deep learning techniques for applications such as autonomous vehicles and augmented reality.

Research Focus:

  • Object Detection & Recognition: Identifying and classifying objects within images and videos.
  • Image Segmentation: Dividing images into distinct regions corresponding to objects or scenes.
  • Image Restoration & Generation: Enhancing or creating new images.
  • Scene Understanding: Extracting higher-level meaning from visual data.
  • 3D Vision & Motion Analysis: Perceiving and understanding the 3D world and analysing object movement.
  • Augmented Reality: Overlaying computer-generated information onto the real world.
  • Vision Transformers: Employing self-attention mechanisms to process images as a sequence of patches, offering a powerful alternative to convolutional neural networks for various computer vision tasks.
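
To illustrate the vision-transformer idea in the last item, this minimal PyTorch sketch turns an image into the sequence of patch embeddings that self-attention layers would then process (the 224-pixel input and 16x16 patches are assumptions matching the common setup, not a full ViT):

    import torch
    import torch.nn as nn

    class PatchEmbedding(nn.Module):
        """Cut an image into non-overlapping patches and project each to a vector."""
        def __init__(self, patch_size=16, in_channels=3, embed_dim=768):
            super().__init__()
            self.proj = nn.Conv2d(in_channels, embed_dim,
                                  kernel_size=patch_size, stride=patch_size)

        def forward(self, x):
            x = self.proj(x)                     # (B, embed_dim, H/16, W/16)
            return x.flatten(2).transpose(1, 2)  # (B, num_patches, embed_dim)

    tokens = PatchEmbedding()(torch.randn(1, 3, 224, 224))
    print(tokens.shape)  # torch.Size([1, 196, 768])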

Natural Language Processing (NLP) & Large Language Models (LLMs)

Natural language processing (NLP) is a subfield of artificial intelligence (AI) and computational linguistics that deals with the interaction between computers and human languages. Its primary goal is to enable computers to understand, interpret, and generate human language in a way that is both meaningful and useful.

Research Focus:

  • Text Classification: Categorizing text data based on content.
  • Named Entity Recognition: Identifying and classifying named entities like people, places, and organizations.
  • Syntactic Analysis: Breaking down text into its grammatical components (part-of-speech tagging) and understanding relationships between words (dependency parsing).
  • NLP Applications: Tasks like machine translation, sentiment analysis, question answering, and text generation.
  • Large Language Models (LLMs) and Fine-tuning: Developing and refining sophisticated language models capable of generating human-quality text, translating languages, writing different kinds of creative content, and answering questions in an informative way. Fine-tuning these models on specific datasets enhances their performance for particular tasks or domains.
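
A couple of the tasks above can be demonstrated in a few lines. Assuming the Hugging Face transformers library (the model choices here are purely illustrative), a small pre-trained language model continues a prompt and an off-the-shelf pipeline performs sentiment analysis:

    from transformers import pipeline

    # Text generation with a small pre-trained language model.
    generator = pipeline("text-generation", model="gpt2")
    print(generator("Natural language processing enables computers to",
                    max_new_tokens=30)[0]["generated_text"])

    # Sentiment analysis, one of the NLP applications listed above
    # (uses the pipeline's default model if none is specified).
    classifier = pipeline("sentiment-analysis")
    print(classifier("The workshop on language models was excellent."))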

Generative AI (GenAI)

Generative AI is a branch of artificial intelligence focused on models that create new content, including text, images, video, and audio, by learning the underlying patterns and structure of their training data. Its primary goal is to produce outputs that are novel, realistic, and useful across creative and practical applications.

Research Focus:

  • Generative Model Architectures: Developing and refining model architectures like Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformer-based models to enhance generative capabilities (see the sketch after this list).
  • Text Generation: Creating human-quality text, including creative writing, code generation, and translation.
  • Image Generation: Producing realistic images, art, and design elements.
  • Video Generation: Generating videos, including animations, video editing, and video synthesis.
  • Audio Generation: Creating music, speech, and sound effects.
  • Multimodal Generation: Developing models that combine or translate between content formats (e.g., text-to-image, image-to-text).
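
As a hedged sketch of the adversarial idea behind the GANs mentioned in the first item (PyTorch, with toy dimensions chosen only for illustration), a generator maps random noise to fake samples while a discriminator learns to tell real from fake:

    import torch
    import torch.nn as nn

    latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images (assumed)

    generator = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, data_dim), nn.Tanh(),
    )
    discriminator = nn.Sequential(
        nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    loss_fn = nn.BCELoss()
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Generator step: try to make the discriminator label fakes as real (target = 1).
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()

A full training loop would alternate this generator step with a discriminator step on batches of real and generated data.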

Explainable AI (XAI)

XAI focuses on building transparent and interpretable AI systems that can explain their reasoning behind decisions and actions. This fosters trust and understanding in AI.

Research Focus:

  • Model Interpretability: Making models understandable by humans.
  • Human-AI Collaboration: Designing AI systems for effective interaction with humans.
  • Explanation Methods: Developing techniques for clear and concise explanations of AI decisions (see the sketch after this list).
  • Addressing Bias & Fairness: Ensuring AI systems are unbiased and fair in their outcomes.
  • Safety & Trust: Building trustworthy and reliable AI systems.
  • Policy & Regulation: Establishing regulations and policies for ethical AI development and deployment.
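
One concrete explanation method is permutation importance, sketched here with scikit-learn on a built-in dataset chosen only for illustration: shuffling a feature and measuring the drop in model accuracy indicates how much the model relies on that feature:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

    # Report the five features the model depends on most.
    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")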

Time Series Analysis

Time series analysis deals with analysing data points collected over time, like stock prices or sensor readings. The goal is to understand patterns, make predictions, and identify anomalies.

Research Focus:

  • Prediction: Forecasting future values based on historical data.
  • Applications: Disease diagnosis, stock market prediction, and anomaly detection in sensor data.
  • Techniques: Time series decomposition, model building, clustering, and causality analysis.
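
The decomposition technique in the last item can be sketched with statsmodels on a synthetic monthly series (the trend, yearly seasonality, and noise are all assumed for illustration):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import seasonal_decompose

    # Synthetic monthly series: upward trend + yearly seasonal cycle + noise.
    idx = pd.date_range("2020-01-01", periods=48, freq="MS")
    rng = np.random.default_rng(0)
    values = np.linspace(10, 20, 48) + 3 * np.sin(2 * np.pi * np.arange(48) / 12)
    series = pd.Series(values + rng.normal(0, 0.5, 48), index=idx)

    # Split the series into trend, seasonal, and residual components.
    result = seasonal_decompose(series, model="additive", period=12)
    print(result.trend.dropna().head())
    print(result.seasonal.head())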

Reinforcement Learning

Reinforcement learning (RL) focuses on training agents to make sequences of decisions by rewarding desired behaviours. It is widely used in scenarios where learning optimal actions through trial and error is essential.

Research Focus:

  • Policy Optimization: Developing methods for improving decision-making policies.
  • Multi-Agent Systems: Studying interactions and strategies in environments with multiple agents.
  • Sim-to-Real Transfer: Bridging the gap between simulation and real-world applications.
  • Hierarchical RL: Creating layered structures for complex decision-making tasks.
  • Exploration vs. Exploitation: Balancing the need to explore new strategies with exploiting known ones for rewards.
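
The exploration-versus-exploitation trade-off in the last item is often introduced with an epsilon-greedy multi-armed bandit, sketched below in plain Python with made-up reward probabilities:

    import random

    true_means = [0.2, 0.5, 0.8]   # hypothetical reward probabilities per arm
    estimates = [0.0] * 3
    counts = [0] * 3
    epsilon = 0.1                  # fraction of steps spent exploring

    for step in range(1000):
        if random.random() < epsilon:
            arm = random.randrange(3)                         # explore a random arm
        else:
            arm = max(range(3), key=lambda a: estimates[a])   # exploit the best estimate
        reward = 1.0 if random.random() < true_means[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean update

    print(estimates)  # estimates approach the true means, favouring the best arm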

Edge AI

Edge AI refers to deploying AI algorithms directly on devices at the edge of the network, such as smartphones, IoT devices, and embedded systems. This approach reduces latency and bandwidth usage by processing data locally.

Research Focus:

  • Model Optimization: Developing lightweight models suitable for edge devices (see the sketch after this list).
  • Energy Efficiency: Creating energy-efficient algorithms for prolonged device operation.
  • Privacy-Preserving AI: Ensuring data privacy by processing sensitive information locally.
  • Real-Time Processing: Enabling real-time decision-making on edge devices.
  • Federated Learning: Implementing distributed learning approaches that allow edge devices to collaboratively train models without sharing raw data.
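
As a small sketch of the model-optimization item above (PyTorch, with a toy network standing in for a real one), dynamic quantization converts linear-layer weights to 8-bit integers so that the model is smaller and cheaper to run on an edge device:

    import torch
    import torch.nn as nn

    # Toy model standing in for a network destined for an edge device.
    model = nn.Sequential(
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, 10),
    )

    # Quantize the weights of all Linear layers to int8.
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
    print(quantized)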