Organisation profile

Introduction / mission

The AIMS lab researches and develops AI models for systems equipped with sensors of multiple modalities. We foster expertise in AI analysis of RGB, thermal, depth, LiDAR, acoustic, sonar and radar sensor data. When combined in a sensor suite, these multi-modal sensors often provide capabilities similar to the human five-sense system, which brings the desired full situational awareness. This awareness is vital to our industrial partners in public safety & security, smart cities, defense, critical infrastructure inspection and intelligent transportation.


We conduct our research in close collaboration with the Departments of Mechanical Engineering, Mathematics & Computer Science, and Industrial Engineering & Innovation Sciences at TU/e. Externally, we work together with research institutions such as Reality Labs at Meta, MARIN, Inria, TNO and SIRRIS, as well as with the Universities of Munich, Delft, Maastricht, Liège, Birmingham and Ghent.

Multi-sense Perception for Situational Awareness

Combining sensors of different modalities enables robust perception and high utility in many application areas. A system that can analyze sound sources, detect events by night and day and in rain or fog, and localize objects of interest in 3D space is highly valuable to operators of critical infrastructure, transportation systems, defense and public safety systems.

The objective of the AIMS lab is to explore how multi-modal data can be processed and fused by AI technologies to enable situational awareness in real time. For this, the lab pushes the frontiers in unsupervised machine learning, Vision Language Models (VLMs), 3D scene reconstruction, anomaly analysis and edge AI. Our grand challenges in multi-modal sensor fusion are:


a) automation in spatio-temporal registration of different modality data;
b) distillation and fusion of relevant data from multiple sensor types;
c) detection of anomalies without training data on such anomalies;
d) holistic AI analysis of a 3D area, instead of individual image/signal analysis;
e) enabling explainability in AI models.
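To make challenge (b), the fusion of relevant data from multiple sensor types, more concrete, the sketch below shows one common strategy: late (decision-level) fusion, where per-modality detection scores are combined by a weighted average. The function, modality names and weights are hypothetical illustrations, not the lab's actual method.

```python
def late_fusion(scores_per_modality, weights=None):
    """Fuse per-class detection scores from several sensor modalities
    by a reliability-weighted average (decision-level fusion).

    scores_per_modality: dict mapping modality name -> {class: score}
    weights: optional dict mapping modality -> reliability weight (default 1.0)
    Returns a dict mapping class -> fused score.
    """
    weights = weights or {}
    fused = {}
    total_w = 0.0
    for modality, scores in scores_per_modality.items():
        w = weights.get(modality, 1.0)
        total_w += w
        for cls, score in scores.items():
            fused[cls] = fused.get(cls, 0.0) + w * score
    return {cls: s / total_w for cls, s in fused.items()}

# Example: an RGB camera is fairly confident about a pedestrian,
# and a (down-weighted) thermal camera confirms it.
fused = late_fusion(
    {"rgb": {"pedestrian": 0.9, "vehicle": 0.2},
     "thermal": {"pedestrian": 0.8, "vehicle": 0.1}},
    weights={"rgb": 1.0, "thermal": 0.5},
)
```

In practice such fusion may also happen at the feature level (concatenating learned embeddings) or the raw-data level, and the weights may themselves be learned; this decision-level variant is simply the easiest to illustrate.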

UN Sustainable Development Goals

In 2015, UN member states agreed to 17 global Sustainable Development Goals (SDGs) to end poverty, protect the planet and ensure prosperity for all. Our work contributes towards the following SDG(s):

  1. SDG 7 - Affordable and Clean Energy
  2. SDG 11 - Sustainable Cities and Communities
  3. SDG 16 - Peace, Justice and Strong Institutions

Fingerprint

Dive into the research topics where AI Multi-modal Sensing is active. These topic labels come from the works of this organisation's members. Together they form a unique fingerprint.

Collaborations and top research areas from the last five years

Recent external collaboration at the country/territory level.