CVI²: Computer Vision, Imaging and Machine Intelligence Research Group

Mission:

The Computer Vision, Imaging & Machine Intelligence Research Group (CVI²) at the Interdisciplinary Centre for Security, Reliability and Trust (SnT) of the University of Luxembourg (UL) is headed by Dr. Djamila Aouada. The group carries out research on computer vision, image processing, image analysis, visual data understanding, and machine learning. A strong focus is placed on developing state-of-the-art, innovative algorithms for tasks such as data enhancement, data fusion, filtering, registration, estimation, detection, classification, recognition, segmentation, and deformation, with extensive use and development of deep learning approaches. Data of interest cover modalities ranging from 2D, RGB-D, infrared, and Lidar to dense 3D.

Specific past and current research topics include: human body modelling, covering shape and pose; optimization and compression of very deep neural networks for image classification; cost-sensitive classification for fraud detection; 3D motion analysis for action recognition and action detection; multi-sensor fusion for robust sensing; and face modelling for expression recognition and person identification.

These imaging and vision activities are supported by the SnT Computer Vision Lab, a dedicated, well-equipped laboratory located in Maison du Nombre (MNO), Campus Belval. In addition, the team regularly organizes large-scale data collection campaigns for the needs of specific projects.

All research activities are driven by real-world applications with a focus on one of the following domains: (1) Computer Aided Design (CAD), (2) Healthcare, (3) Satellite and space, (4) Surveillance and security, (5) Services, (6) Education and public outreach.

This website announces the CVI² group’s research activities, news, and openings, and shares collected datasets. Please check our datasets regularly and contact us to request access.

Yours,

CVI² group

TEST

A dataset containing 400 real, high-resolution human scans of 200 subjects (100 males and 100 females in two poses each) with high-quality texture and their corresponding low-resolution meshes, with automatically computed ground-truth correspondences. See the following table.
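As an illustration of how such ground-truth correspondences between a low-resolution mesh and its high-resolution scan might be used, here is a minimal sketch. It assumes a common convention in which each low-resolution vertex is matched to a point on the high-resolution surface via a triangle index and barycentric weights; the actual file format and conventions of the released dataset may differ, and all names below are hypothetical.

```python
import numpy as np

def corresponding_points(hi_verts, hi_faces, face_idx, bary):
    """For each low-resolution vertex, return its matched 3D point on the
    high-resolution surface, given a high-res triangle index and barycentric
    weights (assumed correspondence format; adapt to the actual release)."""
    tris = hi_verts[hi_faces[face_idx]]          # (n, 3, 3): corner coordinates
    return np.einsum('nij,ni->nj', tris, bary)   # barycentric combination

# Tiny synthetic high-resolution mesh: a single triangle.
hi_verts = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
hi_faces = np.array([[0, 1, 2]])

# Two low-res vertices: one matched to the triangle centroid,
# one coinciding with the first corner.
face_idx = np.array([0, 0])
bary = np.array([[1/3, 1/3, 1/3],
                 [1.0, 0.0, 0.0]])

pts = corresponding_points(hi_verts, hi_faces, face_idx, bary)
# pts[0] is the centroid (1/3, 1/3, 0); pts[1] is the corner (0, 0, 0).
```

The barycentric representation is convenient because it pins each low-resolution vertex to a continuous location on the high-resolution surface rather than to a nearest vertex, which is why it is a frequent choice for dense shape correspondence benchmarks.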
