Title: 3D Shape Modelling
Funding source: ARTEC 3D, AFR-PPP
Principal investigators: Prof. Björn Ottersten, Dr. Djamila Aouada
Researchers: Alexandre Saint, Kseniya Cherenkova, Dr. Anis Kacem
Starting date / Duration: 15/05/2016 – 48 months


Nowadays, 3D scanners are mainstream. One of their important applications is 3D scanning of the full human body for use in industries such as healthcare, textile, fashion, gaming, entertainment, ergonomics, sport, fitness, anthropology, forensic analysis and security. A scan of a human body produces a point cloud ranging from thousands to millions of points, so efficient and expressive methods are necessary to handle and manipulate this large amount of data. In many practical situations, the body shape is partially or completely occluded by clothing, which makes it harder to estimate with existing models. Estimating it is nevertheless highly desirable because it provides a non-invasive means of measuring and analysing the body. This is particularly convenient for healthcare patients, customers of (online) clothing shops, security applications, etc. This is the focus of this research project.

The research investigates novel ways of analysing 3D scans of subjects under clothing, with an emphasis on accuracy. New ways of constraining the body shape are explored: directly from the data and from other modalities. To achieve this goal, innovative mathematical models are devised. The mathematical models and tools developed are transferable to related applications in shape modelling and computer vision, and bring fresh perspectives on solving similar problems. In addition, the project produces new datasets of people under clothing. By automating the registration procedure during the pre-processing of the data, the project eases the task of data preparation. Overall, the project shall strongly contribute novel perspectives in 3D shape analysis.
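A core building block of the registration automated in this project is aligning a scan with a body model. As a minimal sketch (not the project's actual method), the rigid part of such an alignment can be solved in closed form with the Kabsch algorithm, given point correspondences:

```python
import numpy as np

def rigid_align(source, target):
    """Closed-form rigid alignment (Kabsch algorithm).

    Given two (N, 3) point sets in known correspondence, returns the
    rotation R and translation t minimising ||R @ s + t - target||
    in the least-squares sense. This is only the rigid step; body
    model fitting additionally handles pose and shape deformation.
    """
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    # Cross-covariance of the centred point sets.
    A = (source - mu_s).T @ (target - mu_t)
    U, _, Vt = np.linalg.svd(A)
    # Flip the last axis if needed to avoid a reflection.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Usage: recover a known rotation and translation from noiseless points.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
tgt = src @ R_true.T + t_true
R, t = rigid_align(src, tgt)
```

In practice, correspondences between a raw scan and a template are unknown, so pipelines such as the one automated here iterate between estimating correspondences and solving alignment steps like this one.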

Publications

BODYFITR: Robust Automatic 3D Human Body Fitting
Saint, Alexandre Fabian A; Shabayek, Abd El Rahman; Cherenkova, Kseniya; Gusev, Gleb; Aouada, Djamila; Ottersten, Björn
in Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP) (2019, September 22)

3DBodyTex: Textured 3D Body Dataset
Saint, Alexandre Fabian A; Ahmed, Eman; Shabayek, Abd El Rahman; Cherenkova, Kseniya; Gusev, Gleb; Aouada, Djamila; Ottersten, Björn
in 2018 Sixth International Conference on 3D Vision (3DV 2018) (2018)

Towards Automatic Human Body Model Fitting to a 3D Scan
Saint, Alexandre Fabian A; Shabayek, Abd El Rahman; Aouada, Djamila; Ottersten, Björn; Cherenkova, Kseniya; Gusev, Gleb
in D’APUZZO, Nicola (Ed.) Proceedings of 3DBODY.TECH 2017 – 8th International Conference and Exhibition on 3D Body Scanning and Processing Technologies, Montreal QC, Canada, 11-12 Oct. 2017 (2017, October)

Deformation Transfer of 3D Human Shapes and Poses on Manifolds
Shabayek, Abd El Rahman; Aouada, Djamila; Saint, Alexandre Fabian A; Ottersten, Björn
in IEEE International Conference on Image Processing, Beijing, 17-20 September 2017 (2017)

Dataset

A dataset containing 400 real, high-resolution human scans: 200 subjects (100 male and 100 female), each in two poses, with high-quality texture, their corresponding low-resolution meshes, and automatically computed ground-truth correspondences. See the following table.
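The dataset's composition (200 subjects, two poses each) can be indexed with a few lines of code. The naming scheme below is purely illustrative, not the dataset's actual file layout:

```python
# Hypothetical index of a 3DBodyTex-style dataset: 200 subjects
# (100 male, 100 female), each scanned in two poses.
genders = ["male"] * 100 + ["female"] * 100
poses = ["pose_a", "pose_b"]  # illustrative pose labels

# One record per scan: 200 subjects x 2 poses = 400 scans.
scans = [
    {"subject": f"{gender}_{i:03d}", "pose": pose}
    for i, gender in enumerate(genders)
    for pose in poses
]

print(len(scans))  # 400
```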
