The task of this challenge is to reconstruct a full 3D textured mesh from a partial 3D human scan acquired with the Shapify booth of Artec3D. The data is similar in quality to the 3DBodyTex dataset presented at 3DV'18 but was collected following a different protocol. We refer to this new dataset as 3DBodyTex.v2. It is an original dataset that will be shared with the research community after the signing of a data license agreement. It consists of about 2500 clothed scans with a large diversity in clothing and poses, covering 500 different subjects in up to 6 poses each, as well as an extended version of the 3DBodyTex data consisting of over 800 scans of 230 subjects in tight-fitting clothing, with up to 3 poses per subject.
The training data consists of pairs, (X, Y), of partial and complete scans, respectively. The goal is to recover Y from X. As part of the challenge, we share routines to generate partial scans X from the given complete scans Y.
However, the participants are free to devise their own way of generating partial data, as an augmentation or a complete alternative.
Any custom procedure should be reported, with a description and implementation, among the deliverables.
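As an illustration, one simple way to synthesise partial scans X from complete scans Y is to cut random holes out of the surface. The sketch below is a hypothetical example, not the provided script: `make_grid_mesh` is a toy stand-in for a real scan, and `cut_holes` removes spherical patches around random seed vertices.

```python
import numpy as np

def make_grid_mesh(n=20):
    """Toy planar triangle mesh standing in for a full-body scan (hypothetical helper)."""
    xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    verts = np.stack([xs.ravel(), ys.ravel(), np.zeros(n * n)], axis=1)
    faces = []
    for i in range(n - 1):
        for j in range(n - 1):
            a = i * n + j
            faces.append([a, a + 1, a + n])
            faces.append([a + 1, a + n + 1, a + n])
    return verts, np.array(faces)

def cut_holes(verts, faces, n_holes=3, radius=0.15, rng=None):
    """Remove spherical patches of surface around random seed vertices,
    one possible way of synthesising a partial scan X from a complete scan Y."""
    rng = np.random.default_rng(rng)
    keep = np.ones(len(verts), dtype=bool)
    for seed in rng.choice(len(verts), size=n_holes, replace=False):
        dist = np.linalg.norm(verts - verts[seed], axis=1)
        keep &= dist > radius
    # Keep only faces whose three vertices all survive, then reindex them.
    face_mask = keep[faces].all(axis=1)
    new_index = np.cumsum(keep) - 1
    new_faces = new_index[faces[face_mask]]
    return verts[keep], new_faces

verts, faces = make_grid_mesh()
partial_verts, partial_faces = cut_holes(verts, faces, rng=0)
```

A custom generation procedure of this kind could be used for data augmentation alongside, or instead of, the provided routines, as long as it is documented in the deliverables.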
There are two tracks with different levels of difficulty, using data prepared differently as detailed below. In both tracks, the datasets are split into a training set and a test set. The complete shapes (Y) of the test set are held secret for evaluation purposes.
- Track 1: Recovery of Large Regions:
In this track, the goal is to accurately reconstruct the clothing and the texture. The ground-truth shapes Y are raw full-body scans from 3DBodyTex.v2.
A quality check is performed to guarantee a limited level of defects.
The partial scans are generated synthetically.
For privacy reasons, all meshes are anonymized by blurring the shape and texture of the faces, similarly to the 3DBodyTex data.
During evaluation, the face and hands are ignored because the shape of these regions in raw scans is less reliable.
- Track 2: Recovery of Fine Details:
In this track, the goal is to reconstruct finer body details such as the ears, fingers and nose. Synthetic data is generated to create accurate ground truth, with the tight-fitting clothing data from 3DBodyTex.v2 used as the textured reference. A body model (SMPL, SMPL-X) is fitted to these raw body scans. The face of the fitted model is further randomized for privacy. The texture of the scans is transferred onto the fitted models to produce the ground-truth textured shapes, Y.
The scanning process of the Shapify booth is simulated in software to generate realistic complete scan acquisitions, Ys. Partial scans, X, are then generated from Ys using the provided script.
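The texture-transfer step from scan to fitted model can be pictured, in its simplest form, as a nearest-neighbour colour assignment. The sketch below is a naive stand-in under the assumption of per-vertex colours; the actual pipeline presumably transfers full texture maps and is not specified here.

```python
import numpy as np

def transfer_vertex_colors(scan_verts, scan_colors, model_verts):
    """Assign each fitted-model vertex the colour of its nearest scan vertex.
    Illustrative assumption: colours live on vertices, not in a texture atlas."""
    # Brute-force nearest neighbour; fine for a sketch, too slow at scan scale.
    d = np.linalg.norm(model_verts[:, None, :] - scan_verts[None, :, :], axis=2)
    return scan_colors[d.argmin(axis=1)]
```

A real implementation would use a spatial index (e.g. a k-d tree) and interpolate over the nearest triangle rather than snapping to a single vertex, but the principle of pulling appearance from the closest scan surface is the same.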
More information can be found here: https://gitlab.uni.lu/cvi2/eccv2020-sharp-workshop/