Recovery of Feature Edges in 3D Object Scans

In this challenge, the recently introduced CC3D dataset [1] is considered. The CC3D dataset contains 50k+ pairs of CAD models and their corresponding 3D scans. The CAD models were converted to 3D scans using a proprietary virtual 3D scanning pipeline developed by Artec3D. Unlike ShapeNet and ModelNet, the CC3D dataset is a collection of 3D objects not restricted to any particular category, with complexity varying from very basic to highly detailed models. The availability of CAD-3D scan pairings, the high resolution of the scans, and the variability of the models make the CC3D dataset unique.

Participants in this challenge will be given training and validation sets of 3D scans annotated with sharp edges obtained from the CAD models. For evaluation, only the 3D scans will be provided, and the task is to recover sharp edges that closely approximate the ground-truth annotations.
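The official scoring protocol is defined by the organizers; as an illustration only, a common way to quantify how closely recovered edges approximate ground-truth annotations is a symmetric Chamfer distance between points sampled along the predicted and ground-truth edge curves. The sketch below assumes both edge sets are given as (N, 3) arrays of sampled 3D points, which may differ from the challenge's actual evaluation format:

```python
import numpy as np

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between two point sets of shape (N, 3) and (M, 3)."""
    # Pairwise Euclidean distances between predicted and ground-truth points.
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    # Average nearest-neighbour distance in both directions.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Toy check: points sampled along a straight edge.
edge_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(chamfer_distance(edge_pts, edge_pts))  # → 0.0
```

Lower values indicate a closer match; zero means the two sampled edge sets coincide exactly.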

More information can be found here: https://gitlab.uni.lu/cvi2/cvpr2021-sharp-workshop/

[1] Cherenkova, K., Aouada, D., & Gusev, G. (2020, October). PVDeConv: Point-Voxel Deconvolution for Autoencoding CAD Construction in 3D. In 2020 IEEE International Conference on Image Processing (ICIP) (pp. 2741-2745). IEEE. doi: 10.1109/ICIP40778.2020.9191095

TEST

A dataset containing 400 real, high-resolution human scans of 200 subjects (100 males and 100 females in two poses each) with high-quality texture and their corresponding low-resolution meshes, with automatically computed ground-truth correspondences. See the following table.
