Motivation

The purpose of this challenge is to improve and evaluate the state of the art in recovering feature edges from 3D scans. Although edge estimation is a fundamental problem in image and shape processing, usually considered a low-level vision task, it is still far from solved for 3D scans. This is particularly true for complex scans with many soft edges and for noisy or sparse acquisitions. In such settings, edge recovery has been limited in the literature by its reliance on handcrafted local surface features. Recently, low-level geometry processing tasks such as normal estimation and denoising have been successfully addressed with data-driven approaches (e.g., PCPNet, 2018). Motivated by these findings, very recent works have attempted to recover feature edges from 3D scans using learning-based methods (e.g., PIE-NET, 2020). This was made possible by the availability of dedicated datasets such as the ABC dataset, which consists of one million Computer-Aided Design (CAD) models. However, these models are provided without corresponding 3D scans, so the dataset has been used for edge recovery by sampling point clouds directly on the CAD models, which does not yield realistic 3D scans.

This challenge uses the recently introduced CC3D dataset [1], which contains more than 50k pairs of CAD models and their corresponding 3D scans. The CAD models were converted to 3D scans using a proprietary virtual 3D scanning pipeline developed by Artec3D. Unlike ShapeNet and ModelNet, the CC3D dataset is a collection of 3D objects unrestricted to any category, with complexity ranging from very basic to highly detailed models. The availability of CAD-3D scan pairs, the high resolution of the scans, and the variability of the models make the CC3D dataset unique. Thanks to the CAD-3D scan pairing, the ground-truth edge annotations available on the CAD models can be transferred to the 3D scans by nearest-neighbor assignment. Participants in this challenge will be given training and validation sets of 3D scans annotated with sharp edges obtained from the CAD models. For evaluation, only the 3D scans will be provided, and the task is to recover sharp edges that closely approximate the ground-truth annotations.
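
For illustration, the nearest-neighbor label transfer mentioned above can be sketched as follows. This is a minimal example, not the official annotation pipeline of the challenge; the array layout, the function name, and the use of SciPy's cKDTree are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_edge_labels(cad_points, cad_edge_labels, scan_vertices):
    """Transfer per-point edge labels from a CAD model to a 3D scan.

    cad_points:      (N, 3) points sampled on the CAD model surface
    cad_edge_labels: (N,) boolean array, True where a point lies on a sharp edge
    scan_vertices:   (M, 3) vertices of the corresponding 3D scan

    Each scan vertex inherits the label of its nearest CAD point
    (hypothetical setup; the actual CC3D annotation format may differ).
    """
    tree = cKDTree(cad_points)                   # spatial index over CAD points
    _, nearest = tree.query(scan_vertices, k=1)  # index of the closest CAD point
    return cad_edge_labels[nearest]              # (M,) labels for the scan
```

In practice, a threshold on the nearest-neighbor distances returned by `tree.query` could additionally be used to leave scan vertices unlabeled when they are far from any annotated CAD point.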

More information can be found here: https://gitlab.uni.lu/cvi2/eccv2020-sharp-workshop/

[1] Cherenkova, K., Aouada, D., & Gusev, G. (2020). PVDeConv: Point-Voxel Deconvolution for Autoencoding CAD Construction in 3D. In 2020 IEEE International Conference on Image Processing (ICIP), pp. 2741-2745. IEEE. doi:10.1109/ICIP40778.2020.9191095

TEST

A dataset of 400 real, high-resolution human scans with high-quality texture: 200 subjects (100 male and 100 female), each captured in two poses, together with their corresponding low-resolution meshes and automatically computed ground-truth correspondences. See the following table.
