Recovery of Generic Object Scans
A handheld 3D scanner makes it possible to scan almost any type of object, but scanning artifacts typically appear because some parts of an object are physically unreachable during acquisition. The main task of this challenge is to recover a complete 3D scan of a generic object, i.e., to recover the missing parts in both 3D geometry and texture, in unrestricted 3D acquisition settings.
In the scope of this challenge, we use the 3DObjectTex.v1 dataset, a subset of the ViewShape repository, which contains 2000 textured 3D scans of very diverse objects. Each mesh typically has between 10k and 100k faces/vertices, and the corresponding texture atlas typically has a resolution of 4096×4096 pixels, which captures fine details in both the 3D and texture channels.
The dataset is split randomly into training and testing subsets. We also share preprocessing scripts that randomly generate missing parts, producing an unrestricted number of pairs of partial scans X and corresponding ground-truth objects Y. The goal is to recover Y, both the 3D shape and the texture, from any partial scan X.
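To illustrate the idea of generating a partial scan X from a complete object Y, the sketch below removes a contiguous patch of faces around a random seed vertex, simulating an unreachable region. This is a minimal, hypothetical example; the official preprocessing scripts in the challenge repository may use a different procedure, and the function name and parameters here are illustrative only.

```python
import numpy as np

def make_partial_scan(vertices, faces, hole_fraction=0.2, rng=None):
    """Drop the fraction of faces closest to a random seed vertex.

    Hypothetical illustration of partial-scan generation; the official
    challenge scripts may differ. Returns (vertices, remaining_faces).
    """
    rng = np.random.default_rng(rng)
    seed = vertices[rng.integers(len(vertices))]
    # Distance from each face centroid to the randomly chosen seed point.
    centroids = vertices[faces].mean(axis=1)
    dist = np.linalg.norm(centroids - seed, axis=1)
    # Remove the hole_fraction of faces nearest the seed (the "hole").
    n_drop = int(hole_fraction * len(faces))
    keep = np.argsort(dist)[n_drop:]
    return vertices, faces[keep]

# Toy mesh: a unit square made of two triangles.
vertices = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
v, f = make_partial_scan(vertices, faces, hole_fraction=0.5, rng=0)
```

With `hole_fraction=0.5`, half of the faces of the toy mesh are removed; on real scans the same idea applies per-region, and the dropped texture-atlas regions would be masked accordingly.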
Participants are free to employ additional datasets or custom partial-scan generation routines; however, any custom procedure must be reported with a description and its implementation included among the deliverables.
The evaluation will be performed on the complete shapes Y of the test set, which will be shared after the submission deadline.
More information can be found here: https://gitlab.uni.lu/cvi2/cvpr2021-sharp-workshop/