Submit your solution to the challenge

All challenges share the same registration deadline. To participate in the SHARP challenges, participants must submit a short accompanying paper describing their proposed solution. Details on the submission of the accompanying papers will be communicated soon. A working implementation of the method is optional but encouraged.

Details on how to submit and evaluate results will be communicated soon on the GitLab page: https://gitlab.uni.lu/cvi2/cvpr2021-sharp-workshop

Submit your paper

Submitted papers should be 4 to 8 pages long (excluding references). They should present original work that has not been previously published, accepted for publication, or placed under review in any peer-reviewed venue, including journals, conferences, and workshops.

All accepted papers will be included in the CVPR 2021 conference proceedings. The papers will be peer-reviewed by experts in the domain.

Submitted papers must follow the CVPR paper format and guidelines provided at: http://cvpr2021.thecvf.com/node/33#submission-guidelines.

Authors are advised to use the official CVPR 2021 author kit, available at http://cvpr2021.thecvf.com/sites/default/files/2020-09/cvpr2021AuthorKit_2.zip.

Authors can submit their contributions, including supplementary material, via the submission site: https://cmt3.research.microsoft.com/SHARP2021.

Supplementary material may include multimedia (videos or images) as well as appendices or technical reports. All supplementary material must be self-contained in a single file for upload (e.g., one .zip or .pdf file).

For any enquiries, please contact us at Shapify3D (at) uni.lu.

TEST

A dataset containing 400 real, high-resolution human scans of 200 subjects (100 males and 100 females in two poses each) with high-quality texture and their corresponding low-resolution meshes, with automatically computed ground-truth correspondences. See the following table.
