Motivation

Two submission options are available:

– Option 1: Paper (required: a paper of at least 6 pages describing the method; optional: working source code).

– Option 2: Code (required: a working implementation of the method as source code; optional: an accompanying paper of at least 4 pages).

Submitted papers must follow the CVPR paper format and guidelines provided at: http://cvpr2021.thecvf.com/node/33#submission-guidelines.

Authors are advised to use the official CVPR 2021 author kit, which can be found at http://cvpr2021.thecvf.com/sites/default/files/2020-09/cvpr2021AuthorKit_2.zip.

Authors can submit their contributions, including supplementary material, via the submission site: https://cmt3.research.microsoft.com/CVPR2021.

Supplementary material may include multimedia (videos or images) and appendices or technical reports. All supplementary material must be submitted as a single self-contained file (e.g., a .zip or .pdf file).

For any enquiries, please contact us at Shapify3D (at) uni.lu.

Dataset

A dataset of 400 real, high-resolution human scans of 200 subjects (100 male and 100 female, each in two poses) with high-quality texture, together with their corresponding low-resolution meshes and automatically computed ground-truth correspondences. See the following table.

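For orientation, the sketch below shows one way a participant might load a high-resolution scan, its low-resolution counterpart, and the ground-truth correspondences, then compute a simple per-vertex distance. The file names, mesh format (OBJ), and the per-vertex index layout of the correspondence file are illustrative assumptions only; they are not a specification of the actual dataset layout.

```python
import numpy as np
import trimesh  # common Python library for loading triangle meshes

# Hypothetical file names -- the real dataset layout may differ.
hires = trimesh.load("subject_042_pose_1_hires.obj", process=False)
lores = trimesh.load("subject_042_pose_1_lores.obj", process=False)

# Assumption: one low-resolution vertex index per high-resolution vertex,
# stored as a plain-text column of integers (the ground-truth correspondence).
corr = np.loadtxt("subject_042_pose_1_corr.txt", dtype=np.int64)

# Basic sanity checks on the assumed layout.
assert corr.shape[0] == hires.vertices.shape[0]
assert corr.max() < lores.vertices.shape[0]

# Euclidean distance between each high-res vertex and its corresponding
# low-res vertex, e.g. as a rough check of the correspondences.
dist = np.linalg.norm(hires.vertices - lores.vertices[corr], axis=1)
print(f"mean correspondence distance: {dist.mean():.4f}")
```

This is only a minimal sketch under the stated assumptions; participants should follow the data description distributed with the challenge for the authoritative formats and naming.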