Motivation

The 2nd SHApe Recovery from Partial Textured 3D Scans (SHARP) Workshop and Challenge will be held virtually in conjunction with CVPR on the afternoon of June 25, 2021.

Research on data-driven 3D shape reconstruction has been very active in recent years thanks to the availability of large datasets of 3D models. However, the focus so far has been on recovering geometry only, and very little has been proposed for reconstructing full 3D objects with shape and texture at the same time. The goal of this workshop is to promote methods that exploit both shape and texture in the processing of 3D scans in general, with special attention to the specific task of recovery from partial data.

The workshop hosts a competition on the reconstruction of full high-resolution 3D meshes from partial or noisy 3D scans, comprising three challenges and three datasets. The first challenge involves the recovery of textured 3D human body scans. It uses the 3DBodyTex.v2 dataset, containing 2500 textured 3D scans. This is an extended version of the original 3DBodyTex data (https://cvi2.uni.lu/datasets/), first published at the 2018 International Conference on 3D Vision (3DV 2018). The second challenge involves the recovery of generic object scans from the 3DObjectTex.v2 dataset, a subset of the ViewShape online repository of 3D scans (https://viewshape.com). This dataset contains over 2000 generic objects with varying levels of complexity in texture and geometry. The third challenge focuses on the recovery of fine object details, in the form of sharp edges, from noisy, sparse scans with soft edges. For this purpose, the CC3D dataset, introduced at the 2020 IEEE International Conference on Image Processing (ICIP 2020), will be used. It contains over 50k pairs of CAD models and their corresponding 3D scans.

This is the second edition of SHARP, following the first edition held successfully in conjunction with ECCV 2020 (https://cvi2.uni.lu/sharp2020/).

Sponsor

Artec 3D

Challenges

We propose three challenges, each with the task of reconstructing a full 3D textured mesh from a partial 3D scan. The first challenge targets human bodies, while the second and third challenges target a variety of generic objects. The third challenge introduces a new, unique dataset:

1) Challenge 1: Recovery of Human Body Scans.

The task of this challenge is to reconstruct a full 3D textured mesh from a partial scan of a human body. It uses the 3DBodyTex.v2 dataset, which consists of about 2500 clothed scans with a large diversity in clothing and poses.

By entering Challenge 1, participants agree to be bound by these Terms and Conditions.

2) Challenge 2: Recovery of Generic Object Scans.

This challenge focuses on textured 3D scans of generic objects. It uses the 3DObjectTex.v1 dataset, a subset of the ViewShape repository, containing 2000 textured 3D scans of very diverse objects.

By entering Challenge 2, participants agree to be bound by these Terms and Conditions.

3) Challenge 3: Recovery of Fine Details in 3D Objects.

This challenge focuses on recovering the feature edges of 3D scans. It uses the recently introduced CC3D dataset, which contains over 50k pairs of CAD models and their corresponding 3D scans.

By entering Challenge 3, participants agree to be bound by these Terms and Conditions.

IMPORTANT

A total of €9,000 in cash prizes will be awarded to the winners.

Call for Online Participation

To give more researchers the opportunity to participate in the SHARP 2021 challenges, online participation is now open. All tracks are available, with the same registration deadline. Instead of submitting an accompanying paper, participants will be required to submit their code. Submitting an accompanying paper, to be included in the CVPR proceedings, is still highly encouraged.

Call for Papers

Participants in one or more tracks of the SHARP Challenge are expected to document their results by submitting:

– long papers (max. 8 pages excluding references): for completely original work that has not been previously published, accepted for publication, or placed under review in any peer-reviewed venue, including journals, conferences, and workshops.

– short papers (max. 4 pages including references): for content shared with a paper accepted at CVPR (or any other conference, workshop, or journal).

All accepted papers will be peer-reviewed and included in the CVPR 2021 conference proceedings. They must comply with the CVPR 2021 proceedings style and format.

Important Dates

 

10th Dec 2020 / Paper submission website opens

1st Mar 2021 / Paper submission deadline

1st Apr 2021 / Final decisions to authors

10th Apr 2021 / Camera-ready submission deadline

Organizers

Djamila Aouada

SnT, University of Luxembourg

djamila.aouada@uni.lu

Kseniya Cherenkova

Artec3D, SnT

kcherenkova@artec-group.com

Alexandre Saint

SnT, University of Luxembourg

alexandre.saint@uni.lu

David Fofi

University of Burgundy

david.fofi@u-bourgogne.fr

Gleb Gusev

Artec3D

gleb@artec-group.com

Bjorn Ottersten

SnT, University of Luxembourg

bjorn.ottersten@uni.lu

Anis Kacem

SnT, University of Luxembourg

anis.kacem@uni.lu

Konstantinos Papadopoulos

SnT, University of Luxembourg

konstantinos.papadopoulos@uni.lu

TEST

A dataset containing 400 real, high-resolution human scans of 200 subjects (100 males and 100 females in two poses each) with high-quality texture and their corresponding low-resolution meshes, with automatically computed ground-truth correspondences. See the following table.