Important announcements

The Workshop and Challenge will be held virtually in conjunction with CVPR on June 25, 2021 (4:00 p.m. – 8:30 p.m. CET)

We will be live streaming the workshop on YouTube.

Join us on the live stream at 10:00 EDT (16:00 CET)

The paper submission deadline is extended to March 21.
The registration deadline for the challenges is approaching. Registration for all challenges will remain open until March 1 (extended from February 22).

The 2nd SHApe Recovery from Partial textured 3D scans (SHARP) Workshop and Challenge will be held virtually in conjunction with CVPR on June 25, 2021 (4:00 p.m. – 8:30 p.m. CET). Research on data-driven 3D reconstruction of shapes has been very active in recent years thanks to the availability of large datasets of 3D models. However, the focus so far has been on recovering geometry only, and little has been proposed for reconstructing full 3D objects with both shape and texture.

The goal of this workshop is to promote concepts that exploit both shape and texture in the processing of 3D scans in general, with special attention to the specific task of recovery from partial and noisy data.

This workshop will host a paper submission track and a competition:

  • The paper submission track encourages participants to submit novel contributions on data-driven shape and texture processing. A call for papers specifying the topics of interest can be found below.
  • The competition focuses on the reconstruction of full high-resolution 3D meshes from partial or noisy 3D scans and includes 3 challenges and 3 datasets:
    • The first challenge involves the recovery of textured 3D human body scans. The dataset used for this challenge is 3DBodyTex.v2, containing 2,500 textured 3D scans. It is an extended version of the original 3DBodyTex dataset, first published at the 2018 International Conference on 3D Vision (3DV 2018).
    • The second challenge involves the recovery of generic object scans from the 3DObjectTex.v1 dataset, a subset of the ViewShape online repository of 3D scans. This dataset contains over 2,000 generic objects with varying levels of complexity in texture and geometry.
    • The third challenge focuses on the recovery of fine object details in the form of sharp edges from noisy sparse scans with soft edges. The CC3D dataset, introduced at the 2020 IEEE International Conference on Image Processing (ICIP), will be used for this purpose. It contains over 50k pairs of CAD models and their corresponding 3D scans.

This is the second edition of SHARP, after the first one held successfully in conjunction with ECCV 2020.

Sponsor

Call for Participation (Challenges)

We propose three challenges. The task of each challenge is to reconstruct a full textured 3D mesh from a partial 3D scan. The first challenge targets human bodies, while the second and third challenges target a variety of generic objects. The third challenge introduces a unique new dataset.

🏆 A total of €9,000 will be awarded in cash prizes to the winners.

Challenge 1: Recovery of Human Body Scans


The task of this challenge is to reconstruct a full textured 3D mesh from a partial 3D scan of a human body. It uses the 3DBodyTex.v2 dataset, which consists of about 2,500 clothed scans with a large diversity of clothing and poses.

By entering Challenge 1, participants agree to be bound by these Terms and Conditions.

Challenge 2: Recovery of Generic Object Scans


This challenge focuses on textured 3D scans of generic objects. It uses the 3DObjectTex.v1 dataset, a subset of the ViewShape repository, containing over 2,000 textured 3D scans of very diverse objects.

By entering Challenge 2, participants agree to be bound by these Terms and Conditions.

Challenge 3: Recovery of Feature Edges in 3D Object Scans


This challenge focuses on recovering the feature edges of 3D scans. It uses the recently introduced CC3D dataset, which contains over 50k pairs of CAD models and their corresponding 3D scans.

By entering Challenge 3, participants agree to be bound by these Terms and Conditions.

All challenges share the same registration deadline. To participate in the SHARP challenges, participants must submit a short accompanying paper describing their proposed solution. Details on the submission of accompanying papers will be communicated soon. A working implementation of the method is optional but encouraged.

Call for Papers (Paper Submission Track)

The main focus of SHARP is to encourage paper submissions on high-resolution 3D shape and texture recovery from partial data, especially as accompanying papers to the challenge submissions. In addition, all topics that relate to and serve the goal of data-driven shape and texture processing are of interest. This includes original contributions at different levels of data processing and for different industrial applications, as well as proposals for new evaluation metrics and relevant original datasets. Topics of interest include, but are not limited to:

  • Textured 3D data representation and evaluation
  • Textured 3D scan feature extraction
  • Generative modelling of textured 3D scans
  • Learning-based 3D reconstruction
  • Joint texture and shape matching
  • Joint texture and shape completion
  • Semantic 3D data reconstruction
  • Effective 3D and 2D data fusion
  • Textured 3D data refinement
  • 3D feature edge detection and refinement
  • High-level representations of 3D data
  • CAD modeling from unstructured 3D data

Authors are encouraged to submit their contributions via the SHARP 2021 submission site. All accepted papers will be included in the CVPR 2021 conference proceedings. The papers will be peer-reviewed and must comply with the CVPR 2021 proceedings style and format. More details about the submission format can be found on the submission page.

Important Dates

Organizers

Djamila Aouada

Chair

SnT, University of Luxembourg

djamila.aouada@uni.lu

Kseniya Cherenkova

Co-Chair

Artec3D, SnT

kcherenkova@artec-group.com

Alexandre Saint

SnT, University of Luxembourg

alexandre.saint@uni.lu

David Fofi

University of Burgundy

david.fofi@u-bourgogne.fr

Gleb Gusev

Artec3D

gleb@artec-group.com

Bjorn Ottersten

SnT, University of Luxembourg

bjorn.ottersten@uni.lu

Anis Kacem

SnT, University of Luxembourg

anis.kacem@uni.lu

Konstantinos Papadopoulos

SnT, University of Luxembourg

konstantinos.papadopoulos@uni.lu

A dataset containing 400 real, high-resolution human scans of 200 subjects (100 males and 100 females in two poses each) with high-quality texture and their corresponding low-resolution meshes, with automatically computed ground-truth correspondences. See the following table.