SHARP 2023

We invite paper submissions to the 4th edition of the SHARP (Solving CAD History and pArameters Recovery from Point clouds and 3D scans) Workshop, held in conjunction with ICCV 2023 in Paris on October 3rd, 2023. Designed to foster the development of 3D modeling and processing techniques that exploit the geometric, language-based, and parametric representations of 3D scanned data, the SHARP Workshop solicits new methods for recovering the history and parameters of Computer-Aided Design (CAD) models from 3D scanned data. In addition to scan-to-CAD problems, all topics that relate to and serve the goal of data-driven models for 3D shape parametrization, language-based 3D shape modeling/generation, and 3D shape procedural modeling are of interest. Topics of interest include, but are not limited to:

  • CAD history inference from 3D scans
  • CAD generation/segmentation
  • CAD sketch detection from 3D scans
  • Security features on CAD models to protect designers’ intellectual property
  • Parametric inference of geometric primitives on 3D shapes
  • 3D shape parsing and instance segmentation
  • Language-based 3D shape modeling/generation
  • Edge detection and segmentation in 3D shapes
  • Building information models from 3D scanned data


📢📢📢 The paper decisions for the SHARP 2023 workshop are out! Congratulations to the authors of the accepted papers!

The paper submission deadline is extended to the 26th of July, 2023!



Author Guidelines


Paper Submission Guidelines:

  • Submitted papers should be 4 to 8 pages (excluding references). They should present original work that has not been previously published, accepted for publication, or placed under review in any peer-reviewed venue, including journals, conferences, or workshops.
  • All accepted papers will be included in the ICCV 2023 conference proceedings. The papers will be peer-reviewed by experts in the domain.
  • Submitted papers must follow the ICCV paper format and guidelines. Authors are advised to use the official ICCV 2023 author kit.
  • Authors can submit their contributions, including supplementary material, via the submission site.
  • Supplementary material may include multimedia (videos or images) and appendices or technical reports. All supplementary material must be self-contained in a single file for upload (e.g., a .zip or .pdf file).
  • For any enquiries, please contact us at Shapify3D (at)
Program

    Plenary Speakers:

    Prof. Daniel Ritchie

    Short Bio: Daniel Ritchie is the Eliot Horowitz Assistant Professor of Computer Science at Brown University, where he co-leads the Brown Visual Computing group. His research sits at the intersection of computer graphics, artificial intelligence, and machine learning: he builds intelligent machines that understand the visual world and help people be visually creative. Much of his group’s current work focuses on analyzing, synthesizing, and manipulating 3D scenes and the 3D objects that comprise them.

    Abstract: Programs which generate 3D shapes have a variety of applications, from procedural modeling for games and animation to computer-aided design for manufacturing. The ability to infer a generative program for a given 3D shape would be valuable, enabling reverse-engineering and procedural editing of 3D objects found in the wild. Nowadays, machine learning approaches are the dominant paradigm for solving such inference problems. Unfortunately, training data in the form of (3D shape, generative program) pairs rarely exists, making supervised learning challenging or impossible. In this talk, I’ll discuss unsupervised approaches for solving the 3D shape program inference problem, i.e. methods that assume no access to such paired training data.

    Prof. Lourdes Agapito

    Short Bio: Prof. Lourdes Agapito is a Professor of 3D Vision at the Department of Computer Science, University College London (UCL). Her research in Computer Vision has consistently focused on the inference of 3D information from single images or videos acquired from a moving camera. She serves regularly as Area Chair for the top Computer Vision conferences (CVPR, ICCV, ECCV). Her perspective on the next steps in 3D data parsing and representation learning will motivate interesting discussions amongst the SHARP community.

    Abstract: TBA


    Time (CET) | Talk | Speakers
    08:30 – 08:40 | Welcome and Introduction | Djamila Aouada
    08:40 – 09:10 | SHARP Challenge 2023: Overview, Datasets, Metrics, and Baselines | Djamila Aouada, Anis Kacem, Dimitrios Mallis, Elona Dupont (SnT, University of Luxembourg)
    09:10 – 09:25 | Oral Presentation #1: Building CAD Model Reconstruction from Point Clouds via Instance Segmentation, Signed Distance Function, and Graph Cut | Takayuki Shinohara (PASCO)
    09:25 – 09:40 | Oral Presentation #2: 2D Cross-View Object Segmentation and Perceptual Grouping in Computer-Aided Design Drawings | Mohamed Dhia Elhak Besbes (Capgemini)
    09:40 – 10:25 | Invited Talk #1: TBD | Lourdes Agapito (University College London)
    10:25 – 10:45 | Coffee Break / Poster Session |
    10:45 – 11:30 | Invited Talk #2: Unsupervised Approaches for Solving 3D Shape Program Inference | Daniel Ritchie (Brown University)
    11:30 – 11:45 | Oral Presentation #3: Rotation-invariant Hierarchical Segmentation on Poincaré Ball for 3D Point Cloud | Pierre Onghena (Mines ParisTech)
    11:45 – 12:00 | Oral Presentation #4: APNet: Urban-level Scene Segmentation of Aerial Images and Point Clouds | Weijie Wei (University of Amsterdam)
    12:00 – 12:15 | Oral Presentation #5: Fine-Tuned but Zero-Shot 3D Shape Sketch View Similarity and Retrieval | Gianluca Berardi (University of Bologna)
    12:15 – 12:25 | Video Presentation of the Sponsor | Artec3D, Luxembourg
    12:25 | Closing Remarks |


    Djamila Aouada (Chair)

    SnT, University of Luxembourg

    Kseniya Cherenkova

    Artec3D, SnT

    Yulia Gryaditskaya

    Surrey Institute for People Centred AI and CVSSP, University of Surrey

    Sk Aziz Ali

    SnT, University of Luxembourg

    Brojeshwar Bhowmik

    TCS Research

    Joseph Lambourne

    Autodesk AI Lab

    Anis Kacem

    SnT, University of Luxembourg

    Gleb Gusev