PICASSO: A Feed-Forward Framework for Parametric Inference of CAD Sketches via Rendering Self-Supervision

1SnT, University of Luxembourg 2Artec3D
Embedded image

PICASSO is a framework that enables CAD sketch parameterization via rendering self-supervision.

Introduction

In this work, we introduce a framework for Parametric Inference of CAD Sketches via Rendering Self-SupervisiOn, referred to as PICASSO. PICASSO enables learning parametric CAD sketches directly from precise or hand-drawn images, even when parameter-level annotations are limited or unavailable. This is achieved by utilizing the geometric appearance of sketches as a learning signal to pretrain a CAD parameterization network.

PICASSO is composed of two main components: (1) a Sketch Parameterization Network (SPN) that predicts a set of parametric primitives from CAD sketch images, and (2) a Sketch Rendering Network (SRN) that renders parametric CAD sketches in a differentiable manner. SRN enables the computation of an image-to-image loss that can be used to pretrain SPN, enabling zero- and few-shot learning for hand-drawn sketch parameterization. To the best of our knowledge, we are the first to address CAD sketch parameterization with limited or no parametric annotations. PICASSO achieves strong parameterization performance with only a small number of annotated samples.
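The overall pretraining idea can be sketched in a few lines of PyTorch. The module definitions below are illustrative stand-ins with assumed shapes, not the actual SPN/SRN architectures; the point is the training signal: the frozen renderer turns predicted parameters back into an image, so an image-to-image loss supervises the parameterization network without any parameter-level labels.

```python
# Hedged sketch of rendering self-supervision. Class names, layer sizes,
# and the 64x64 image resolution are illustrative assumptions.
import torch
import torch.nn as nn

class SketchParameterizationNetwork(nn.Module):
    """Stand-in SPN: raster sketch image -> parametric primitive tokens."""
    def __init__(self, num_primitives=16, param_dim=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_primitives * param_dim)
        self.num_primitives, self.param_dim = num_primitives, param_dim

    def forward(self, img):
        feats = self.backbone(img)
        return self.head(feats).view(-1, self.num_primitives, self.param_dim)

class SketchRenderingNetwork(nn.Module):
    """Stand-in SRN: primitive tokens -> raster image (differentiable)."""
    def __init__(self, num_primitives=16, param_dim=8, img_size=64):
        super().__init__()
        self.decoder = nn.Linear(num_primitives * param_dim, img_size ** 2)
        self.img_size = img_size

    def forward(self, params):
        flat = params.flatten(1)
        out = torch.sigmoid(self.decoder(flat))
        return out.view(-1, 1, self.img_size, self.img_size)

# One self-supervised pretraining step: no parameter-level labels needed.
spn, srn = SketchParameterizationNetwork(), SketchRenderingNetwork()
for p in srn.parameters():            # SRN is pre-trained and kept frozen
    p.requires_grad_(False)
opt = torch.optim.Adam(spn.parameters(), lr=1e-4)

img = torch.rand(4, 1, 64, 64)        # batch of raster sketches
params = spn(img)                     # predicted primitive parameters
rendered = srn(params)                # differentiable re-rendering
loss = nn.functional.mse_loss(rendered, img)   # image-to-image loss
loss.backward()                       # gradients flow through frozen SRN
opt.step()
```

Because the renderer is differentiable, gradients of the image-to-image loss flow through it into SPN, which is what makes pretraining possible on unannotated sketch images.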

Method

The Sketch Rendering Network is a transformer encoder-decoder that learns a mapping from parametric primitive tokens to the sketch image domain. Through neural differentiable rendering, SRN allows the computation of an image-to-image loss between predicted raster sketches and input precise or hand-drawn sketches. The proposed SRN can be trained entirely on synthetically generated sketches.
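A minimal version of such a token-to-image transformer can be sketched as follows. All dimensions (embedding size, patch size, primitive count) are assumptions for illustration; the encoder attends over primitive tokens and learned patch queries decode into pixel patches.

```python
# Illustrative SRN-style transformer: primitive tokens in, raster image out.
# Architecture details here are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class SRNSketch(nn.Module):
    """Primitive-token encoder + patch-query decoder -> raster sketch."""
    def __init__(self, param_dim=8, d_model=64, img_size=64, patch=8):
        super().__init__()
        self.embed = nn.Linear(param_dim, d_model)        # tokenize primitives
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4, num_encoder_layers=2,
            num_decoder_layers=2, batch_first=True)
        n_patches = (img_size // patch) ** 2
        self.queries = nn.Parameter(torch.randn(n_patches, d_model))
        self.to_pixels = nn.Linear(d_model, patch * patch)  # patch -> pixels
        self.img_size, self.patch = img_size, patch

    def forward(self, prims):                 # prims: (B, N, param_dim)
        B = prims.size(0)
        tokens = self.embed(prims)            # encode primitive parameters
        q = self.queries.unsqueeze(0).expand(B, -1, -1)
        out = self.transformer(tokens, q)     # (B, n_patches, d_model)
        patches = torch.sigmoid(self.to_pixels(out))
        side = self.img_size // self.patch
        img = patches.view(B, side, side, self.patch, self.patch)
        # reassemble the patch grid into a full image
        img = img.permute(0, 1, 3, 2, 4).reshape(B, 1, self.img_size, self.img_size)
        return img

srn = SRNSketch()
img = srn(torch.rand(2, 16, 8))   # 2 sketches, 16 primitives, 8 params each
```

Since every operation here is differentiable, a pixel-space loss against a target sketch back-propagates all the way to the input primitive parameters.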

Embedded image

Architecture of the Sketch Rendering Network (SRN).

The Sketch Parameterization Network processes an input raster sketch image with a convolutional backbone to produce a feature map, which is then fed to a transformer encoder-decoder for sketch parameterization. SPN is pre-trained using rendering self-supervision provided by SRN, allowing zero-shot CAD sketch parameterization, and finetuned with parameter-level annotations for the few-shot scenario.
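The CNN-plus-transformer pipeline described above can be sketched as follows; the channel sizes, primitive count, and query scheme are illustrative assumptions.

```python
# Illustrative SPN-style model: CNN feature map -> transformer -> primitives.
# Sizes are assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn

class SPNSketch(nn.Module):
    """CNN backbone -> feature-map tokens -> transformer -> primitive params."""
    def __init__(self, num_primitives=16, param_dim=8, d_model=64):
        super().__init__()
        self.backbone = nn.Sequential(        # (B,1,64,64) -> (B,d_model,8,8)
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4, num_encoder_layers=2,
            num_decoder_layers=2, batch_first=True)
        # one learned query per predicted primitive
        self.queries = nn.Parameter(torch.randn(num_primitives, d_model))
        self.param_head = nn.Linear(d_model, param_dim)

    def forward(self, img):
        f = self.backbone(img)                # (B, d_model, H', W')
        tokens = f.flatten(2).transpose(1, 2) # (B, H'*W', d_model)
        q = self.queries.unsqueeze(0).expand(img.size(0), -1, -1)
        out = self.transformer(tokens, q)     # (B, num_primitives, d_model)
        return self.param_head(out)           # per-primitive parameters

spn = SPNSketch()
params = spn(torch.rand(2, 1, 64, 64))
```

The same network weights are used in both regimes: pre-trained with the rendering loss only (zero-shot), then optionally finetuned on whatever parameter-level annotations are available (few-shot).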

Embedded image

Architecture of the Sketch Parameterization Network (SPN).

Effectiveness of PICASSO

Few-shot CAD Parameterization

For the few-shot setting, we first pre-train PICASSO with rendering self-supervision on CAD sketch images from the SketchGraphs dataset. The pre-trained model is subsequently finetuned with parametric supervision on smaller sets of sketches. The pre-trained PICASSO (w/ pt.) is compared to its from-scratch counterpart (w/o pt.). Overall, pre-training outperforms learning from scratch across finetuning datasets of different sizes, for both precise and hand-drawn images.

Embedded image

Effectiveness of rendering self-supervision on a few-shot evaluation.

Embedded image

Qualitative few-shot results of PICASSO learned CAD sketch parameterization.

Zero-shot CAD Parameterization

By leveraging rendering self-supervision, PICASSO can estimate the parameters of sketches directly without requiring parametric supervision. We evaluate the performance of PICASSO on the challenging zero-shot CAD sketch parameterization scenario.

Embedded image

Zero-shot parameterization results of PICASSO.

Test-time Optimization

We compare the proposed SRN to the differentiable renderer DiffVG in a test-time optimization setting. In particular, rendering self-supervision from both SRN and DiffVG is used to refine, at test time, the CAD parameterization produced by a parametrically supervised SPN. SRN-based test-time optimization surpasses its DiffVG-based counterpart.
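The refinement loop itself is simple: treat the network's predicted parameters as free variables and descend on the rendering loss against the input image. The toy `render` function below (soft Gaussian dots for 2D point primitives) is a stand-in assumption for the trained SRN, used only to make the loop runnable.

```python
# Hedged sketch of test-time optimization through a differentiable renderer.
# `render` is a toy stand-in for the trained SRN, not the actual model.
import torch

def render(params, size=64):
    """Toy differentiable 'renderer': one soft dot per 2D point primitive."""
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, size), torch.linspace(0, 1, size), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1)                    # (H, W, 2)
    d2 = ((grid[None, None] - params[:, :, None, None, :]) ** 2).sum(-1)
    return torch.sigmoid(5.0 - 500.0 * d2).amax(dim=1)      # (B, H, W)

target = render(torch.tensor([[[0.3, 0.3], [0.7, 0.6]]]))   # input sketch
# imperfect initial prediction (playing the role of SPN's output)
params = torch.tensor([[[0.4, 0.4], [0.6, 0.5]]], requires_grad=True)
opt = torch.optim.Adam([params], lr=1e-2)

init_loss = torch.nn.functional.mse_loss(render(params), target).item()
for _ in range(200):                  # refine the parameters at test time
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(render(params), target)
    loss.backward()
    opt.step()
final_loss = loss.item()
```

The same loop applies with any differentiable renderer in place of `render`, which is exactly what the SRN-vs-DiffVG comparison varies.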

Embedded image

Test-time optimization results of PICASSO.

Acknowledgements

The present work is supported by the National Research Fund (FNR), Luxembourg under the BRIDGES2021/IS/16849599/FREE-3D project and Artec3D.

Embedded image

BibTeX

@misc{karadeniz2024picasso,
  title={PICASSO: A Feed-Forward Framework for Parametric Inference of CAD Sketches via Rendering Self-Supervision},
  author={Ahmet Serdar Karadeniz and Dimitrios Mallis and Nesryne Mejri and Kseniya Cherenkova and Anis Kacem and Djamila Aouada},
  year={2024},
  eprint={2407.13394},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2407.13394}
}