PartSDF: Part-Based Implicit Neural Representation for Composite 3D Shape Parametrization and Optimization
- URL: http://arxiv.org/abs/2502.12985v1
- Date: Tue, 18 Feb 2025 16:08:47 GMT
- Title: PartSDF: Part-Based Implicit Neural Representation for Composite 3D Shape Parametrization and Optimization
- Authors: Nicolas Talabot, Olivier Clerc, Arda Cinar Demirtas, Doruk Oner, Pascal Fua
- Abstract summary: PartSDF is a supervised implicit representation framework that explicitly models composite shapes with independent, controllable parts. PartSDF outperforms both supervised and unsupervised baselines in reconstruction and generation tasks.
- Score: 38.822156749041206
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate 3D shape representation is essential in engineering applications such as design, optimization, and simulation. In practice, engineering workflows require structured, part-aware representations, as objects are inherently designed as assemblies of distinct components. However, most existing methods either model shapes holistically or decompose them without predefined part structures, limiting their applicability in real-world design tasks. We propose PartSDF, a supervised implicit representation framework that explicitly models composite shapes with independent, controllable parts while maintaining shape consistency. Despite its simple single-decoder architecture, PartSDF outperforms both supervised and unsupervised baselines in reconstruction and generation tasks. We further demonstrate its effectiveness as a structured shape prior for engineering applications, enabling precise control over individual components while preserving overall coherence. Code available at https://github.com/cvlab-epfl/PartSDF.
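As a rough illustration of the part-based implicit representation the abstract describes (a single shared decoder conditioned on per-part latent codes, with each part independently controllable), here is a minimal PyTorch sketch. The network sizes, latent dimensions, and the min-union composition are assumptions made for illustration, not PartSDF's actual architecture.

```python
# Minimal sketch of a part-based SDF: one shared decoder conditioned on a
# per-part latent code, with the full shape taken as the union (min) of the
# per-part SDFs. Architecture details are assumptions, not PartSDF's.
import torch
import torch.nn as nn

class PartDecoder(nn.Module):
    def __init__(self, latent_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # signed distance of one part
        )

    def forward(self, part_latents, points):
        # part_latents: (P, latent_dim), points: (N, 3)
        P, N = part_latents.shape[0], points.shape[0]
        z = part_latents[:, None, :].expand(P, N, -1)    # (P, N, D)
        x = points[None, :, :].expand(P, N, -1)          # (P, N, 3)
        sdf_parts = self.net(torch.cat([z, x], dim=-1))  # (P, N, 1)
        return sdf_parts.squeeze(-1)                     # (P, N)

decoder = PartDecoder()
part_latents = torch.randn(4, 64)        # e.g. 4 controllable parts
points = torch.rand(1024, 3) * 2 - 1     # query points in [-1, 1]^3
sdf_parts = decoder(part_latents, points)
sdf_full = sdf_parts.min(dim=0).values   # union of parts = full shape
# Editing one row of part_latents changes only that part's geometry,
# which is the kind of component-level control the abstract refers to.
```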
Related papers
- OmniPart: Part-Aware 3D Generation with Semantic Decoupling and Structural Cohesion [31.767548415448957]
We introduce OmniPart, a novel framework for part-aware 3D object generation.
Our approach supports user-defined part granularity and precise localization, and enables diverse downstream applications.
arXiv Detail & Related papers (2025-07-08T16:46:15Z)
- PRISM: Probabilistic Representation for Integrated Shape Modeling and Generation [79.46526296655776]
PRISM is a novel approach for 3D shape generation that integrates categorical diffusion models with Statistical Shape Models (SSM) and Gaussian Mixture Models (GMM).
Our method employs compositional SSMs to capture part-level geometric variations and uses GMM to represent part semantics in a continuous space.
Our approach significantly outperforms previous methods in both quality and controllability of part-level operations.
arXiv Detail & Related papers (2025-04-06T11:48:08Z)
- Part123: Part-aware 3D Reconstruction from a Single-view Image [54.589723979757515]
Part123 is a novel framework for part-aware 3D reconstruction from a single-view image.
We introduce contrastive learning into a neural rendering framework to learn a part-aware feature space.
A clustering-based algorithm is also developed to automatically derive 3D part segmentation results from the reconstructed models.
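As a generic illustration of that last step, deriving a part segmentation by clustering per-point, part-aware features, the sketch below runs k-means over point features. The feature dimension, the fixed cluster count, and the use of k-means are assumptions for illustration, not Part123's actual algorithm.

```python
# Generic sketch: derive a part segmentation by clustering per-point features.
# The feature dimension, fixed number of parts, and k-means are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def segment_parts(point_features: np.ndarray, num_parts: int) -> np.ndarray:
    """point_features: (N, D) part-aware features for N surface points.
    Returns an (N,) array of part labels in [0, num_parts)."""
    kmeans = KMeans(n_clusters=num_parts, n_init=10, random_state=0)
    return kmeans.fit_predict(point_features)

features = np.random.rand(2048, 32)        # stand-in for learned features
part_labels = segment_parts(features, num_parts=6)
print(np.bincount(part_labels))            # points assigned to each part
```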
arXiv Detail & Related papers (2024-05-27T07:10:21Z)
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
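One common way to realize "2D plane representations" for 3D shape modeling is a triplane-style feature lookup, sketched below purely as an illustration; the plane resolution, summation of plane features, and overall layout are assumptions, not necessarily NeuSDFusion's exact design.

```python
# Illustrative triplane-style lookup: features for a 3D point are gathered by
# projecting it onto the XY, XZ, and YZ planes and sampling each feature plane.
import torch
import torch.nn.functional as F

def sample_plane_features(planes, points):
    # planes: (3, C, H, W) feature planes; points: (N, 3) in [-1, 1]^3
    proj = torch.stack([points[:, [0, 1]],   # XY plane
                        points[:, [0, 2]],   # XZ plane
                        points[:, [1, 2]]])  # YZ plane          (3, N, 2)
    grid = proj.unsqueeze(2)                 # (3, N, 1, 2) for grid_sample
    feats = F.grid_sample(planes, grid, mode="bilinear",
                          align_corners=True)        # (3, C, N, 1)
    return feats.squeeze(-1).sum(dim=0).t()          # (N, C), planes summed

planes = torch.randn(3, 32, 64, 64)
points = torch.rand(1000, 3) * 2 - 1
features = sample_plane_features(planes, points)  # e.g. fed to an SDF decoder
```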
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- DAE-Net: Deforming Auto-Encoder for fine-grained shape co-segmentation [22.538892330541582]
We present an unsupervised 3D shape co-segmentation method which learns a set of deformable part templates from a shape collection.
To accommodate structural variations in the collection, our network composes each shape by a selected subset of template parts which are affine-transformed.
Our network, coined DAE-Net for Deforming Auto-Encoder, can achieve unsupervised 3D shape co-segmentation that yields fine-grained, compact, and meaningful parts.
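The composition mechanism described above, a shape assembled from a selected subset of affine-transformed part templates, can be illustrated with a toy implicit-occupancy sketch; the sphere templates, hard selection mask, and max-union are assumptions for illustration rather than DAE-Net's exact formulation.

```python
# Sketch: compose a shape occupancy from affine-transformed part templates.
# Selected templates are mapped into their local frames and combined by max.
import torch

def sphere_template(points, radius=0.3):
    # Toy stand-in for a learned implicit template: soft sphere occupancy.
    return (radius - points.norm(dim=-1)).clamp(min=0.0)

def compose_shape(points, affines, translations, selection):
    # points: (N, 3) queries; affines: (P, 3, 3); translations: (P, 3);
    # selection: (P,) binary mask choosing which templates form this shape.
    occ = torch.zeros(points.shape[0])
    for A, t, keep in zip(affines, translations, selection):
        if keep < 0.5:
            continue
        local = (points - t) @ torch.linalg.inv(A).T   # into template frame
        occ = torch.maximum(occ, sphere_template(local))  # union of parts
    return occ

points = torch.rand(512, 3) * 2 - 1
affines = torch.eye(3).repeat(4, 1, 1)                 # identity part poses
translations = torch.tensor([[0.5, 0., 0.], [-0.5, 0., 0.],
                             [0., 0.5, 0.], [0., -0.5, 0.]])
selection = torch.tensor([1., 1., 0., 1.])             # this shape uses 3 parts
occupancy = compose_shape(points, affines, translations, selection)
```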
arXiv Detail & Related papers (2023-11-22T03:26:07Z)
- Attention-based Part Assembly for 3D Volumetric Shape Modeling [0.0]
We propose a VoxAttention network architecture for attention-based part assembly.
Experimental results show that our method outperforms most state-of-the-art methods for the part relation-aware 3D shape modeling task.
arXiv Detail & Related papers (2023-04-17T16:53:27Z)
- SPAMs: Structured Implicit Parametric Models [30.19414242608965]
We learn Structured-implicit PArametric Models (SPAMs) as a deformable object representation that structurally decomposes non-rigid object motion into part-based disentangled representations of shape and pose.
Experiments demonstrate that our part-aware shape and pose understanding leads to state-of-the-art performance in reconstruction and tracking of depth sequences of complex deforming object motion.
arXiv Detail & Related papers (2022-01-20T12:33:46Z)
- LSD-StructureNet: Modeling Levels of Structural Detail in 3D Part Hierarchies [5.173975064973631]
We introduce LSD-StructureNet, an augmentation to the StructureNet architecture that enables re-generation of parts.
We evaluate LSD-StructureNet on the PartNet dataset, the largest dataset of 3D shapes represented by hierarchies of parts.
arXiv Detail & Related papers (2021-08-18T15:05:06Z)
- 3D Reconstruction of Novel Object Shapes from Single Images [23.016517962380323]
We show that our proposed SDFNet achieves state-of-the-art performance on seen and unseen shapes.
We provide the first large-scale evaluation of single-image shape reconstruction on unseen objects.
arXiv Detail & Related papers (2020-06-14T00:34:26Z)
- UCLID-Net: Single View Reconstruction in Object Space [60.046383053211215]
We show that building a geometry-preserving 3-dimensional latent space helps the network concurrently learn global shape regularities and local reasoning in the object coordinate space.
We demonstrate, both on ShapeNet synthetic images, which are often used for benchmarking, and on real-world images, that our approach outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-06-06T09:15:56Z)
- Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image [102.44347847154867]
We propose a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives.
Our model recovers the higher-level structural decomposition of various objects in the form of a binary tree of primitives.
Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.
arXiv Detail & Related papers (2020-04-02T17:58:05Z)
- Unsupervised Learning of Intrinsic Structural Representation Points [50.92621061405056]
Learning structures of 3D shapes is a fundamental problem in the field of computer graphics and geometry processing.
We present a simple yet interpretable unsupervised method for learning a new structural representation in the form of 3D structure points.
arXiv Detail & Related papers (2020-03-03T17:40:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.