Parameterize Structure with Differentiable Template for 3D Shape Generation
- URL: http://arxiv.org/abs/2410.10399v2
- Date: Tue, 15 Oct 2024 06:42:36 GMT
- Title: Parameterize Structure with Differentiable Template for 3D Shape Generation
- Authors: Changfeng Ma, Pengxiao Guo, Shuangyu Yang, Yinuo Chen, Jie Guo, Chongjun Wang, Yanwen Guo, Wenping Wang
- Abstract summary: Recent 3D shape generation works employ complicated networks and structure definitions.
We propose a method that parameterizes the shared structure in the same category using a differentiable template.
Our method can reconstruct or generate diverse shapes with complicated details, and interpolate them smoothly.
- Score: 39.414253821696846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Structural representation is crucial for reconstructing and generating editable 3D shapes with part semantics. Recent 3D shape generation works employ complicated networks and structure definitions that rely on hierarchical annotations, and pay less attention to the details inside parts. In this paper, we propose a method that parameterizes the shared structure of a category using a differentiable template and corresponding fixed-length parameters. Specific parameters are fed into the template to calculate cuboids that indicate a concrete shape. We utilize the boundaries of the three-view drawings of each cuboid to further describe the details inside it. Shapes are represented by the parameters and the three-view details inside the cuboids, from which the SDF can be calculated to recover the object. Benefiting from our fixed-length parameters and three-view details, our networks for reconstruction and generation are simple and effective at learning the latent space. Our method can reconstruct or generate diverse shapes with complicated details, and interpolate between them smoothly. Extensive evaluations demonstrate the superiority of our method on reconstruction from point clouds, generation, and interpolation.
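The pipeline the abstract describes (a fixed-length parameter vector decoded into cuboids, from which an SDF recovers the shape) can be illustrated with a minimal sketch. This is not the paper's implementation: the parameter layout (3 center + 3 half-extent values per cuboid), the function names, and the use of axis-aligned boxes are all simplifying assumptions, and the three-view detail refinement inside each cuboid is omitted.

```python
import numpy as np

def cuboid_sdf(points, center, half_size):
    # Signed distance from each query point to an axis-aligned cuboid:
    # negative inside, positive outside.
    q = np.abs(points - center) - half_size
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=-1)
    inside = np.minimum(np.max(q, axis=-1), 0.0)
    return outside + inside

def shape_sdf(points, params):
    # params: a fixed-length vector reshaped into (num_cuboids, 6),
    # where each row holds a cuboid's center (3) and half-extents (3).
    cuboids = params.reshape(-1, 6)
    dists = np.stack([cuboid_sdf(points, c[:3], np.abs(c[3:]))
                      for c in cuboids])
    # The shape is the union of its cuboids: take the pointwise minimum.
    return dists.min(axis=0)
```

In this toy setting, a generator only has to predict the fixed-length `params` vector; the SDF of the assembled shape follows deterministically, which is what keeps the reconstruction and generation networks simple.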
Related papers
- Part123: Part-aware 3D Reconstruction from a Single-view Image [54.589723979757515]
Part123 is a novel framework for part-aware 3D reconstruction from a single-view image.
We introduce contrastive learning into a neural rendering framework to learn a part-aware feature space.
A clustering-based algorithm is also developed to automatically derive 3D part segmentation results from the reconstructed models.
arXiv Detail & Related papers (2024-05-27T07:10:21Z)
- StructRe: Rewriting for Structured Shape Modeling [63.792684115318906]
We present StructRe, a structure rewriting system, as a novel approach to structured shape modeling.
Given a 3D object represented by points and components, StructRe can rewrite it upward into more concise structures, or downward into more detailed structures.
arXiv Detail & Related papers (2023-11-29T10:35:00Z)
- DPF-Net: Combining Explicit Shape Priors in Deformable Primitive Field for Unsupervised Structural Reconstruction of 3D Objects [12.713770164154461]
We present a novel unsupervised structural reconstruction method, named DPF-Net, based on a new Deformable Primitive Field representation.
The strong shape prior encoded in parameterized geometric primitives enables our DPF-Net to extract high-level structures and recover fine-grained shape details consistently.
arXiv Detail & Related papers (2023-08-25T07:50:59Z)
- Neural Template: Topology-aware Reconstruction and Disentangled Generation of 3D Meshes [52.038346313823524]
This paper introduces a novel framework called DTNet for 3D mesh reconstruction and generation via Disentangled Topology.
Our method is able to produce high-quality meshes, particularly with diverse topologies, as compared with the state-of-the-art methods.
arXiv Detail & Related papers (2022-06-10T08:32:57Z)
- 3D Shape Reconstruction from 2D Images with Disentangled Attribute Flow [61.62796058294777]
Reconstructing 3D shape from a single 2D image is a challenging task.
Most of the previous methods still struggle to extract semantic attributes for 3D reconstruction task.
We propose 3DAttriFlow to disentangle and extract semantic attributes through different semantic levels in the input images.
arXiv Detail & Related papers (2022-03-29T02:03:31Z)
- LSD-StructureNet: Modeling Levels of Structural Detail in 3D Part Hierarchies [5.173975064973631]
We introduce LSD-StructureNet, an augmentation to the StructureNet architecture that enables re-generation of parts.
We evaluate LSD-StructureNet on the PartNet dataset, the largest dataset of 3D shapes represented by hierarchies of parts.
arXiv Detail & Related papers (2021-08-18T15:05:06Z)
- STD-Net: Structure-preserving and Topology-adaptive Deformation Network for 3D Reconstruction from a Single Image [27.885717341244014]
3D reconstruction from a single-view image is a long-standing problem in computer vision.
In this paper, we propose a novel method called STD-Net to reconstruct 3D models utilizing the mesh representation.
Experimental results on images from ShapeNet show that our proposed STD-Net has better performance than other state-of-the-art methods on reconstructing 3D objects.
arXiv Detail & Related papers (2020-03-07T11:02:47Z)
- Unsupervised Learning of Intrinsic Structural Representation Points [50.92621061405056]
Learning structures of 3D shapes is a fundamental problem in the field of computer graphics and geometry processing.
We present a simple yet interpretable unsupervised method for learning a new structural representation in the form of 3D structure points.
arXiv Detail & Related papers (2020-03-03T17:40:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.