DPF-Net: Combining Explicit Shape Priors in Deformable Primitive Field
for Unsupervised Structural Reconstruction of 3D Objects
- URL: http://arxiv.org/abs/2308.13225v1
- Date: Fri, 25 Aug 2023 07:50:59 GMT
- Authors: Qingyao Shuai, Chi Zhang, Kaizhi Yang, Xuejin Chen
- Abstract summary: We present a novel unsupervised structural reconstruction method, named DPF-Net, based on a new Deformable Primitive Field representation.
The strong shape prior encoded in parameterized geometric primitives enables our DPF-Net to extract high-level structures and recover fine-grained shape details consistently.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised methods for reconstructing structures face significant
challenges in capturing the geometric details with consistent structures among
diverse shapes of the same category. To address this issue, we present a novel
unsupervised structural reconstruction method, named DPF-Net, based on a new
Deformable Primitive Field (DPF) representation, which allows for high-quality
shape reconstruction using parameterized geometric primitives. We design a
two-stage shape reconstruction pipeline which consists of a primitive
generation module and a primitive deformation module to approximate the target
shape of each part progressively. The primitive generation module estimates the
explicit orientation, position, and size parameters of parameterized geometric
primitives, while the primitive deformation module predicts a dense deformation
field based on a parameterized primitive field to recover shape details. The
strong shape prior encoded in parameterized geometric primitives enables our
DPF-Net to extract high-level structures and recover fine-grained shape details
consistently. The experimental results on three categories of objects in
diverse shapes demonstrate the effectiveness and generalization ability of our
DPF-Net on structural reconstruction and shape segmentation.
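As a rough illustration of the two-stage idea described above (a coarse parameterized primitive, then a dense deformation field that recovers details), the following NumPy sketch uses a box signed-distance function as a hypothetical stand-in for the parameterized primitives. All names, the box SDF, and the toy warp are assumptions for illustration, not DPF-Net's actual learned modules.

```python
import numpy as np

def primitive_sdf(points, center, size):
    # Signed distance to an axis-aligned box primitive; a simple stand-in
    # for a parameterized geometric primitive (position + size only,
    # orientation omitted for brevity). Negative inside, positive outside.
    q = np.abs(points - center) - size
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=-1)
    inside = np.minimum(q.max(axis=-1), 0.0)
    return outside + inside

def deformed_sdf(points, center, size, deform_fn):
    # Stage 2: query the primitive field at coordinates warped by a
    # deformation field, recovering shape details on top of the coarse
    # primitive. `deform_fn` stands in for a learned network.
    return primitive_sdf(points + deform_fn(points), center, size)

# Toy deformation field: a small sinusoidal warp.
warp = lambda p: 0.05 * np.sin(4.0 * p)
pts = np.array([[0.0, 0.0, 0.0],   # inside the primitive
                [1.0, 0.0, 0.0]])  # outside the primitive
d = deformed_sdf(pts, center=np.zeros(3), size=np.full(3, 0.5), deform_fn=warp)
```

The point of the composition is that the deformation only perturbs query coordinates, so the strong shape prior of the primitive is preserved while fine details are absorbed by the warp.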
Related papers
- Parameterize Structure with Differentiable Template for 3D Shape Generation
Recent 3D shape generation works employ complicated networks and structure definitions.
We propose a method that parameterizes the shared structure in the same category using a differentiable template.
Our method can reconstruct or generate diverse shapes with complicated details, and interpolate them smoothly.
arXiv Detail & Related papers (2024-10-14T11:43:02Z)
- StructRe: Rewriting for Structured Shape Modeling
We present StructRe, a structure rewriting system, as a novel approach to structured shape modeling.
Given a 3D object represented by points and components, StructRe can rewrite it upward into more concise structures, or downward into more detailed structures.
arXiv Detail & Related papers (2023-11-29T10:35:00Z)
- DeFormer: Integrating Transformers with Deformable Models for 3D Shape Abstraction from a Single Image
We propose a novel bi-channel Transformer architecture, integrated with parameterized deformable models, to simultaneously estimate the global and local deformations of primitives.
DeFormer achieves better reconstruction accuracy over the state-of-the-art, and visualizes with consistent semantic correspondences for improved interpretability.
arXiv Detail & Related papers (2023-09-22T02:46:43Z)
- DTF-Net: Category-Level Pose Estimation and Shape Reconstruction via Deformable Template Field
Estimating 6D poses and reconstructing 3D shapes of objects in open-world scenes from RGB-depth image pairs is challenging.
We propose the DTF-Net, a novel framework for pose estimation and shape reconstruction based on implicit neural fields of object categories.
arXiv Detail & Related papers (2023-08-04T10:35:40Z)
- Neural Template: Topology-aware Reconstruction and Disentangled Generation of 3D Meshes
This paper introduces a novel framework called DTNet for 3D mesh reconstruction and generation via Disentangled Topology.
Our method is able to produce high-quality meshes, particularly with diverse topologies, as compared with the state-of-the-art methods.
arXiv Detail & Related papers (2022-06-10T08:32:57Z)
- Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks
We present a novel 3D primitive representation that defines primitives using an Invertible Neural Network (INN).
Our model learns to parse 3D objects into semantically consistent part arrangements without any part-level supervision.
arXiv Detail & Related papers (2021-03-18T17:59:31Z)
- DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape Generation
We introduce DSG-Net, a deep neural network that learns a disentangled structured and geometric mesh representation for 3D shapes.
This supports a range of novel shape generation applications with disentangled control, such as varying the structure while keeping the geometry unchanged, or vice versa.
Our method not only supports controllable generation applications but also produces high-quality synthesized shapes, outperforming state-of-the-art methods.
arXiv Detail & Related papers (2020-08-12T17:06:51Z)
- Dense Non-Rigid Structure from Motion: A Manifold Viewpoint
The Non-Rigid Structure-from-Motion (NRSfM) problem aims to recover the 3D geometry of a deforming object from its 2D feature correspondences across multiple frames.
We show that our approach significantly improves accuracy, scalability, and robustness against noise.
arXiv Detail & Related papers (2020-06-15T09:15:54Z)
- Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image
We propose a novel formulation that jointly recovers the geometry of a 3D object as a set of primitives.
Our model recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives.
Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.
arXiv Detail & Related papers (2020-04-02T17:58:05Z)
- STD-Net: Structure-preserving and Topology-adaptive Deformation Network for 3D Reconstruction from a Single Image
3D reconstruction from a single view image is a long-standing problem in computer vision.
In this paper, we propose a novel method called STD-Net to reconstruct 3D models using the mesh representation.
Experimental results on images from ShapeNet show that our proposed STD-Net outperforms other state-of-the-art methods on reconstructing 3D objects.
arXiv Detail & Related papers (2020-03-07T11:02:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.