Attention-based Part Assembly for 3D Volumetric Shape Modeling
- URL: http://arxiv.org/abs/2304.10986v1
- Date: Mon, 17 Apr 2023 16:53:27 GMT
- Title: Attention-based Part Assembly for 3D Volumetric Shape Modeling
- Authors: Chengzhi Wu, Junwei Zheng, Julius Pfrommer, Jürgen Beyerer
- Abstract summary: We propose a VoxAttention network architecture for attention-based part assembly.
Experimental results show that our method outperforms most state-of-the-art methods for the part relation-aware 3D shape modeling task.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling a 3D volumetric shape as an assembly of decomposed shape parts is
much more challenging, but semantically more valuable, than direct
reconstruction from a full shape representation. The neural network needs to
implicitly learn part relations coherently, which is typically performed by
dedicated network layers that can generate transformation matrices for each
part. In this paper, we propose a VoxAttention network architecture for
attention-based part assembly. We further propose a variant that uses
channel-wise part attention and show the advantages of this approach.
Experimental results show that our method outperforms most state-of-the-art
methods for the part relation-aware 3D shape modeling task.
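To make the pipeline concrete, here is a minimal PyTorch sketch of the idea the abstract describes: encode each part volume, let the part tokens attend to one another, and regress one affine transformation matrix per part. This is an illustration under assumed layer sizes and a 32^3 part resolution, not the authors' implementation; the class name and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class PartAssemblySketch(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Shared 3D-conv encoder: one feature vector per part volume.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32^3 -> 16^3
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16^3 -> 8^3
            nn.Conv3d(64, feat_dim, 8), nn.Flatten(start_dim=1),   # 8^3 -> vector
        )
        # Self-attention across part tokens: each part sees the others,
        # so relative placement can be learned jointly.
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        # Regress a 3x4 affine transformation matrix for every part.
        self.head = nn.Linear(feat_dim, 12)

    def forward(self, parts):  # parts: (B, P, 1, 32, 32, 32)
        B, P = parts.shape[:2]
        tokens = self.encoder(parts.flatten(0, 1)).view(B, P, -1)
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.head(attended).view(B, P, 3, 4)  # per-part transforms
```

The channel-wise variant mentioned above would apply the attention over feature channels rather than over part tokens; its exact formulation follows the paper.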
Related papers
- Part123: Part-aware 3D Reconstruction from a Single-view Image
Part123 is a novel framework for part-aware 3D reconstruction from a single-view image.
We introduce contrastive learning into a neural rendering framework to learn a part-aware feature space.
A clustering-based algorithm is also developed to automatically derive 3D part segmentation results from the reconstructed models.
arXiv Detail & Related papers (2024-05-27T07:10:21Z)
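Part123's central ingredient is contrastive learning inside a neural rendering framework, pulling rendered pixel features of the same part together and pushing different parts apart. A minimal sketch of such a part-contrastive objective, assuming per-pixel features and 2D part-mask ids are already available (the function name and temperature are illustrative, not the paper's code):

```python
import torch
import torch.nn.functional as F

def part_contrastive_loss(feats, mask_ids, temperature=0.1):
    """feats: (N, C) rendered pixel features; mask_ids: (N,) 2D part ids."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / temperature              # pairwise similarities
    eye = torch.eye(len(feats), dtype=torch.bool, device=feats.device)
    log_prob = sim.masked_fill(eye, float('-inf'))
    log_prob = log_prob - log_prob.logsumexp(dim=1, keepdim=True)
    # Positives: pixel pairs carrying the same part id (excluding self).
    pos = ((mask_ids[:, None] == mask_ids[None, :]) & ~eye).float()
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()
```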
- DAE-Net: Deforming Auto-Encoder for fine-grained shape co-segmentation
We present an unsupervised 3D shape co-segmentation method which learns a set of deformable part templates from a shape collection.
To accommodate structural variations in the collection, our network composes each shape from a selected subset of affine-transformed template parts.
Our network, coined DAE-Net for Deforming Auto-Encoder, can achieve unsupervised 3D shape co-segmentation that yields fine-grained, compact, and meaningful parts.
arXiv Detail & Related papers (2023-11-22T03:26:07Z)
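A hedged sketch of DAE-Net's composition step: each template part is evaluated in its own canonical frame, affine-transformed into the shape, softly selected, and the shape occupancy is the union over parts. Template networks, sizes, and names are assumptions; the paper's deformation model is richer than a plain affine map.

```python
import torch
import torch.nn as nn

class TemplateComposition(nn.Module):
    def __init__(self, num_templates, hidden=64):
        super().__init__()
        # Each template part is a small implicit network: R^3 -> occupancy.
        self.templates = nn.ModuleList(
            nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1), nn.Sigmoid())
            for _ in range(num_templates))

    def forward(self, queries, affines, select):
        # queries: (N, 3); affines: (K, 3, 4), template -> shape (invertible
        # 3x3 block assumed); select: (K,) soft part-selection scores.
        occs = []
        for k, tpl in enumerate(self.templates):
            A, t = affines[k, :, :3], affines[k, :, 3]
            local = (queries - t) @ torch.inverse(A).T  # back to template frame
            occs.append(select[k] * tpl(local).squeeze(-1))
        return torch.stack(occs).max(dim=0).values      # union of selected parts
```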
- ANISE: Assembly-based Neural Implicit Surface rEconstruction
We present ANISE, a method that reconstructs a 3D shape from partial observations (images or sparse point clouds).
The shape is formulated as an assembly of neural implicit functions, each representing a different part instance.
We demonstrate that, when performing reconstruction by decoding part representations into implicit functions, our method achieves state-of-the-art part-aware reconstruction results from both images and sparse point clouds.
arXiv Detail & Related papers (2022-05-27T00:01:40Z)
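As a rough illustration of "shape as an assembly of neural implicit functions": condition one decoder on a per-part latent, and take the union of parts as the minimum over part-wise signed distances. The latent size, decoder, and SDF convention below are assumptions that simplify ANISE's actual formulation.

```python
import torch
import torch.nn as nn

class PartImplicitAssembly(nn.Module):
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, queries, part_latents):
        # queries: (N, 3); part_latents: (P, latent_dim), one per part instance.
        N, P = queries.shape[0], part_latents.shape[0]
        x = queries[:, None, :].expand(N, P, 3)
        z = part_latents[None, :, :].expand(N, P, -1)
        sdf = self.decoder(torch.cat([x, z], dim=-1)).squeeze(-1)  # (N, P)
        return sdf.min(dim=-1).values  # union of parts: closest surface wins
```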
- SurFit: Learning to Fit Surfaces Improves Few Shot Learning on Point Clouds
SurFit is a simple approach for label-efficient learning of 3D shape segmentation networks.
It is based on a self-supervised task of decomposing the surface of a 3D shape into geometric primitives.
arXiv Detail & Related papers (2021-12-27T23:55:36Z)
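For the flavor of SurFit's pretext task, here is the textbook least-squares fit of one primitive type, a plane, to a local surface patch; the paper defines its own primitive set and fitting procedure, so treat this as an illustrative stand-in.

```python
import torch

def fit_plane(points):
    """points: (N, 3) patch. Returns unit normal, offset, and residual."""
    centroid = points.mean(dim=0)
    centered = points - centroid
    # The plane normal is the right singular vector with the smallest
    # singular value of the centered patch.
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)
    normal = vh[-1]
    offset = normal @ centroid                    # plane: n . x = d
    residual = (centered @ normal).abs().mean()   # mean point-to-plane error
    return normal, offset, residual
```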
- The Shape Part Slot Machine: Contact-based Reasoning for Generating 3D Shapes from Parts
We present a new method for assembling novel 3D shapes from existing parts by performing contact-based reasoning.
Our method represents each shape as a graph of "slots," where each slot is a region of contact between two shape parts.
We demonstrate that our method generates shapes that outperform existing modeling-by-assembly approaches in terms of quality, diversity, and structural complexity.
arXiv Detail & Related papers (2021-12-01T15:54:54Z)
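The slot-graph representation above reads naturally as a data structure: parts are nodes and contact regions ("slots") are edges, and generation proceeds by filling open slots with compatible parts. A minimal sketch whose field names are assumptions, not the paper's schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Slot:
    part_a: int                 # index of the part owning this contact region
    part_b: Optional[int]       # mating part, or None while still unfilled
    contact_bbox: tuple         # box around the region of contact

@dataclass
class SlotGraph:
    parts: list = field(default_factory=list)   # part geometries or ids
    slots: list = field(default_factory=list)   # list[Slot] connecting parts

    def open_slots(self):
        # Generation retrieves a new part compatible with an open slot
        # and connects it there, growing the shape slot by slot.
        return [s for s in self.slots if s.part_b is None]
```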
- Discovering 3D Parts from Image Collections
We tackle the problem of 3D part discovery from only 2D image collections.
Instead of relying on manually annotated parts for supervision, we propose a self-supervised approach.
Our key insight is to learn a novel part shape prior that allows each part to fit an object shape faithfully while constrained to have simple geometry.
arXiv Detail & Related papers (2021-07-28T20:29:16Z)
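The "fit faithfully while staying simple" insight amounts to a two-term objective. Below is a hedged stand-in in which a variance penalty plays the role of the simplicity constraint; in the paper the part shape prior is learned, not hand-coded, so every name and weight here is an assumption.

```python
import torch

def part_prior_loss(recon_occ, target_occ, part_points, w_simple=0.1):
    """recon_occ/target_occ: (N,) occupancies in [0, 1];
    part_points: list of (Ni, 3) point samples, one tensor per part."""
    # Faithfulness: reconstruction error of the assembled shape.
    fit = torch.nn.functional.binary_cross_entropy(recon_occ, target_occ)
    # Simplicity: keep each part a compact blob by penalizing its spread.
    simple = sum(p.var(dim=0).sum() for p in part_points) / len(part_points)
    return fit + w_simple * simple
```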
- Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks
We present a novel 3D primitive representation that defines primitives using an Invertible Neural Network (INN).
Our model learns to parse 3D objects into semantically consistent part arrangements without any part-level supervision.
arXiv Detail & Related papers (2021-03-18T17:59:31Z)
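Defining a primitive through an invertible network makes the inside/outside test cheap: map a point back through the inverse and check whether its preimage lies in the unit sphere. A generic single affine-coupling layer sketches the mechanism; Neural Parts' actual architecture and conditioning follow the paper.

```python
import torch
import torch.nn as nn

class Coupling3D(nn.Module):
    """One invertible affine coupling layer on R^3 (illustrative)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 4))  # scale+shift, 2 dims

    def forward(self, x):                    # sphere -> primitive
        x1, x2 = x[:, :1], x[:, 1:]
        s, t = self.net(x1).chunk(2, dim=-1)
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=-1)

    def inverse(self, y):                    # primitive -> sphere
        y1, y2 = y[:, :1], y[:, 1:]
        s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=-1)

def inside_primitive(flow, points):
    # Inside iff the preimage falls within the unit sphere.
    return flow.inverse(points).norm(dim=-1) <= 1.0
```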
- Generative 3D Part Assembly via Dynamic Graph Learning
Part assembly is a challenging yet crucial task in 3D computer vision and robotics.
We propose an assembly-oriented dynamic graph learning framework that leverages an iterative graph neural network as a backbone.
arXiv Detail & Related papers (2020-06-14T04:26:42Z)
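An assembly-oriented dynamic graph network can be pictured as message passing over a fully connected part graph with iterative pose refinement. The sketch below assumes GRU node updates and a translation-plus-quaternion pose head; both are illustrative choices, not the paper's exact design.

```python
import torch
import torch.nn as nn

class IterativePoseGNN(nn.Module):
    def __init__(self, feat=128, iters=3):
        super().__init__()
        self.iters = iters
        self.edge = nn.Sequential(nn.Linear(2 * feat, feat), nn.ReLU())
        self.node = nn.GRUCell(feat, feat)
        self.pose = nn.Linear(feat, 7)   # translation (3) + quaternion (4)

    def forward(self, h):                # h: (P, feat) per-part features
        P = h.shape[0]
        for _ in range(self.iters):
            # Messages over every ordered pair of parts (self-pairs kept
            # for simplicity), then summed per receiving part.
            pairs = torch.cat([h[:, None].expand(P, P, -1),
                               h[None, :].expand(P, P, -1)], dim=-1)
            msg = self.edge(pairs).sum(dim=1)
            h = self.node(msg, h)        # update part states
        return self.pose(h)              # (P, 7) refined per-part poses
```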
- Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering
Recent work has succeeded with regression-based methods that estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
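Part-level differentiable rendering lets 2D part masks supervise a 3D body model: render soft per-part silhouettes and penalize their disagreement with the ground-truth segmentation. A hedged soft-IoU form of such a loss; the differentiable renderer producing `rendered_masks` is assumed rather than implemented here.

```python
import torch

def part_segmentation_loss(rendered_masks, gt_masks):
    """rendered_masks, gt_masks: (B, P, H, W), soft values in [0, 1]."""
    inter = (rendered_masks * gt_masks).sum(dim=(-2, -1))
    union = (rendered_masks + gt_masks
             - rendered_masks * gt_masks).sum(dim=(-2, -1))
    soft_iou = inter / union.clamp(min=1e-6)
    return (1.0 - soft_iou).mean()   # higher part overlap -> lower loss
```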
- Learning 3D Human Shape and Pose from Dense Body Parts
We propose a Decompose-and-aggregate Network (DaNet) to learn 3D human shape and pose from dense correspondences of body parts.
Messages from local streams are aggregated to make the prediction of rotation-based poses more robust.
Our method is validated on both indoor and real-world datasets including Human3.6M, UP3D, COCO, and 3DPW.
arXiv Detail & Related papers (2019-12-31T15:09:51Z)
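DaNet's decompose-and-aggregate idea in miniature: per-part local streams process part features and their messages are pooled before predicting rotation-based poses. The 24-part count (mirroring SMPL joints) and the 6D rotation head are assumptions for illustration, not confirmed by the summary above.

```python
import torch
import torch.nn as nn

class DecomposeAggregate(nn.Module):
    def __init__(self, num_parts=24, feat=128):
        super().__init__()
        self.local = nn.ModuleList(
            nn.Sequential(nn.Linear(feat, feat), nn.ReLU())
            for _ in range(num_parts))                    # one stream per part
        self.aggregate = nn.Linear(num_parts * feat, feat)
        self.rot_head = nn.Linear(feat, num_parts * 6)    # 6D rotation per part

    def forward(self, part_feats):       # (B, num_parts, feat)
        locals_ = [f(part_feats[:, i]) for i, f in enumerate(self.local)]
        pooled = self.aggregate(torch.cat(locals_, dim=-1))
        return self.rot_head(pooled).view(part_feats.shape[0], -1, 6)
```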