MRGAN: Multi-Rooted 3D Shape Generation with Unsupervised Part
Disentanglement
- URL: http://arxiv.org/abs/2007.12944v1
- Date: Sat, 25 Jul 2020 14:41:51 GMT
- Authors: Rinon Gal, Amit Bermano, Hao Zhang, Daniel Cohen-Or
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present MRGAN, a multi-rooted adversarial network which generates
part-disentangled 3D point-cloud shapes without part-based shape supervision.
The network fuses multiple branches of tree-structured graph convolution layers
which produce point clouds, with learnable constant inputs at the tree roots.
Each branch learns to grow a different shape part, offering control over the
shape generation at the part level. Our network encourages disentangled
generation of semantic parts via two key ingredients: a root-mixing training
strategy which helps decorrelate the different branches to facilitate
disentanglement, and a set of loss terms designed with part disentanglement and
shape semantics in mind. Of these, a novel convexity loss incentivizes the
generation of parts that are more convex, as semantic parts tend to be. In
addition, a root-dropping loss further ensures that each root seeds a single
part, preventing the degeneration or over-growth of the point-producing
branches. We evaluate the performance of our network on a number of 3D shape
classes, and offer qualitative and quantitative comparisons to previous works
and baseline approaches. We demonstrate the controllability offered by our
part-disentangled generation through two applications for shape modeling: part
mixing and individual part variation, without receiving segmented shapes as
input.
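The root-mixing strategy above can be sketched in a few lines: a minimal pure-Python illustration, where all names and the trivial `grow_branch` stub are hypothetical stand-ins (the paper's branches are tree-structured graph-convolution networks, not this toy expansion).

```python
import random

K = 4          # number of roots / branches (one per part)
ROOT_DIM = 8   # dimensionality of each root vector
POINTS_PER_BRANCH = 16

def sample_roots(rng):
    """One root vector per branch; in MRGAN these are learnable
    constants, here just random vectors for illustration."""
    return [[rng.uniform(-1, 1) for _ in range(ROOT_DIM)] for _ in range(K)]

def grow_branch(root, n=POINTS_PER_BRANCH):
    """Stand-in for a tree-structured graph-convolution branch:
    expands a root vector into a small 3D point cloud."""
    return [[root[(3 * i + d) % ROOT_DIM] * 0.1 * (i + 1) for d in range(3)]
            for i in range(n)]

def generate_shape(roots):
    """Fuse the per-branch point clouds into one shape."""
    points = []
    for r in roots:
        points.extend(grow_branch(r))
    return points

def mix_roots(roots_a, roots_b, rng):
    """Root-mixing: each of the K roots is taken from shape A or B at
    random, decorrelating branches so each grows an independent part."""
    return [ra if rng.random() < 0.5 else rb
            for ra, rb in zip(roots_a, roots_b)]

rng = random.Random(0)
roots_a, roots_b = sample_roots(rng), sample_roots(rng)
mixed = mix_roots(roots_a, roots_b, rng)
shape = generate_shape(mixed)
print(len(shape))  # K * POINTS_PER_BRANCH = 64 points
```

Training the discriminator on such mixed shapes pushes each branch to produce a part that remains plausible regardless of which shape the other roots came from, which is what enables the part-mixing application described above.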
Related papers
- DiffFacto: Controllable Part-Based 3D Point Cloud Generation with Cross Diffusion [68.39543754708124]
We introduce DiffFacto, a novel probabilistic generative model that learns the distribution of shapes with part-level control.
Experiments show that our method is able to generate novel shapes with multiple axes of control.
It achieves state-of-the-art part-level generation quality and generates plausible and coherent shapes.
arXiv Detail & Related papers (2023-05-03T06:38:35Z)
- Point Cloud Semantic Segmentation using Multi Scale Sparse Convolution Neural Network [0.0]
We propose a feature extraction module based on multi-scale ultra-sparse convolution and a feature selection module based on channel attention.
By introducing multi-scale sparse convolution, the network can capture richer feature information from convolution kernels of different sizes.
arXiv Detail & Related papers (2022-05-03T15:01:20Z)
- CP-Net: Contour-Perturbed Reconstruction Network for Self-Supervised Point Cloud Learning [53.1436669083784]
We propose a generic Contour-Perturbed Reconstruction Network (CP-Net), which can effectively guide self-supervised reconstruction to learn semantic content in the point cloud.
For classification, we achieve results competitive with fully-supervised methods on ModelNet40 (92.5% accuracy) and ScanObjectNN (87.9% accuracy).
arXiv Detail & Related papers (2022-01-20T15:04:12Z)
- EditVAE: Unsupervised Part-Aware Controllable 3D Point Cloud Shape Generation [19.817166425038753]
This paper tackles the problem of part-aware point cloud generation.
A simple modification of the Variational Auto-Encoder yields a joint model of the point cloud itself.
In addition to the flexibility afforded by our disentangled representation, the inductive bias introduced by our joint modelling approach yields state-of-the-art experimental results on the ShapeNet dataset.
arXiv Detail & Related papers (2021-10-13T12:38:01Z)
- Unsupervised Learning for Cuboid Shape Abstraction via Joint Segmentation from Point Clouds [8.156355030558172]
Representing complex 3D objects as simple geometric primitives, known as shape abstraction, is important for geometric modeling, structural analysis, and shape synthesis.
We propose an unsupervised shape abstraction method to map a point cloud into a compact cuboid representation.
arXiv Detail & Related papers (2021-06-07T09:15:16Z)
- Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud [50.56461318879761]
We propose the Geometry-Disentangled Attention Network (GDANet) for 3D point-cloud processing.
GDANet disentangles point clouds into contour and flat part of 3D objects, respectively denoted by sharp and gentle variation components.
Experiments on 3D object classification and segmentation benchmarks demonstrate that GDANet achieves state-of-the-art results with fewer parameters.
arXiv Detail & Related papers (2020-12-20T13:35:00Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
- A Progressive Conditional Generative Adversarial Network for Generating Dense and Colored 3D Point Clouds [5.107705550575662]
We introduce a novel conditional generative adversarial network that creates dense 3D point clouds, with color, for assorted classes of objects in an unsupervised manner.
To overcome the difficulty of capturing intricate details at high resolutions, we propose a point transformer that progressively grows the network through the use of graph convolutions.
Experimental results show that our network is capable of learning and mimicking a 3D data distribution, and produces colored point clouds with fine details at multiple resolutions.
arXiv Detail & Related papers (2020-10-12T01:32:13Z)
- PT2PC: Learning to Generate 3D Point Cloud Shapes from Part Tree Conditions [66.87405921626004]
This paper investigates the novel problem of generating 3D shape point cloud geometry from a symbolic part tree representation.
We propose a conditional GAN "part tree"-to-"point cloud" model (PT2PC) that disentangles the structural and geometric factors.
arXiv Detail & Related papers (2020-03-19T08:27:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.