Unsupervised Learning of Intrinsic Structural Representation Points
- URL: http://arxiv.org/abs/2003.01661v2
- Date: Thu, 26 Mar 2020 11:54:35 GMT
- Title: Unsupervised Learning of Intrinsic Structural Representation Points
- Authors: Nenglun Chen, Lingjie Liu, Zhiming Cui, Runnan Chen, Duygu Ceylan,
Changhe Tu, Wenping Wang
- Abstract summary: Learning structures of 3D shapes is a fundamental problem in the field of computer graphics and geometry processing.
We present a simple yet interpretable unsupervised method for learning a new structural representation in the form of 3D structure points.
- Score: 50.92621061405056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning structures of 3D shapes is a fundamental problem in the field of
computer graphics and geometry processing. We present a simple yet
interpretable unsupervised method for learning a new structural representation
in the form of 3D structure points. The 3D structure points produced by our
method encode the shape structure intrinsically and exhibit semantic
consistency across all shape instances with similar structures, a challenging
goal that previous methods have not fully achieved.
Specifically, our method takes a 3D point cloud as input and encodes it as a
set of local features. The local features are then passed through a novel point
integration module to produce a set of 3D structure points. The Chamfer
distance is used as the reconstruction loss to ensure that the structure points
lie close to the input point cloud. Extensive experiments show that our method
outperforms the state of the art on the semantic shape correspondence task and
matches it on the segmentation label transfer task. Moreover, a PCA-based shape
embedding built on the consistent structure points preserves shape structures
well. Code is available at
https://github.com/NolenChen/3DStructurePoints
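To make the pipeline concrete, below is a minimal sketch, assuming PyTorch, of the training objective described above: a network predicts K structure points from an input point cloud, and a symmetric Chamfer distance ties them back to the input. The tiny global encoder is a stand-in of our own, not the authors' implementation; the paper's per-point local features and point integration module live in the linked repository.

```python
import torch
import torch.nn as nn

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets a (B, N, 3) and b (B, M, 3)."""
    d = torch.cdist(a, b)                              # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

class StructurePointsNet(nn.Module):
    """Toy stand-in: a global max-pooled feature regresses K xyz structure points."""
    def __init__(self, num_structure_points: int = 16):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, num_structure_points * 3)
        self.k = num_structure_points

    def forward(self, pts: torch.Tensor) -> torch.Tensor:  # pts: (B, N, 3)
        feat = self.point_mlp(pts).max(dim=1).values        # (B, 128) global feature
        return self.head(feat).view(-1, self.k, 3)          # (B, K, 3) structure points

# One unsupervised training step: the predicted structure points are pulled
# toward the input cloud by the Chamfer reconstruction loss alone.
net = StructurePointsNet()
cloud = torch.rand(4, 1024, 3)                              # random stand-in data
loss = chamfer_distance(net(cloud), cloud)
loss.backward()
```

The symmetric form penalizes structure points that drift off the shape as well as regions of the input cloud left uncovered, which is what the reconstruction loss in the abstract requires.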
Related papers
- Robust 3D Tracking with Quality-Aware Shape Completion [67.9748164949519]
We propose a synthetic target representation for robust 3D tracking: dense, complete point clouds, produced by shape completion, that depict the target shape precisely.
Specifically, we design a voxelized 3D tracking framework with a quality-aware shape completion mechanism that alleviates the adverse effect of noisy historical predictions.
arXiv Detail & Related papers (2023-12-17T04:50:24Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our method performs favorably against current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- 3D Shape Reconstruction from 2D Images with Disentangled Attribute Flow [61.62796058294777]
Reconstructing 3D shape from a single 2D image is a challenging task.
Most previous methods still struggle to extract semantic attributes for the 3D reconstruction task.
We propose 3DAttriFlow to disentangle and extract semantic attributes through different semantic levels in the input images.
arXiv Detail & Related papers (2022-03-29T02:03:31Z)
- RISA-Net: Rotation-Invariant Structure-Aware Network for Fine-Grained 3D Shape Retrieval [46.02391761751015]
Fine-grained 3D shape retrieval aims to retrieve the 3D shapes most similar to a query shape from a repository of models belonging to the same class.
We introduce a novel deep architecture, RISA-Net, which learns rotation invariant 3D shape descriptors.
Our method learns the importance of each part's geometric and structural information when generating the final compact latent feature of a 3D shape.
arXiv Detail & Related papers (2020-10-02T13:06:12Z)
- DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape Generation [98.96086261213578]
We introduce DSG-Net, a deep neural network that learns a disentangled structured and geometric mesh representation for 3D shapes.
This supports a range of novel shape generation applications with disentangled control, such as varying the structure (or geometry) while keeping the geometry (or structure) unchanged.
Our method not only supports controllable generation applications but also produces high-quality synthesized shapes, outperforming state-of-the-art methods.
arXiv Detail & Related papers (2020-08-12T17:06:51Z)
- KAPLAN: A 3D Point Descriptor for Shape Completion [80.15764700137383]
KAPLAN is a 3D point descriptor that aggregates local shape information via a series of 2D convolutions over multiple projection planes.
In each of those planes, point properties like normals or point-to-plane distances are aggregated into a 2D grid and abstracted into a feature representation with an efficient 2D convolutional encoder.
Experiments on public datasets show that KAPLAN achieves state-of-the-art performance for 3D shape completion.
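As a schematic illustration of that grid-based aggregation (not the actual KAPLAN code; the projection plane, grid resolution, and the use of the z-coordinate as the binned property are assumptions made for this sketch), one can rasterize a local neighborhood onto a plane and encode the grid with a small 2D CNN:

```python
import torch
import torch.nn as nn

def rasterize_to_grid(pts: torch.Tensor, res: int = 16) -> torch.Tensor:
    """Project local points (N, 3) onto the xy-plane and bin a per-point
    property (here simply the z-coordinate, standing in for e.g. a
    point-to-plane distance) into a res x res grid."""
    xy = ((pts[:, :2].clamp(-1, 1) + 1) / 2 * (res - 1)).long()  # grid indices
    grid = torch.zeros(1, res, res)
    grid[0, xy[:, 1], xy[:, 0]] = pts[:, 2]                      # last write wins
    return grid

encoder = nn.Sequential(                    # tiny 2D convolutional encoder
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 32-dim local descriptor
)

patch = torch.rand(256, 3) * 2 - 1          # random local neighborhood in [-1, 1]^3
descriptor = encoder(rasterize_to_grid(patch).unsqueeze(0))      # (1, 32)
```

KAPLAN repeats this kind of aggregation over multiple planes; a single plane keeps the sketch short.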
arXiv Detail & Related papers (2020-07-31T21:56:08Z)
- STD-Net: Structure-preserving and Topology-adaptive Deformation Network for 3D Reconstruction from a Single Image [27.885717341244014]
3D reconstruction from a single-view image is a long-standing problem in computer vision.
In this paper, we propose a novel method called STD-Net to reconstruct 3D models using the mesh representation.
Experimental results on images from ShapeNet show that our proposed STD-Net performs better than other state-of-the-art methods at reconstructing 3D objects.
arXiv Detail & Related papers (2020-03-07T11:02:47Z)
This list was automatically generated from the titles and abstracts of the papers on this site.