Learning Local Neighboring Structure for Robust 3D Shape Representation
- URL: http://arxiv.org/abs/2004.09995v3
- Date: Mon, 21 Dec 2020 13:32:12 GMT
- Title: Learning Local Neighboring Structure for Robust 3D Shape Representation
- Authors: Zhongpai Gao, Junchi Yan, Guangtao Zhai, Juyong Zhang, Yiyan Yang,
Xiaokang Yang
- Abstract summary: Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
- Score: 143.15904669246697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mesh is a powerful data structure for 3D shapes. Representation learning for
3D meshes is important in many computer vision and graphics applications. The
recent success of convolutional neural networks (CNNs) for structured data
(e.g., images) suggests the value of adapting insights from CNNs to 3D shapes.
However, 3D shape data are irregular since each node's neighbors are unordered.
Various graph neural networks for 3D shapes have been developed with isotropic
filters or predefined local coordinate systems to overcome the node
inconsistency on graphs. However, isotropic filters or predefined local
coordinate systems limit the representation power. In this paper, we propose a
local structure-aware anisotropic convolutional operation (LSA-Conv) that
learns adaptive weighting matrices for each node according to the local
neighboring structure and then applies shared anisotropic filters. The
learnable weighting matrix is similar to the attention matrix in the random
synthesizer, a recently proposed Transformer variant for natural language processing (NLP).
Comprehensive experiments demonstrate that our model produces significant
improvement in 3D shape reconstruction compared to state-of-the-art methods.
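Since LSA-Conv is only described at a high level in the abstract, the following minimal PyTorch sketch illustrates the idea: each node owns a learnable weighting matrix that softly assigns its unordered neighbors to a fixed set of slots, and a single anisotropic (per-slot) filter shared by all nodes is then applied. The module name `LSAConv`, the fixed neighborhood size `K`, and the softmax normalization are assumptions made for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class LSAConv(nn.Module):
    """Sketch of a local structure-aware anisotropic convolution (LSA-Conv).

    Each node has K neighbors gathered via a precomputed index tensor.
    A per-node learnable weighting matrix A_i (K x K) softly "sorts" the
    unordered neighbors into K canonical slots; a shared anisotropic filter
    (one weight matrix per slot) is then applied, as in a regular convolution.
    """
    def __init__(self, num_nodes, in_ch, out_ch, K):
        super().__init__()
        # one learnable weighting matrix per node, adapted to its local structure
        self.A = nn.Parameter(torch.eye(K).repeat(num_nodes, 1, 1))
        # shared anisotropic filter: a separate weight matrix for each neighbor slot
        self.W = nn.Parameter(torch.randn(K, in_ch, out_ch) * 0.01)
        self.W_self = nn.Linear(in_ch, out_ch)  # center-node term

    def forward(self, x, neighbor_idx):
        # x: (B, N, in_ch); neighbor_idx: (N, K) neighbor indices per node
        neigh = x[:, neighbor_idx]                                  # (B, N, K, in_ch)
        soft_assign = torch.softmax(self.A, dim=-1)                 # (N, K, K)
        neigh = torch.einsum('nkj,bnjc->bnkc', soft_assign, neigh)  # re-weighted slots
        out = torch.einsum('bnkc,kcd->bnd', neigh, self.W)          # slot-wise filter
        return out + self.W_self(x)
```

For a mesh with N vertices and a precomputed one-ring (or k-nearest) neighbor index tensor of shape (N, K), this could be used as `LSAConv(N, 3, 64, K)(vertex_positions, neighbor_idx)`; the choice of K and the center-node linear term are illustrative assumptions.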
Related papers
- MeT: A Graph Transformer for Semantic Segmentation of 3D Meshes [10.667492516216887]
We propose a transformer-based method for semantic segmentation of 3D meshes.
We perform positional encoding by means of the eigenvectors of the graph Laplacian derived from the adjacency matrix.
We show how the proposed approach yields state-of-the-art performance on semantic segmentation of 3D meshes.
arXiv Detail & Related papers (2023-07-03T15:45:14Z)
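The Laplacian-eigenvector positional encoding mentioned in the MeT entry above is easy to prototype; the sketch below uses SciPy and a symmetric normalized Laplacian, which is one common choice (the normalization and the number of eigenvectors `k` are assumptions, not details taken from that paper).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def laplacian_positional_encoding(adj, k=16):
    """Return the k eigenvectors of the normalized graph Laplacian with the
    smallest non-trivial eigenvalues, usable as per-node positional encodings.

    adj: (N, N) scipy sparse adjacency matrix of the mesh graph.
    """
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = sp.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt  # L = I - D^-1/2 A D^-1/2
    # smallest eigenpairs; skip the first (constant) eigenvector
    vals, vecs = eigsh(lap, k=k + 1, which='SM')
    order = np.argsort(vals)
    return vecs[:, order[1:k + 1]]  # (N, k)
```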
- SeMLaPS: Real-time Semantic Mapping with Latent Prior Networks and Quasi-Planar Segmentation [53.83313235792596]
We present a new methodology for real-time semantic mapping from RGB-D sequences.
It combines a 2D neural network and a 3D network based on a SLAM system with 3D occupancy mapping.
Our system achieves state-of-the-art semantic mapping quality among 2D-3D network-based systems.
arXiv Detail & Related papers (2023-06-28T22:36:44Z)
- Dual Octree Graph Networks for Learning Adaptive Volumetric Shape Representations [21.59311861556396]
Our method encodes the volumetric field of a 3D shape with an adaptive feature volume organized by an octree.
An encoder-decoder network is designed to learn the adaptive feature volume based on the graph convolutions over the dual graph of octree nodes.
Our method effectively encodes shape details, enables fast 3D shape reconstruction, and exhibits good generality for modeling 3D shapes out of training categories.
arXiv Detail & Related papers (2022-05-05T17:56:34Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Dense Graph Convolutional Neural Networks on 3D Meshes for 3D Object Segmentation and Classification [0.0]
We present new designs of graph convolutional neural networks (GCNs) on 3D meshes for 3D object classification and segmentation.
We use the faces of the mesh as basic processing units and represent a 3D mesh as a graph where each node corresponds to a face.
arXiv Detail & Related papers (2021-06-30T02:17:16Z)
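As a concrete illustration of the face-as-node representation used in the entry above, the sketch below builds a face-adjacency graph for a triangle mesh, connecting two faces whenever they share an edge (the helper name and data layout are illustrative assumptions, not that paper's code).

```python
from collections import defaultdict

def face_adjacency_graph(faces):
    """Build a face-adjacency graph for a triangle mesh.

    faces: list of (i, j, k) vertex-index triples, one per triangular face.
    Returns a list of (face_a, face_b) pairs that share an edge, so each
    face becomes a graph node and shared edges become graph edges.
    """
    edge_to_faces = defaultdict(list)
    for f_idx, (i, j, k) in enumerate(faces):
        for a, b in ((i, j), (j, k), (k, i)):
            edge_to_faces[(min(a, b), max(a, b))].append(f_idx)
    adjacency = []
    for fs in edge_to_faces.values():
        # a manifold edge is shared by exactly two faces
        for m in range(len(fs)):
            for n in range(m + 1, len(fs)):
                adjacency.append((fs[m], fs[n]))
    return adjacency
```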
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- Rotation-Invariant Local-to-Global Representation Learning for 3D Point Cloud [42.86112554931754]
We propose a local-to-global representation learning algorithm for 3D point cloud data.
Our model takes advantage of multi-level abstraction based on graph convolutional neural networks.
The proposed algorithm achieves state-of-the-art performance on rotation-augmented 3D object recognition and segmentation benchmarks.
arXiv Detail & Related papers (2020-10-07T10:30:20Z)
- SeqXY2SeqZ: Structure Learning for 3D Shapes by Sequentially Predicting 1D Occupancy Segments From 2D Coordinates [61.04823927283092]
We propose to represent 3D shapes using 2D functions, where the output of the function at each 2D location is a sequence of line segments inside the shape.
We implement this approach using a Seq2Seq model with attention, called SeqXY2SeqZ, which learns the mapping from a sequence of 2D coordinates along two arbitrary axes to a sequence of 1D locations along the third axis.
Our experiments show that SeqXY2SeqZ outperforms state-of-the-art methods on widely used benchmarks.
arXiv Detail & Related papers (2020-03-12T00:24:36Z)
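The SeqXY2SeqZ entry above represents a shape as, for every (x, y) location, a sequence of occupied 1D segments along z; the sketch below extracts that representation from a binary voxel grid (the grid input and run-length extraction are my reading of the idea, not the paper's code).

```python
import numpy as np

def occupancy_segments(voxels):
    """For each (x, y) column of a binary voxel grid, return the list of
    occupied z-intervals [(z_start, z_end), ...], i.e. the 1D occupancy
    segments that SeqXY2SeqZ-style models predict from 2D coordinates.
    """
    X, Y, Z = voxels.shape
    segments = {}
    for x in range(X):
        for y in range(Y):
            col = voxels[x, y]
            runs, start = [], None
            for z in range(Z):
                if col[z] and start is None:
                    start = z                      # segment begins
                elif not col[z] and start is not None:
                    runs.append((start, z - 1))    # segment ends
                    start = None
            if start is not None:
                runs.append((start, Z - 1))        # segment reaches the top
            if runs:
                segments[(x, y)] = runs
    return segments
```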
- 3D Shape Segmentation with Geometric Deep Learning [2.512827436728378]
We propose a neural-network-based approach that generates 3D augmented views of a 3D shape so that the full segmentation task is solved as a set of sub-segmentation problems.
We validate our approach using 3D shapes of publicly available datasets and of real objects that are reconstructed using photogrammetry techniques.
arXiv Detail & Related papers (2020-02-02T14:11:16Z)