Dual Octree Graph Networks for Learning Adaptive Volumetric Shape
Representations
- URL: http://arxiv.org/abs/2205.02825v2
- Date: Fri, 6 May 2022 05:02:12 GMT
- Title: Dual Octree Graph Networks for Learning Adaptive Volumetric Shape
Representations
- Authors: Peng-Shuai Wang, Yang Liu, Xin Tong
- Abstract summary: Our method encodes the volumetric field of a 3D shape with an adaptive feature volume organized by an octree.
An encoder-decoder network is designed to learn the adaptive feature volume based on the graph convolutions over the dual graph of octree nodes.
Our method effectively encodes shape details, enables fast 3D shape reconstruction, and exhibits good generality for modeling 3D shapes out of training categories.
- Score: 21.59311861556396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an adaptive deep representation of volumetric fields of 3D shapes
and an efficient approach to learn this deep representation for high-quality 3D
shape reconstruction and auto-encoding. Our method encodes the volumetric field
of a 3D shape with an adaptive feature volume organized by an octree and
applies a compact multilayer perceptron network for mapping the features to the
field value at each 3D position. An encoder-decoder network is designed to
learn the adaptive feature volume based on the graph convolutions over the dual
graph of octree nodes. The core of our network is a new graph convolution
operator defined over a regular grid of features fused from irregular
neighboring octree nodes at different levels, which not only reduces the
computational and memory cost of the convolutions over irregular neighboring
octree nodes, but also improves the performance of feature learning. Our method
effectively encodes shape details, enables fast 3D shape reconstruction, and
exhibits good generality for modeling 3D shapes out of training categories. We
evaluate our method on a set of reconstruction tasks of 3D shapes and scenes
and validate its superiority over other existing approaches. Our code, data,
and trained models are available at https://wang-ps.github.io/dualocnn.
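The two core ideas in the abstract — fusing features of irregular dual-octree neighbors onto a regular grid of slots before convolving, and decoding the field value at a 3D position from node features with a compact MLP — can be sketched roughly as follows. This is a minimal NumPy sketch; the function names, the mean-pooling fusion, and the empty-slot fallback are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dual_graph_conv(node_feat, neighbor_feats_per_slot, kernel, bias):
    """Sketch of a graph convolution over the dual octree graph (hypothetical
    names). Neighbors are irregular (varying counts and octree levels), so the
    neighbors falling into each of K fixed spatial slots are first fused
    (here: averaged) into one regular entry, then a standard per-slot weight
    matrix is applied, as in an ordinary grid convolution."""
    K, C_out, C_in = kernel.shape
    out = bias.copy()
    for k in range(K):
        feats = neighbor_feats_per_slot[k]   # list of neighbor features in slot k
        if feats:
            fused = np.mean(feats, axis=0)   # fuse irregular neighbors into one entry
        else:
            fused = node_feat                # assumption: empty slot falls back to center
        out += kernel[k] @ fused
    return out

def field_value(point, node_feature, node_center, node_size, weights):
    """Sketch of the compact MLP decoder: map the feature of the octree node
    containing `point`, plus normalized local coordinates, to a scalar field
    value (e.g. an occupancy or signed distance). `weights` = (W1, b1, W2, b2)
    stand in for trained parameters."""
    local = (np.asarray(point) - node_center) / node_size  # local coords in the node
    x = np.concatenate([node_feature, local])
    W1, b1, W2, b2 = weights
    h = np.maximum(W1 @ x + b1, 0.0)        # ReLU hidden layer
    return float(W2 @ h + b2)               # scalar field value
```

In the actual network the fused regular grid is what makes the operator cheap: the irregular neighbor lookup happens once per slot, after which the convolution itself is a fixed-size weighted sum.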
Related papers
- Locally Adaptive Neural 3D Morphable Models [38.38400553022714]
We present the Locally Adaptive Morphable Model (LAMM), a framework for learning to generate and manipulate 3D meshes.
A very efficient computational graph allows our network to train with only a fraction of the memory required by previous methods.
We further leverage local geometry control as a primitive for higher level editing operations and present a set of derivative capabilities.
arXiv Detail & Related papers (2024-01-05T18:28:51Z)
- Spatial-Spectral Hyperspectral Classification based on Learnable 3D Group Convolution [18.644268589334217]
This paper proposes a learnable group convolution network (LGCNet) based on an improved 3D-DenseNet model and a lightweight model design.
The LGCNet module improves the shortcomings of group convolution by introducing a dynamic learning method for the input channels and convolution kernel grouping.
LGCNet has achieved progress in inference speed and accuracy, and outperforms mainstream hyperspectral image classification methods on the Indian Pines, Pavia University, and KSC datasets.
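The "dynamic learning method for input channels and convolution kernel grouping" mentioned above can be pictured as replacing a fixed channel-to-group partition with a learned soft assignment. A minimal NumPy sketch under that assumption (the softmax-gated assignment and all names here are illustrative, not LGCNet's actual mechanism):

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def learnable_group_conv1x1(x, group_logits, group_weights):
    """Sketch of learnable grouping at one spatial position (hypothetical):
    a learned logit per (channel, group) softly assigns each input channel
    to G groups; each group applies its own 1x1 convolution and the group
    outputs are summed.
    x: (C_in,); group_logits: (C_in, G); group_weights: (G, C_out, C_in)."""
    assign = softmax(group_logits, axis=1)   # soft channel-to-group assignment
    G = group_weights.shape[0]
    out = np.zeros(group_weights.shape[1])
    for g in range(G):
        xg = x * assign[:, g]                # channels weighted by group membership
        out += group_weights[g] @ xg
    return out
```

With G = 1 this reduces to an ordinary 1x1 convolution; with hard one-hot assignments it recovers classic fixed group convolution.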
arXiv Detail & Related papers (2023-07-15T05:47:12Z)
- DGCNet: An Efficient 3D-Densenet based on Dynamic Group Convolution for Hyperspectral Remote Sensing Image Classification [22.025733502296035]
We introduce DGCNet, a lightweight model based on an improved 3D-DenseNet design.
Multiple groups can capture different and complementary visual and semantic features of input images, allowing convolutional neural networks (CNNs) to learn rich features.
The inference speed and accuracy have been improved, with outstanding performance on the IN, Pavia and KSC datasets.
arXiv Detail & Related papers (2023-07-13T10:19:48Z)
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves state-of-the-art performance, especially when only a few propagation steps are used.
arXiv Detail & Related papers (2022-10-19T17:56:03Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
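Rendering an SDF, neural or otherwise, is typically done by sphere tracing: each ray safely advances by the signed distance at its current point until it hits the surface. A minimal sketch (the parameter names are illustrative; in the paper above `sdf` would be the neural network, here any callable point → signed distance works):

```python
import numpy as np

def sphere_trace(sdf, origin, direction, max_steps=64, eps=1e-4, t_max=10.0):
    """Sphere tracing: march along the ray, stepping by the signed distance,
    which is guaranteed not to overshoot the surface."""
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)
        if d < eps:
            return t        # hit: distance to the surface along the ray
        t += d              # safe step by the signed distance
        if t > t_max:
            break
    return None             # miss

# Usage with a unit-sphere SDF standing in for a trained network:
unit_sphere = lambda p: np.linalg.norm(p) - 1.0
```

The paper's speedup comes from organizing the neural SDF in a sparse octree of small local networks, so each `sdf(p)` query above is cheap; the marching loop itself is standard.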
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- Training Data Generating Networks: Shape Reconstruction via Bi-level Optimization [52.17872739634213]
We propose a novel 3D shape representation for 3D shape reconstruction from a single image.
We train a network to generate a training set which will be fed into another learning algorithm to define the shape.
arXiv Detail & Related papers (2020-10-16T09:52:13Z)
- Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv)
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work in 3D object reconstruction in ShapeNet, and obtain significantly more accurate 3D human reconstructions.
arXiv Detail & Related papers (2020-03-03T11:14:29Z)
- 3D Shape Segmentation with Geometric Deep Learning [2.512827436728378]
We propose a neural-network-based approach that produces 3D augmented views of the 3D shape, decomposing the full segmentation task into sub-segmentation problems.
We validate our approach using 3D shapes of publicly available datasets and of real objects that are reconstructed using photogrammetry techniques.
arXiv Detail & Related papers (2020-02-02T14:11:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.