OctField: Hierarchical Implicit Functions for 3D Modeling
- URL: http://arxiv.org/abs/2111.01067v1
- Date: Mon, 1 Nov 2021 16:29:39 GMT
- Title: OctField: Hierarchical Implicit Functions for 3D Modeling
- Authors: Jia-Heng Tang, Weikai Chen, Jie Yang, Bo Wang, Songrun Liu, Bo Yang,
Lin Gao
- Abstract summary: We present a learnable hierarchical implicit representation for 3D surfaces, termed OctField, that allows high-precision encoding of intricate surfaces with a low memory and computational budget.
We achieve this goal by introducing a hierarchical octree structure to adaptively subdivide the 3D space according to the surface occupancy and the richness of part geometry.
- Score: 18.488778913029805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in localized implicit functions have enabled neural implicit
representation to be scalable to large scenes. However, the regular subdivision
of 3D space employed by these approaches fails to take into account the
sparsity of the surface occupancy and the varying granularities of geometric
details. As a result, its memory footprint grows cubically with the input
volume, leading to a prohibitive computational cost even at a moderately dense
decomposition. In this work, we present a learnable hierarchical implicit
representation for 3D surfaces, termed OctField, that allows high-precision
encoding of intricate surfaces with low memory and computational budget. The
key to our approach is an adaptive decomposition of 3D scenes that only
distributes local implicit functions around the surface of interest. We achieve
this goal by introducing a hierarchical octree structure to adaptively
subdivide the 3D space according to the surface occupancy and the richness of
part geometry. As the octree structure is discrete and non-differentiable, we further propose
a novel hierarchical network that models the subdivision of octree cells as a
probabilistic process and recursively encodes and decodes both octree structure
and surface geometry in a differentiable manner. We demonstrate the value of
OctField for a range of shape modeling and reconstruction tasks, showing
superiority over alternative approaches.
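
To make the core idea concrete, below is a minimal, self-contained sketch of an adaptive octree that subdivides cells only where surface samples are present and attaches a small local implicit decoder to each occupied leaf. It follows the abstract's description only at a high level: the class names, the hard occupancy-based subdivision rule (standing in for the paper's learned, probabilistic subdivision), and the randomly initialized latent codes are illustrative assumptions, not the authors' implementation.

```python
# Sketch: adaptive octree + local implicit decoders (assumptions noted in comments).
import torch
import torch.nn as nn


class LocalImplicitDecoder(nn.Module):
    """Maps a cell-local coordinate plus a latent code to an occupancy logit."""

    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, local_xyz, latent):
        return self.net(torch.cat([local_xyz, latent], dim=-1)).squeeze(-1)


class OctreeCell:
    """One cube of the adaptive decomposition."""

    def __init__(self, center, half_size, depth):
        self.center = center        # (3,) tensor
        self.half_size = half_size  # scalar
        self.depth = depth
        self.children = []          # empty list => leaf
        self.latent = None          # latent code only for occupied leaves


def build_octree(cell, surface_points, max_depth, latent_dim=64):
    """Subdivide only cells that contain surface samples (surface occupancy).
    OctField learns a subdivision probability per cell; a hard geometric
    criterion is used here for clarity."""
    inside = ((surface_points - cell.center).abs() <= cell.half_size).all(dim=-1)
    if not inside.any():
        return cell                            # empty cell: keep coarse, no decoder
    if cell.depth == max_depth:
        cell.latent = torch.randn(latent_dim)  # placeholder: would come from an encoder
        return cell
    for dx in (-0.5, 0.5):
        for dy in (-0.5, 0.5):
            for dz in (-0.5, 0.5):
                offset = torch.tensor([dx, dy, dz]) * cell.half_size
                child = OctreeCell(cell.center + offset, cell.half_size / 2, cell.depth + 1)
                cell.children.append(build_octree(child, surface_points[inside], max_depth, latent_dim))
    return cell


def query(cell, xyz, decoder):
    """Route a query point to the leaf that contains it and decode occupancy."""
    while cell.children:
        idx = int(((xyz > cell.center).long() * torch.tensor([4, 2, 1])).sum())
        cell = cell.children[idx]
    if cell.latent is None:
        return torch.tensor(-10.0)                # empty leaf: confidently unoccupied
    local = (xyz - cell.center) / cell.half_size  # normalize to the cell frame
    return decoder(local, cell.latent)


if __name__ == "__main__":
    pts = torch.rand(2048, 3) * 2 - 1  # random points standing in for surface samples
    root = build_octree(OctreeCell(torch.zeros(3), 1.0, 0), pts, max_depth=4)
    print(query(root, torch.tensor([0.1, -0.3, 0.5]), LocalImplicitDecoder()))
```

Because empty cells receive neither children nor latent codes, memory scales with the occupied surface area rather than cubically with the volume, which is the point the abstract makes against regular grid subdivision.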
Related papers
- Optimizing 3D Geometry Reconstruction from Implicit Neural Representations [2.3940819037450987] (arXiv, 2024-10-16)
  Implicit neural representations have emerged as a powerful tool in learning 3D geometry.
  We present a novel approach that both reduces computational expenses and enhances the capture of fine details.
- GALA: Geometry-Aware Local Adaptive Grids for Detailed 3D Generation [28.299293407423455] (arXiv, 2024-10-13)
  GALA is a novel representation of 3D shapes that excels at capturing and reproducing complex geometry and surface details.
  With our optimized C++/CUDA implementation, GALA can be fitted to an object in less than 10 seconds.
  We provide a cascaded generation pipeline capable of generating 3D shapes with great geometric detail.
- X-3D: Explicit 3D Structure Modeling for Point Cloud Recognition [73.0588783479853] (arXiv, 2024-04-23)
  X-3D is an explicit 3D structure modeling approach that captures local structural information within the input 3D space.
  It produces dynamic kernels with shared weights for all neighborhood points within the current local region.
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004] (arXiv, 2022-12-17)
  We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
  Our method performs favorably against the current state-of-the-art competitors.
- Neural 3D Scene Reconstruction with the Manhattan-world Assumption [58.90559966227361] (arXiv, 2022-05-05)
  This paper addresses the challenge of reconstructing 3D indoor scenes from multi-view images.
  Planar constraints can be conveniently integrated into recent implicit neural representation-based reconstruction methods.
  The proposed method outperforms previous methods by a large margin in 3D reconstruction quality.
- Learning Smooth Neural Functions via Lipschitz Regularization [92.42667575719048] (arXiv, 2022-02-16)
  We introduce a novel regularization designed to encourage smooth latent spaces in neural fields.
  Compared with prior Lipschitz-regularized networks, ours is computationally fast and can be implemented in four lines of code (a hedged sketch of this style of regularization follows this list).
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707] (arXiv, 2021-11-08)
  DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
  Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
- H3D-Net: Few-Shot High-Fidelity 3D Head Reconstruction [27.66008315400462] (arXiv, 2021-07-26)
  Recent learning approaches that implicitly represent surface geometry have shown impressive results in multi-view 3D reconstruction.
  We tackle these limitations for the specific problem of few-shot full 3D head reconstruction by learning a shape model of 3D heads from thousands of incomplete raw scans using implicit representations.
- Deep Implicit Surface Point Prediction Networks [49.286550880464866] (arXiv, 2021-06-10)
  Deep neural representations of 3D shapes as implicit functions have been shown to produce high-fidelity models.
  This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.