Learning Deep Implicit Functions for 3D Shapes with Dynamic Code Clouds
- URL: http://arxiv.org/abs/2203.14048v2
- Date: Wed, 30 Mar 2022 03:15:32 GMT
- Title: Learning Deep Implicit Functions for 3D Shapes with Dynamic Code Clouds
- Authors: Tianyang Li, Xin Wen, Yu-Shen Liu, Hua Su, Zhizhong Han
- Abstract summary: Deep Implicit Function (DIF) has gained popularity as an efficient 3D shape representation.
We propose to learn DIF with Dynamic Code Cloud, named DCC-DIF.
Our method explicitly associates local codes with learnable position vectors, and the position vectors are continuous and can be dynamically optimized.
- Score: 56.385495276042406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Implicit Function (DIF) has gained popularity as an efficient 3D shape
representation. To capture geometry details, current methods usually learn DIF
using local latent codes, which discretize the space into a regular 3D grid (or
octree) and store local codes in grid points (or octree nodes). Given a query
point, the local feature is computed by interpolating its neighboring local
codes with their positions. However, the local codes are constrained to
discrete, regular positions such as grid points, which makes the code positions
difficult to optimize and limits their representation ability. To solve
this problem, we propose to learn DIF with Dynamic Code Cloud, named DCC-DIF.
Our method explicitly associates local codes with learnable position vectors,
and the position vectors are continuous and can be dynamically optimized, which
improves the representation ability. In addition, we propose a novel code
position loss to optimize the code positions, which heuristically guides more
local codes to be distributed around complex geometric details. In contrast to
previous methods, our DCC-DIF represents 3D shapes more efficiently with a
small number of local codes and improves reconstruction quality.
Experiments demonstrate that DCC-DIF achieves better performance than previous
methods. Code and data are available at https://github.com/lity20/DCCDIF.
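The local-feature lookup the abstract describes, weighting neighboring local codes by their continuous, learnable positions, can be sketched as below. This is a minimal illustration assuming inverse-distance weighting over the k nearest codes; the function name and weighting scheme are illustrative stand-ins, not the authors' implementation:

```python
import numpy as np

def interpolate_code_cloud(query, code_positions, codes, k=8, eps=1e-8):
    """Inverse-distance-weighted feature lookup over a code cloud.

    query:          (3,)   query point
    code_positions: (M, 3) continuous (learnable) code positions
    codes:          (M, C) local latent codes
    """
    d = np.linalg.norm(code_positions - query, axis=1)  # (M,) distances
    idx = np.argsort(d)[:k]                             # k nearest codes
    w = 1.0 / (d[idx] + eps)                            # inverse-distance weights
    w /= w.sum()
    return (w[:, None] * codes[idx]).sum(axis=0)        # (C,) local feature
```

Because the positions enter the forward pass only through differentiable distances, they can be registered as trainable parameters and moved toward complex geometric detail during training, which is the property the code position loss exploits.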
Related papers
- Oriented-grid Encoder for 3D Implicit Representations [10.02138130221506]
This paper is the first to exploit 3D characteristics in 3D geometric encoders explicitly.
Our method gets state-of-the-art results when compared to the prior techniques.
arXiv Detail & Related papers (2024-02-09T19:28:13Z)
- ConDaFormer: Disassembled Transformer with Local Structure Enhancement for 3D Point Cloud Understanding [105.98609765389895]
Transformers have been recently explored for 3D point cloud understanding.
A large number of points, often over 0.1 million, makes global self-attention infeasible for point cloud data.
In this paper, we develop a new transformer block, named ConDaFormer.
arXiv Detail & Related papers (2023-12-18T11:19:45Z)
- ALSTER: A Local Spatio-Temporal Expert for Online 3D Semantic Reconstruction [62.599588577671796]
We propose an online 3D semantic segmentation method that incrementally reconstructs a 3D semantic map from a stream of RGB-D frames.
Unlike offline methods, ours is directly applicable to scenarios with real-time constraints, such as robotics or mixed reality.
arXiv Detail & Related papers (2023-11-29T20:30:18Z)
- Quadric Representations for LiDAR Odometry, Mapping and Localization [93.24140840537912]
Current LiDAR odometry, mapping and localization methods leverage point-wise representations of 3D scenes.
We propose a novel method of describing scenes using quadric surfaces, which are far more compact representations of 3D objects.
Our method maintains low latency and memory utility while achieving competitive, and even superior, accuracy.
arXiv Detail & Related papers (2023-04-27T13:52:01Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our method performs favorably against current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- Dual Octree Graph Networks for Learning Adaptive Volumetric Shape Representations [21.59311861556396]
Our method encodes the volumetric field of a 3D shape with an adaptive feature volume organized by an octree.
An encoder-decoder network is designed to learn the adaptive feature volume based on the graph convolutions over the dual graph of octree nodes.
Our method effectively encodes shape details, enables fast 3D shape reconstruction, and exhibits good generality for modeling 3D shapes out of training categories.
arXiv Detail & Related papers (2022-05-05T17:56:34Z)
- Distinctive 3D local deep descriptors [2.512827436728378]
Point cloud patches are extracted, canonicalised with respect to their estimated local reference frame and encoded by a PointNet-based deep neural network.
We evaluate and compare DIPs against alternative hand-crafted and deep descriptors on several datasets consisting of point clouds reconstructed using different sensors.
arXiv Detail & Related papers (2020-09-01T06:25:06Z)
- DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization [56.15308829924527]
We propose a Siamese network that jointly learns 3D local feature detection and description directly from raw 3D points.
For detecting 3D keypoints we predict the discriminativeness of the local descriptors in an unsupervised manner.
Experiments on various benchmarks demonstrate that our method achieves competitive results for both global point cloud retrieval and local point cloud registration.
arXiv Detail & Related papers (2020-07-17T20:21:22Z)
- Local Implicit Grid Representations for 3D Scenes [24.331110387905962]
We introduce Local Implicit Grid Representations, a new 3D shape representation designed for scalability and generality.
We train an autoencoder to learn an embedding of local crops of 3D shapes at a fixed part size.
Then, we use the decoder as a component in a shape optimization that solves for a set of latent codes on a regular grid of overlapping crops.
arXiv Detail & Related papers (2020-03-19T18:58:13Z)
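The last step of that pipeline, holding the pretrained decoder fixed and solving for the latent codes by optimization, can be sketched as follows. The polynomial "decoder", sample data, and function names here are toy stand-ins for illustration, not the paper's network:

```python
import numpy as np

def fit_latent(decode, target, x, dim, steps=400, lr=0.3, h=1e-4):
    """Gradient-descend a latent code z so that decode(z, x) matches the
    observed values `target` at sample points x, with the decoder held
    fixed (central-difference gradients for brevity; a real system would
    backpropagate through the frozen decoder network)."""
    z = np.zeros(dim)
    mse = lambda z: np.mean((decode(z, x) - target) ** 2)
    for _ in range(steps):
        grad = np.zeros(dim)
        for i in range(dim):
            dz = np.zeros(dim)
            dz[i] = h
            grad[i] = (mse(z + dz) - mse(z - dz)) / (2 * h)  # central difference
        z -= lr * grad
    return z

# Toy quadratic-polynomial "decoder" standing in for the pretrained part
# decoder: value = z . [1, x, x^2].
feats = lambda x: np.stack([np.ones_like(x), x, x ** 2], axis=-1)
decode = lambda z, x: feats(x) @ z
```

In the actual method, one such code would be optimized per grid cell, with overlapping crops blended at cell boundaries; the sketch shows only the single-cell fit.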
This list is automatically generated from the titles and abstracts of the papers in this site.