Spline Positional Encoding for Learning 3D Implicit Signed Distance Fields
- URL: http://arxiv.org/abs/2106.01553v1
- Date: Thu, 3 Jun 2021 02:37:47 GMT
- Title: Spline Positional Encoding for Learning 3D Implicit Signed Distance Fields
- Authors: Peng-Shuai Wang, Yang Liu, Yu-Qi Yang, Xin Tong
- Abstract summary: Multilayer perceptrons (MLPs) have been successfully used to represent 3D shapes implicitly and compactly.
In this paper, we propose a novel positional encoding scheme, called Spline Positional Encoding, to help recover 3D signed distance fields from unorganized 3D point clouds.
- Score: 18.6244227624508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multilayer perceptrons (MLPs) have been successfully used to represent 3D shapes implicitly and compactly by mapping 3D coordinates to the corresponding signed distance or occupancy values. In this paper, we propose a novel positional encoding scheme, called Spline Positional Encoding, that maps input coordinates to a high-dimensional space before passing them to MLPs, helping to recover 3D signed distance fields with fine-scale geometric details from unorganized 3D point clouds. We verify the superiority of our approach over other positional encoding schemes on the tasks of 3D shape reconstruction from input point clouds and shape space learning. We also demonstrate and evaluate the efficacy of our approach when extended to image reconstruction.
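To make the idea concrete, here is a minimal sketch, assuming per-dimension learnable spline curves with linear (degree-1) bases on a uniform knot grid; the paper's full scheme also projects coordinates along multiple directions and supports higher-order splines, so all names and sizes below are illustrative.

```python
import torch
import torch.nn as nn

class SplinePositionalEncoding(nn.Module):
    """Illustrative sketch: encode each input coordinate with learnable
    piecewise-linear splines (degree-1 B-splines on uniform knots)."""

    def __init__(self, in_dim=3, channels=32, knots=16):
        super().__init__()
        self.knots = knots
        # One learnable spline curve per (input dimension, output channel).
        self.coeff = nn.Parameter(0.1 * torch.randn(in_dim, knots, channels))

    def forward(self, x):                        # x: (N, in_dim), in [-1, 1]
        t = (x + 1) / 2 * (self.knots - 1)       # continuous knot coordinate
        i0 = t.floor().long().clamp(0, self.knots - 2)
        w = (t - i0.float()).unsqueeze(-1)       # (N, in_dim, 1) interp weight
        dims = torch.arange(x.shape[1])          # (in_dim,)
        c0 = self.coeff[dims, i0]                # (N, in_dim, channels)
        c1 = self.coeff[dims, i0 + 1]
        enc = (1 - w) * c0 + w * c1              # linear spline evaluation
        return enc.flatten(1)                    # (N, in_dim * channels)
```

The encoded features would then feed a small MLP that regresses the signed distance at each query point, e.g. `sdf = mlp(spe(points))`.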
Related papers
- POC-SLT: Partial Object Completion with SDF Latent Transformers [1.5999407512883512]
3D geometric shape completion hinges on representation learning and a deep understanding of geometric data.
We propose a transformer operating on a latent space representing Signed Distance Fields (SDFs). Instead of a monolithic volume, the SDF of an object is partitioned into smaller high-resolution patches, leading to a sequence of latent codes.
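As a rough illustration of the patch partition described above (a sketch, not POC-SLT's actual pipeline), the snippet below splits a dense SDF grid into a sequence of flattened patch tokens; the grid size, patch size, and the omitted learned patch encoder are assumptions.

```python
import torch

sdf = torch.randn(64, 64, 64)          # stand-in for a 64^3 SDF grid
patch = 8                              # assumed patch resolution
tokens = (
    sdf.unfold(0, patch, patch)
       .unfold(1, patch, patch)
       .unfold(2, patch, patch)        # (8, 8, 8, patch, patch, patch)
       .reshape(-1, patch ** 3)        # sequence of 512 flattened patches
)
# In POC-SLT each patch would be mapped to a latent code by a learned
# encoder before entering the transformer; flattening stands in here.
print(tokens.shape)                    # torch.Size([512, 512])
```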
arXiv Detail & Related papers (2024-11-08T09:13:20Z)
- NDC-Scene: Boost Monocular 3D Semantic Scene Completion in Normalized Device Coordinates Space [77.6067460464962]
Monocular 3D Semantic Scene Completion (SSC) has garnered significant attention in recent years due to its potential to predict complex semantics and geometric shapes from a single image, requiring no 3D input.
We identify several critical issues in current state-of-the-art methods, including the Feature Ambiguity of 2D features projected along rays into 3D space, the Pose Ambiguity of the 3D convolution, and the Imbalance in the 3D convolution across different depth levels.
We devise a novel Normalized Device Coordinates scene completion network (NDC-Scene) that directly extends the 2D feature map to the normalized device coordinates space rather than the world space.
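A minimal sketch of the depth-restoration idea, assuming the 2D feature map gains a singleton depth axis that 3D deconvolutions progressively expand; channel widths, kernel sizes, and layer count are illustrative, not NDC-Scene's actual architecture.

```python
import torch
import torch.nn as nn

# Lift a 2D feature map (B, C, H, W) into a 3D grid by expanding a
# size-1 depth axis with deconvolutions along depth only.
lift = nn.Sequential(
    nn.ConvTranspose3d(256, 128, kernel_size=(4, 1, 1), stride=(4, 1, 1)),
    nn.ReLU(inplace=True),
    nn.ConvTranspose3d(128, 64, kernel_size=(4, 1, 1), stride=(4, 1, 1)),
)

feat2d = torch.randn(1, 256, 30, 40)     # image features
feat3d = lift(feat2d.unsqueeze(2))       # depth dim starts at 1
print(feat3d.shape)                      # torch.Size([1, 64, 16, 30, 40])
```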
arXiv Detail & Related papers (2023-09-26T02:09:52Z)
- Neural Voting Field for Camera-Space 3D Hand Pose Estimation [106.34750803910714]
We present a unified framework for camera-space 3D hand pose estimation from a single RGB image based on 3D implicit representation.
We propose a novel unified 3D dense regression scheme to estimate camera-space 3D hand pose via dense 3D point-wise voting in the camera frustum.
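A hedged sketch of the dense voting idea: each sampled frustum point casts a per-joint 3D vote with a confidence, and votes are aggregated by a confidence-weighted average. The tensors below stand in for network predictions, and their shapes are assumptions.

```python
import torch

N, J = 4096, 21                              # sampled points, hand joints
points = torch.randn(N, 3)                   # frustum samples (camera space)
offsets = torch.randn(N, J, 3)               # network-predicted joint offsets
conf = torch.rand(N, J)                      # network-predicted confidences

votes = points.unsqueeze(1) + offsets        # (N, J, 3) per-point joint votes
w = conf / conf.sum(dim=0, keepdim=True)     # normalize weights over points
joints = (w.unsqueeze(-1) * votes).sum(0)    # (J, 3) aggregated joint positions
```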
arXiv Detail & Related papers (2023-05-07T16:51:34Z)
- Coordinates Are NOT Lonely -- Codebook Prior Helps Implicit Neural 3D Representations [29.756718435405983]
Implicit neural 3D representation has achieved impressive results in surface or scene reconstruction and novel view synthesis.
Existing approaches, such as Neural Radiance Field (NeRF) and its variants, usually require dense input views.
We introduce a novel coordinate-based model, CoCo-INR, for implicit neural 3D representation.
arXiv Detail & Related papers (2022-10-20T11:13:50Z)
- CorrI2P: Deep Image-to-Point Cloud Registration via Dense Correspondence [51.91791056908387]
We propose the first feature-based dense correspondence framework for addressing the image-to-point cloud registration problem, dubbed CorrI2P.
Specifically, given a pair of a 2D image and a 3D point cloud, we first transform them into high-dimensional feature space and feed the features into a symmetric overlapping region detector to determine the region where the image and point cloud overlap.
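Once such features are extracted, dense correspondences can be sketched as nearest neighbors in feature space; the snippet below is an illustration under assumed feature shapes, not CorrI2P's actual matching module.

```python
import torch
import torch.nn.functional as F

# Per-pixel and per-point features, restricted to the predicted overlap
# region; the counts and feature width here are assumptions.
pix_feat = F.normalize(torch.randn(1200, 64), dim=1)   # overlap pixels
pt_feat = F.normalize(torch.randn(3000, 64), dim=1)    # overlap points

sim = pix_feat @ pt_feat.t()            # (1200, 3000) cosine similarities
match = sim.argmax(dim=1)               # best point index for each pixel
# The resulting 2D-3D pairs would then feed a PnP + RANSAC solver to
# estimate the camera pose, as CorrI2P does downstream.
```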
arXiv Detail & Related papers (2022-07-12T11:49:31Z)
- Learning Anchored Unsigned Distance Functions with Gradient Direction Alignment for Single-view Garment Reconstruction [92.23666036481399]
We propose a novel learnable Anchored Unsigned Distance Function (AnchorUDF) representation for 3D garment reconstruction from a single image.
AnchorUDF represents 3D shapes by predicting unsigned distance fields (UDFs) to enable open garment surface modeling at arbitrary resolution.
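Given a learned UDF, points can be projected onto the open surface by stepping against the distance gradient, which is where gradient direction alignment matters. Below is a generic hedged sketch: the `udf` callable and the step count are assumptions, and this is the standard UDF projection recipe rather than AnchorUDF's exact procedure.

```python
import torch

def project_to_surface(udf, p, steps=5):
    """Move query points p (N, 3) toward the zero set of a differentiable
    UDF via p <- p - udf(p) * grad(p) / |grad(p)|."""
    for _ in range(steps):
        p = p.detach().requires_grad_(True)
        d = udf(p)                                   # (N,) unsigned distances
        (g,) = torch.autograd.grad(d.sum(), p)       # d(udf)/dp, shape (N, 3)
        p = p - d.unsqueeze(-1) * torch.nn.functional.normalize(g, dim=-1)
    return p.detach()
```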
arXiv Detail & Related papers (2021-08-19T03:45:38Z)
- KAPLAN: A 3D Point Descriptor for Shape Completion [80.15764700137383]
KAPLAN is a 3D point descriptor that aggregates local shape information via a series of 2D convolutions.
In each of those planes, point properties like normals or point-to-plane distances are aggregated into a 2D grid and abstracted into a feature representation with an efficient 2D convolutional encoder.
Experiments on public datasets show that KAPLAN achieves state-of-the-art performance for 3D shape completion.
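The per-plane aggregation can be sketched as scattering a point property into a 2D grid; here the point-to-plane distance to an assumed z = 0 plane is averaged per cell, with the resolution and neighborhood size chosen arbitrarily.

```python
import torch

pts = torch.rand(500, 3) * 2 - 1              # local neighborhood in [-1, 1]^3
res = 16                                      # grid resolution
u = ((pts[:, 0] + 1) / 2 * (res - 1)).long()  # in-plane cell indices
v = ((pts[:, 1] + 1) / 2 * (res - 1)).long()
dist = pts[:, 2].abs()                        # distance to the z = 0 plane

grid = torch.zeros(res, res)
count = torch.zeros(res, res)
grid.index_put_((u, v), dist, accumulate=True)                 # sum distances
count.index_put_((u, v), torch.ones_like(dist), accumulate=True)
grid = grid / count.clamp(min=1)              # mean distance per cell,
                                              # ready for a 2D conv encoder
```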
arXiv Detail & Related papers (2020-07-31T21:56:08Z)
- Local Implicit Grid Representations for 3D Scenes [24.331110387905962]
We introduce Local Implicit Grid Representations, a new 3D shape representation designed for scalability and generality.
We train an autoencoder to learn an embedding of local crops of 3D shapes at a fixed part size.
Then, we use the decoder as a component in a shape optimization that solves for a set of latent codes on a regular grid of overlapping crops.
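The optimization stage can be sketched as fitting latent codes with a frozen decoder; the `decoder` callable (assumed to trilinearly blend overlapping local crops when queried), its signature, and all hyperparameters are assumptions.

```python
import torch

def fit_latents(decoder, query_pts, target_vals, grid_shape, code_dim=32,
                iters=200, lr=1e-2):
    """Solve for latent codes on a regular grid so that the frozen local
    decoder reproduces observed values at query points."""
    codes = torch.zeros(*grid_shape, code_dim, requires_grad=True)
    opt = torch.optim.Adam([codes], lr=lr)   # only codes are optimized
    for _ in range(iters):
        opt.zero_grad()
        pred = decoder(codes, query_pts)     # assumed decoder interface
        loss = (pred - target_vals).pow(2).mean()
        loss.backward()
        opt.step()
    return codes.detach()
```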
arXiv Detail & Related papers (2020-03-19T18:58:13Z)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work in 3D object reconstruction on ShapeNet and obtain significantly more accurate 3D human reconstructions.
arXiv Detail & Related papers (2020-03-03T11:14:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.