RayDF: Neural Ray-surface Distance Fields with Multi-view Consistency
- URL: http://arxiv.org/abs/2310.19629v2
- Date: Sun, 17 Dec 2023 01:19:13 GMT
- Title: RayDF: Neural Ray-surface Distance Fields with Multi-view Consistency
- Authors: Zhuoman Liu, Bo Yang, Yan Luximon, Ajay Kumar, Jinxi Li
- Abstract summary: We propose a new framework called RayDF to formulate 3D shapes as ray-based neural functions.
Our method renders an 800x800 depth image 1000x faster than coordinate-based methods.
- Score: 10.55497978011315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the problem of continuous 3D shape representations.
The majority of existing successful methods are coordinate-based implicit
neural representations. However, they are inefficient at rendering novel views and
recovering explicit surface points. A few works have started to formulate 3D shapes as
ray-based neural functions, but the learned structures are inferior due to the
lack of multi-view geometry consistency. To tackle these challenges, we propose
a new framework called RayDF. It consists of three major components: 1) the
simple ray-surface distance field, 2) the novel dual-ray visibility classifier,
and 3) a multi-view consistency optimization module to drive the learned
ray-surface distances to be multi-view geometry consistent. We extensively
evaluate our method on three public datasets, demonstrating remarkable
performance in 3D surface point reconstruction on both synthetic and
challenging real-world 3D scenes, clearly surpassing existing coordinate-based
and ray-based baselines. Most notably, our method renders an 800x800 depth image
1000x faster than coordinate-based methods, showing the
superiority of our method for 3D shape representation. Our code and data are
available at https://github.com/vLAR-group/RayDF
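To make the ray-based formulation concrete, below is a minimal PyTorch sketch of a ray-surface distance field together with a pairwise multi-view consistency term. It is illustrative only: the layer sizes, the simple (origin, direction) ray parameterization, and the loss form are assumptions, not the official RayDF architecture, and the dual-ray visibility classifier is omitted (see the linked repository for the actual implementation).
```python
import torch
import torch.nn as nn

class RaySurfaceDistanceField(nn.Module):
    """Sketch of a ray-based neural function: map a ray directly to the
    scalar distance from its origin to the first surface hit. Assumed
    (origin, direction) inputs and layer sizes; not the official RayDF net."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),      # ray = origin (3) + unit direction (3)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, origins: torch.Tensor, dirs: torch.Tensor) -> torch.Tensor:
        d = self.mlp(torch.cat([origins, dirs], dim=-1))
        return d.squeeze(-1)                      # (N,) ray-surface distances

def surface_points(model, origins, dirs):
    """One network query per ray yields an explicit surface point, which is
    why ray-based fields can render depth much faster than coordinate-based
    SDFs that need many evaluations per ray (e.g. sphere tracing)."""
    return origins + model(origins, dirs).unsqueeze(-1) * dirs

def pairwise_consistency_loss(model, o1, d1, o2, d2):
    """Toy multi-view consistency term: two rays known to see the same
    surface point should predict coincident hit points. RayDF's actual
    module additionally uses its dual-ray visibility classifier to decide
    which ray pairs may be constrained; that part is omitted here."""
    p1 = surface_points(model, o1, d1)
    p2 = surface_points(model, o2, d2)
    return ((p1 - p2) ** 2).sum(-1).mean()

model = RaySurfaceDistanceField()
o = torch.zeros(4, 3)                                         # toy rays from the origin
d = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
print(surface_points(model, o, d).shape)                      # torch.Size([4, 3])
```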
Related papers
- 3D Neural Edge Reconstruction [61.10201396044153]
We introduce EMAP, a new method for learning 3D edge representations with a focus on both lines and curves.
Our method implicitly encodes 3D edge distance and direction in Unsigned Distance Functions (UDF) from multi-view edge maps.
On top of this neural representation, we propose an edge extraction algorithm that robustly abstracts 3D edges from the inferred edge points and their directions.
arXiv Detail & Related papers (2024-05-29T17:23:51Z)
- X-Ray: A Sequential 3D Representation For Generation [54.160173837582796]
We introduce X-Ray, a novel 3D sequential representation inspired by x-ray scans.
X-Ray transforms a 3D object into a series of surface frames at different layers, making it suitable for generating 3D models from images.
arXiv Detail & Related papers (2024-04-22T16:40:11Z)
- LISR: Learning Linear 3D Implicit Surface Representation Using Compactly Supported Radial Basis Functions [5.056545768004376]
Implicit 3D surface reconstruction of an object from its partial and noisy 3D point cloud scan is a classic geometry processing and 3D computer vision problem.
We propose a neural network architecture for learning the linear implicit shape representation of the 3D surface of an object.
The proposed approach achieves a better Chamfer distance and a comparable F-score relative to the state-of-the-art approach on the benchmark dataset.
arXiv Detail & Related papers (2024-02-11T20:42:49Z)
- PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm [114.47216525866435]
We introduce a novel universal 3D pre-training framework designed to facilitate the acquisition of efficient 3D representations.
For the first time, PonderV2 achieves state-of-the-art performance on 11 indoor and outdoor benchmarks, demonstrating its effectiveness.
arXiv Detail & Related papers (2023-10-12T17:59:57Z)
- 3D Surface Reconstruction in the Wild by Deforming Shape Priors from Synthetic Data [24.97027425606138]
Reconstructing the underlying 3D surface of an object from a single image is a challenging problem.
We present a new method for joint category-specific 3D reconstruction and object pose estimation from a single image.
Our approach achieves state-of-the-art reconstruction performance across several real-world datasets.
arXiv Detail & Related papers (2023-02-24T20:37:27Z)
- MvDeCor: Multi-view Dense Correspondence Learning for Fine-grained 3D Segmentation [91.6658845016214]
We propose to utilize self-supervised techniques in the 2D domain for fine-grained 3D shape segmentation tasks.
We render a 3D shape from multiple views, and set up a dense correspondence learning task within the contrastive learning framework.
As a result, the learned 2D representations are view-invariant and geometrically consistent (a generic sketch of such a dense contrastive objective follows below).
arXiv Detail & Related papers (2022-08-18T00:48:15Z)
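The dense correspondence objective in the MvDeCor entry above can be illustrated with a generic pixel-level contrastive (InfoNCE) loss over corresponding pixels in two renderings of the same shape. This is a hedged sketch of that family of objectives, not necessarily MvDeCor's exact formulation; the temperature value and feature dimensions are assumptions.
```python
import torch
import torch.nn.functional as F

def dense_infonce(feat_a: torch.Tensor, feat_b: torch.Tensor, tau: float = 0.07):
    """feat_a, feat_b: (N, D) features of N pixels in view A and their
    corresponding pixels in view B (correspondences come from rendering the
    same 3D shape from two viewpoints). Matching pixels are positives; all
    other pairs in the batch serve as negatives."""
    feat_a = F.normalize(feat_a, dim=-1)
    feat_b = F.normalize(feat_b, dim=-1)
    logits = feat_a @ feat_b.t() / tau          # (N, N) cosine similarities
    targets = torch.arange(feat_a.shape[0])     # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random features standing in for a 2D CNN's per-pixel output.
loss = dense_infonce(torch.randn(128, 64), torch.randn(128, 64))
```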
- FIRe: Fast Inverse Rendering using Directional and Signed Distance Functions [97.5540646069663]
We introduce a novel neural scene representation that we call the directional distance function (DDF).
Our DDF is defined on the unit sphere and predicts the distance to the surface along any given direction.
Based on our DDF, we present a novel fast algorithm (FIRe) to reconstruct 3D shapes given a posed depth map; a toy sketch of a DDF-style query follows below.
arXiv Detail & Related papers (2022-03-30T13:24:04Z)
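For intuition, here is a minimal sketch of querying a DDF-style field: rays are intersected with the unit sphere on which the field is defined, and the network predicts the remaining distance to the surface along the view direction. The architecture, inputs, and rendering scheme are assumptions for illustration, not FIRe's actual implementation.
```python
import torch
import torch.nn as nn

class DirectionalDistanceField(nn.Module):
    """Sketch of a DDF-style network: from a point on the unit sphere and a
    unit view direction, predict the distance to the surface along that
    direction. Layer sizes and inputs are assumptions, not FIRe's design."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, sphere_pts, dirs):
        return self.net(torch.cat([sphere_pts, dirs], dim=-1)).squeeze(-1)

def ray_sphere_entry(origins, dirs):
    """First intersection of unit-direction rays with the unit sphere,
    assuming the ray origins lie outside the sphere."""
    b = (origins * dirs).sum(-1)                       # o . d
    c = (origins * origins).sum(-1) - 1.0              # |o|^2 - 1
    t = -b - torch.sqrt(torch.clamp(b * b - c, min=0.0))
    return origins + t.unsqueeze(-1) * dirs, t

ddf = DirectionalDistanceField()
o = torch.tensor([[0.0, 0.0, -3.0]])                   # toy camera outside the sphere
d = torch.tensor([[0.0, 0.0, 1.0]])                    # looking at the origin
entry, t = ray_sphere_entry(o, d)
depth = t + ddf(entry, d)                              # camera-to-surface distance
```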
- H3D-Net: Few-Shot High-Fidelity 3D Head Reconstruction [27.66008315400462]
Recent learning approaches that implicitly represent surface geometry have shown impressive results in the problem of multi-view 3D reconstruction.
We tackle the limitations of such approaches for the specific problem of few-shot full 3D head reconstruction.
We learn a shape model of 3D heads from thousands of incomplete raw scans using implicit representations.
arXiv Detail & Related papers (2021-07-26T23:04:18Z)
- Improved Modeling of 3D Shapes with Multi-view Depth Maps [48.8309897766904]
We present a general-purpose framework for modeling 3D shapes using CNNs.
Using just a single depth image of an object, our method outputs a dense multi-view depth map representation of its 3D shape.
arXiv Detail & Related papers (2020-09-07T17:58:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.