Anti-Aliased Neural Implicit Surfaces with Encoding Level of Detail
- URL: http://arxiv.org/abs/2309.10336v1
- Date: Tue, 19 Sep 2023 05:44:00 GMT
- Title: Anti-Aliased Neural Implicit Surfaces with Encoding Level of Detail
- Authors: Yiyu Zhuang, Qi Zhang, Ying Feng, Hao Zhu, Yao Yao, Xiaoyu Li, Yan-Pei
Cao, Ying Shan, Xun Cao
- Abstract summary: We present LoD-NeuS, an efficient neural representation for high-frequency geometry detail recovery and anti-aliased novel view rendering.
Our representation aggregates space features from a multi-convolved featurization within a conical frustum along a ray.
- Score: 54.03399077258403
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present LoD-NeuS, an efficient neural representation for high-frequency
geometry detail recovery and anti-aliased novel view rendering. Drawing
inspiration from voxel-based representations with the level of detail (LoD), we
introduce a multi-scale tri-plane-based scene representation that is capable of
capturing the LoD of the signed distance function (SDF) and the space radiance.
Our representation aggregates space features from a multi-convolved
featurization within a conical frustum along a ray and optimizes the LoD
feature volume through differentiable rendering. Additionally, we propose an
error-guided sampling strategy to guide the growth of the SDF during the
optimization. Both qualitative and quantitative evaluations demonstrate that
our method achieves superior surface reconstruction and photorealistic view
synthesis compared to state-of-the-art approaches.
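The core lookup described in the abstract, querying a multi-scale tri-plane representation at a 3D point, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the plane resolutions, channel count, and concatenation-based aggregation are assumptions, and the paper's cone-based multi-convolved featurization is omitted.

```python
import numpy as np

def sample_plane(plane, u, v):
    """Bilinearly interpolate a feature plane of shape (R, R, C) at
    continuous coordinates (u, v) in [0, 1]."""
    R = plane.shape[0]
    x, y = u * (R - 1), v * (R - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, R - 1), min(y0 + 1, R - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[x0, y0]
            + wx * (1 - wy) * plane[x1, y0]
            + (1 - wx) * wy * plane[x0, y1]
            + wx * wy * plane[x1, y1])

def triplane_features(planes_xy, planes_xz, planes_yz, p):
    """Query a multi-scale tri-plane representation at a point p in
    [0, 1]^3: project p onto the XY, XZ, and YZ planes at every level
    of detail, interpolate, and concatenate the results."""
    x, y, z = p
    feats = []
    for pxy, pxz, pyz in zip(planes_xy, planes_xz, planes_yz):
        feats.append(sample_plane(pxy, x, y))
        feats.append(sample_plane(pxz, x, z))
        feats.append(sample_plane(pyz, y, z))
    # In an SDF pipeline this vector would feed an MLP predicting
    # signed distance and radiance.
    return np.concatenate(feats)

# Two LoD levels (resolutions 16 and 32), 4 feature channels each.
rng = np.random.default_rng(0)
levels = [16, 32]
planes = [[rng.standard_normal((R, R, 4)) for R in levels] for _ in range(3)]
f = triplane_features(planes[0], planes[1], planes[2], np.array([0.3, 0.6, 0.9]))
print(f.shape)  # (24,): 2 levels x 3 planes x 4 channels
```

Coarser levels capture low-frequency structure while finer levels add detail, which is what makes a multi-scale stack of planes act as a level-of-detail representation.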
Related papers
- LiftRefine: Progressively Refined View Synthesis from 3D Lifting with Volume-Triplane Representations [21.183524347952762]
We propose a new view synthesis method via a 3D neural field from single or few-view input images.
Our reconstruction model first lifts one or more input images into a 3D volume, which serves as the coarse-scale 3D representation.
Our diffusion model then hallucinates missing details in the rendered images from tri-planes.
arXiv Detail & Related papers (2024-12-19T02:23:55Z)
- ProbeSDF: Light Field Probes for Neural Surface Reconstruction [4.0130618054041385]
SDF-based differential rendering frameworks have achieved state-of-the-art multiview 3D shape reconstruction.
We re-examine this family of approaches by minimally reformulating its core appearance model.
We show that this performance is consistently achieved on real data across two widely different and popular application fields.
arXiv Detail & Related papers (2024-12-13T12:18:26Z)
- G2SDF: Surface Reconstruction from Explicit Gaussians with Implicit SDFs [84.07233691641193]
We introduce G2SDF, a novel approach that integrates a neural implicit Signed Distance Field into the Gaussian Splatting framework.
G2SDF achieves superior quality to prior works while maintaining the efficiency of 3DGS.
arXiv Detail & Related papers (2024-11-25T20:07:07Z)
- RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field, e.g., NeRF, to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Whereas existing methods formulate and optimize the projection from SDF to radiance field with a globally constant Eikonal regularization, we improve on this with a ray-wise weighting factor.
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
arXiv Detail & Related papers (2024-06-14T07:54:25Z)
- Anisotropic Neural Representation Learning for High-Quality Neural Rendering [0.0]
We propose an anisotropic neural representation learning method that utilizes learnable view-dependent features to improve scene representation and reconstruction.
Our method is flexible and can be plugged into NeRF-based frameworks.
arXiv Detail & Related papers (2023-11-30T07:29:30Z)
- VolRecon: Volume Rendering of Signed Ray Distance Functions for Generalizable Multi-View Reconstruction [64.09702079593372]
VolRecon is a novel generalizable implicit reconstruction method with a Signed Ray Distance Function (SRDF).
On the DTU dataset, VolRecon outperforms SparseNeuS by about 30% in sparse-view reconstruction and achieves accuracy comparable to MVSNet in full-view reconstruction.
arXiv Detail & Related papers (2022-12-15T18:59:54Z)
- DARF: Depth-Aware Generalizable Neural Radiance Field [51.29437249009986]
We propose the Depth-Aware Generalizable Neural Radiance Field (DARF) with a Depth-Aware Dynamic Sampling (DADS) strategy.
Our framework infers unseen scenes at both the pixel level and the geometry level with only a few input images.
Compared with state-of-the-art generalizable NeRF methods, DARF reduces samples by 50%, while improving rendering quality and depth estimation.
arXiv Detail & Related papers (2022-12-05T14:00:59Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z) - Recovering Fine Details for Neural Implicit Surface Reconstruction [3.9702081347126943]
We present D-NeuS, a volume rendering-based neural implicit surface reconstruction method capable of recovering fine geometry details.
We impose multi-view feature consistency on the surface points, derived by interpolating SDF zero-crossings from sampled points along rays.
Our method reconstructs high-accuracy surfaces with details, and outperforms the state of the art.
arXiv Detail & Related papers (2022-11-21T10:06:09Z)
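Several of the papers above (LoD-NeuS, RaNeuS, G2SDF) regularize a neural SDF with the Eikonal constraint, which requires the gradient of a true distance field to have unit norm. A minimal sketch of that loss, using an analytic sphere SDF in place of a neural network and central finite differences in place of autograd (both assumptions for illustration):

```python
import numpy as np

def sdf_sphere(x, radius=0.5):
    """Analytic SDF of a sphere; stands in for a neural SDF network."""
    return np.linalg.norm(x, axis=-1) - radius

def eikonal_loss(sdf, points, eps=1e-4):
    """E[(||grad f(x)|| - 1)^2], with the gradient estimated per axis
    by central finite differences."""
    grads = np.stack([
        (sdf(points + eps * e) - sdf(points - eps * e)) / (2 * eps)
        for e in np.eye(3)
    ], axis=-1)
    norms = np.linalg.norm(grads, axis=-1)
    return np.mean((norms - 1.0) ** 2)

pts = np.random.default_rng(1).uniform(-1, 1, size=(1024, 3))
loss = eikonal_loss(sdf_sphere, pts)
print(loss)  # near zero, since a true SDF has unit-norm gradient
```

During training this term is added to the rendering loss; RaNeuS's contribution, per its summary above, is to replace the globally constant weight on this term with a ray-wise weighting factor.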
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.