From Transparent to Opaque: Rethinking Neural Implicit Surfaces with $α$-NeuS
- URL: http://arxiv.org/abs/2411.05362v2
- Date: Mon, 20 Jan 2025 08:59:02 GMT
- Title: From Transparent to Opaque: Rethinking Neural Implicit Surfaces with $α$-NeuS
- Authors: Haoran Zhang, Junkai Deng, Xuhui Chen, Fei Hou, Wencheng Wang, Hong Qin, Chen Qian, Ying He
- Abstract summary: This paper introduces $\alpha$-NeuS -- an extension of NeuS -- that proves NeuS is unbiased for materials ranging from fully transparent to fully opaque.
We develop a method to extract transparent and opaque surfaces simultaneously based on DCUDF. To validate our approach, we construct a benchmark that includes both real-world and synthetic scenes.
- Abstract: Traditional 3D shape reconstruction techniques from multi-view images, such as structure from motion and multi-view stereo, face challenges in reconstructing transparent objects. Recent advances in neural radiance fields and their variants primarily address opaque or transparent objects, and encounter difficulties in reconstructing both simultaneously. This paper introduces $\alpha$-NeuS -- an extension of NeuS -- that proves NeuS is unbiased for materials ranging from fully transparent to fully opaque. We find that transparent and opaque surfaces align with the non-negative local minima and the zero iso-surface, respectively, in the learned distance field of NeuS. Traditional iso-surface extraction algorithms, such as marching cubes, which rely on fixed iso-values, are ill-suited for such data. We develop a method to extract transparent and opaque surfaces simultaneously based on DCUDF. To validate our approach, we construct a benchmark that includes both real-world and synthetic scenes, demonstrating its practical utility and effectiveness. Our data and code are publicly available at https://github.com/728388808/alpha-NeuS.
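As a loose illustration of the observation above (a sketch, not the authors' DCUDF-based extraction), the snippet below scans a sampled 1D distance profile along a ray: sign changes mark the zero iso-surface (opaque), while non-negative local minima above a tolerance mark transparent-surface candidates that a fixed-iso-value marching cubes pass would miss. The profile values and the `eps` tolerance are hypothetical.

```python
import numpy as np

def surface_candidates(d, eps=1e-3):
    """Scan a sampled distance profile d along one ray.

    Sign changes locate the zero iso-surface (opaque surface);
    non-negative local minima above eps locate transparent-surface
    candidates, which a fixed-iso-value extraction would miss.
    eps is a hypothetical tolerance, not a value from the paper.
    """
    opaque, transparent = [], []
    for i in range(1, len(d) - 1):
        if d[i] * d[i + 1] < 0:            # zero crossing between samples
            opaque.append(i)
        elif d[i] > eps and d[i] <= d[i - 1] and d[i] <= d[i + 1]:
            transparent.append(i)          # positive local minimum
    return opaque, transparent

# Hypothetical profile: dips near zero (transparent hit), then crosses zero (opaque hit).
d = np.array([0.9, 0.5, 0.1, 0.4, 0.6, 0.2, -0.3, -0.7])
print(surface_candidates(d))  # -> ([5], [2])
```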
Related papers
- Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis
We modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures.
We also develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting.
The compact meshes produced by our model can be rendered in real-time on mobile devices.
arXiv Detail & Related papers (2024-02-19T18:59:41Z)
- Neural Radiance Fields for Transparent Object Using Visual Hull
The recently introduced Neural Radiance Fields (NeRF) is a view synthesis method.
We propose a NeRF-based method consisting of the following three steps: First, we reconstruct the three-dimensional shape of a transparent object using the visual hull.
Second, we simulate the refraction of the rays inside the transparent object according to Snell's law (see the sketch below). Last, we sample points along the refracted rays and feed them into NeRF.
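For concreteness, refraction under Snell's law can be computed in the standard vector form shown below; this is a generic sketch rather than the paper's implementation, and the refractive index 1.5 is an assumed value for glass.

```python
import numpy as np

def refract(incident, normal, n1, n2):
    """Refract a unit ray direction at an interface using Snell's law.

    incident: unit direction of the incoming ray.
    normal:   unit surface normal, oriented against the incident ray.
    n1, n2:   refractive indices of the incoming / outgoing media.
    Returns the refracted unit direction, or None on total internal reflection.
    """
    eta = n1 / n2
    cos_i = -np.dot(normal, incident)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * incident + (eta * cos_i - cos_t) * normal

# Ray hitting a glass interface at 45 degrees (n = 1.5 is an assumed value).
i = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
n = np.array([0.0, 1.0, 0.0])
print(refract(i, n, 1.0, 1.5))  # bends toward the normal inside the denser medium
```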
arXiv Detail & Related papers (2023-12-13T13:15:19Z)
- Transparent Object Tracking with Enhanced Fusion Module
We propose a new tracker architecture that uses our fusion techniques to achieve superior results for transparent object tracking.
Our results and code will be made publicly available at https://github.com/kalyan05TOTEM.
arXiv Detail & Related papers (2023-09-13T03:52:09Z)
- Seeing Through the Glass: Neural 3D Reconstruction of Object Inside a Transparent Container
Transparent enclosures pose challenges of multiple light reflections and refractions at the interface between different propagation media.
We use an existing neural reconstruction method (NeuS) that implicitly represents the geometry and appearance of the inner subspace.
In order to account for complex light interactions, we develop a hybrid rendering strategy that combines volume rendering with ray tracing.
arXiv Detail & Related papers (2023-03-24T04:58:27Z)
- NeAT: Learning Neural Implicit Surfaces with Arbitrary Topologies from Multi-view Images
NeAT is a new neural rendering framework that learns implicit surfaces with arbitrary topologies from multi-view images.
NeAT supports easy field-to-mesh conversion using the classic Marching Cubes algorithm (see the sketch below).
Our approach is able to faithfully reconstruct both watertight and non-watertight surfaces.
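A minimal sketch of such field-to-mesh conversion using scikit-image's Marching Cubes, with an analytic sphere SDF standing in for the learned field (this is not NeAT's actual pipeline):

```python
import numpy as np
from skimage import measure  # scikit-image

# Stand-in implicit field: signed distance to a unit sphere on a 64^3 grid
# (a placeholder for a learned field; not NeAT's actual pipeline).
xs = np.linspace(-1.5, 1.5, 64)
x, y, z = np.meshgrid(xs, xs, xs, indexing="ij")
sdf = np.sqrt(x ** 2 + y ** 2 + z ** 2) - 1.0

# Extract the zero iso-surface as a triangle mesh.
verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)  # (V, 3) vertices, (F, 3) triangle indices
```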
arXiv Detail & Related papers (2023-03-21T16:49:41Z)
- NeTO: Neural Reconstruction of Transparent Objects with Self-Occlusion Aware Refraction-Tracing
We present a novel method, called NeTO, for capturing 3D geometry of solid transparent objects from 2D images via volume rendering.
Our method achieves faithful reconstruction results and outperforms prior works by a large margin.
arXiv Detail & Related papers (2023-03-20T15:50:00Z)
- TransMatting: Enhancing Transparent Objects Matting with Transformers
We propose a Transformer-based network, TransMatting, to model transparent objects with a large receptive field.
A small convolutional network is proposed to utilize the global feature and non-background mask to guide the multi-scale feature propagation from encoder to decoder.
We create a high-resolution matting dataset of transparent objects with small known foreground areas.
arXiv Detail & Related papers (2022-08-05T06:44:14Z)
- One Ring to Rule Them All: a simple solution to multi-view 3D-Reconstruction of shapes with unknown BRDF via a small Recurrent ResNet
This paper proposes a simple method that solves the open problem of multi-view 3D reconstruction for objects with unknown surface materials.
The object can have arbitrary (e.g., non-Lambertian), spatially-varying (or everywhere different) surface reflectances (svBRDF).
Our solution supports novel-view synthesis, relighting, material relighting, and shape exchange without additional coding effort.
arXiv Detail & Related papers (2021-04-11T13:39:31Z)
- RGB-D Local Implicit Function for Depth Completion of Transparent Objects
The majority of perception methods in robotics require depth information provided by RGB-D cameras.
Standard 3D sensors fail to capture the depth of transparent objects due to refraction and absorption of light.
We present a novel framework that can complete missing depth given noisy RGB-D input.
arXiv Detail & Related papers (2021-04-01T17:00:04Z)
- Through the Looking Glass: Neural 3D Reconstruction of Transparent Shapes
Complex light paths induced by refraction and reflection have prevented both traditional and deep multi-view stereo from solving this problem.
We propose a physically-based network to recover the 3D shape of transparent objects using a few images acquired with a mobile phone camera.
Our experiments show successful recovery of high-quality 3D geometry for complex transparent shapes using as few as 5-12 natural images.
arXiv Detail & Related papers (2020-04-22T23:51:30Z)