Seeing Through the Glass: Neural 3D Reconstruction of Object Inside a
Transparent Container
- URL: http://arxiv.org/abs/2303.13805v1
- Date: Fri, 24 Mar 2023 04:58:27 GMT
- Title: Seeing Through the Glass: Neural 3D Reconstruction of Object Inside a
Transparent Container
- Authors: Jinguang Tong, Sundaram Muthu, Fahira Afzal Maken, Chuong Nguyen,
Hongdong Li
- Abstract summary: Transparent enclosures pose challenges of multiple light reflections and refractions at the interface between different propagation media.
We use an existing neural reconstruction method (NeuS) that implicitly represents the geometry and appearance of the inner subspace.
In order to account for complex light interactions, we develop a hybrid rendering strategy that combines volume rendering with ray tracing.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we define a new problem of recovering the 3D geometry of an
object confined in a transparent enclosure. We also propose a novel method for
solving this challenging problem. Transparent enclosures pose challenges of
multiple light reflections and refractions at the interfaces between different
propagation media, e.g., air and glass. These multiple reflections and refractions
cause serious image distortions which invalidate the single viewpoint
assumption. Hence the 3D geometry of such objects cannot be reliably
reconstructed using existing methods, such as traditional structure from motion
or modern neural reconstruction methods. We solve this problem by explicitly
modeling the scene as two distinct sub-spaces, inside and outside the
transparent enclosure. We use an existing neural reconstruction method (NeuS)
that implicitly represents the geometry and appearance of the inner subspace.
In order to account for complex light interactions, we develop a hybrid
rendering strategy that combines volume rendering with ray tracing. We then
recover the underlying geometry and appearance of the model by minimizing the
difference between the real and hybrid rendered images. We evaluate our method
on both synthetic and real data. Experimental results show that our method
outperforms state-of-the-art (SOTA) methods. Code and data will be
available at https://github.com/hirotong/ReNeuS
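
At a high level, the hybrid strategy described in the abstract traces rays through the transparent interface with ray-tracing physics (Snell's law for refraction) and then composites radiance along the ray segment inside the enclosure with NeRF/NeuS-style volume rendering. The following is a minimal sketch of those two building blocks in NumPy; the function names and formulation are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def refract(d, n, eta):
    """Refract a unit direction d at a surface with unit normal n
    (pointing against d), where eta = n_incident / n_transmitted.
    Returns None on total internal reflection (Snell's law)."""
    cos_i = -np.dot(d, n)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

def volume_render(densities, colors, deltas):
    """Composite per-sample colors along one ray with standard
    volume rendering: alpha_i = 1 - exp(-sigma_i * delta_i),
    weighted by accumulated transmittance."""
    alphas = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)
```

In a full pipeline, a camera ray would be refracted at the outer and inner glass surfaces via `refract`, and the resulting bent ray inside the enclosure would be sampled and composited via `volume_render`, with the photometric difference to the captured image driving optimization.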
Related papers
- Multi-times Monte Carlo Rendering for Inter-reflection Reconstruction (arXiv, 2024-07-08)
  Inverse rendering methods have achieved remarkable performance in reconstructing high-fidelity 3D objects with disentangled geometry, materials, and environmental light. We propose Ref-MC2, which introduces multi-time Monte Carlo sampling to comprehensively compute the environmental illumination. We also show downstream applications, e.g., relighting and material editing, to illustrate the disentanglement ability of our method.
- NEMTO: Neural Environment Matting for Novel View and Relighting Synthesis of Transparent Objects (arXiv, 2023-03-21)
  We propose NEMTO, the first end-to-end neural rendering pipeline to model 3D transparent objects. With 2D images of the transparent object as input, our method is capable of high-quality novel-view and relighting synthesis.
- NeTO: Neural Reconstruction of Transparent Objects with Self-Occlusion Aware Refraction-Tracing (arXiv, 2023-03-20)
  We present a novel method, called NeTO, for capturing the 3D geometry of solid transparent objects from 2D images via volume rendering. Our method achieves faithful reconstruction results and outperforms prior works by a large margin.
- S$^3$-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint (arXiv, 2022-10-17)
  Our method learns a neural reflectance field to represent the 3D geometry and BRDFs of a scene. It is capable of recovering the 3D geometry of a scene, including both visible and invisible parts, from single-view images, and supports applications such as novel-view synthesis and relighting.
- Towards High-Fidelity Single-view Holistic Reconstruction of Indoor Scenes (arXiv, 2022-07-18)
  We present a new framework to reconstruct holistic 3D indoor scenes from single-view images. We propose an instance-aligned implicit function (InstPIFu) for detailed object reconstruction. Our code and model will be made publicly available.
- SNeS: Learning Probably Symmetric Neural Surfaces from Incomplete Data (arXiv, 2022-06-13)
  We build on the strengths of recent advances in neural reconstruction and rendering, such as Neural Radiance Fields (NeRF). We apply a soft symmetry constraint to the 3D geometry and material properties, having factored appearance into lighting, albedo colour, and reflectivity. We show that it can reconstruct unobserved regions with high fidelity and render high-quality novel-view images.
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting (arXiv, 2021-09-13)
  We address the problem of jointly estimating albedo, normals, depth, and 3D spatially-varying lighting from a single image. Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene. We propose a unified, learning-based inverse-rendering framework that models 3D spatially-varying lighting.
- Shape From Tracing: Towards Reconstructing 3D Object Geometry and SVBRDF Material from Images via Differentiable Path Tracing (arXiv, 2020-12-06)
  Differentiable path tracing is an appealing framework as it can reproduce complex appearance effects. We show how to use differentiable ray tracing to refine an initial coarse mesh and per-mesh-facet material representation. We also show how to refine initial reconstructions of real-world objects in unconstrained environments.
- Through the Looking Glass: Neural 3D Reconstruction of Transparent Shapes (arXiv, 2020-04-22)
  Complex light paths induced by refraction and reflection have prevented both traditional and deep multi-view stereo from solving this problem. We propose a physically-based network to recover the 3D shape of transparent objects using a few images acquired with a mobile phone camera. Our experiments show successful recovery of high-quality 3D geometry for complex transparent shapes using as few as 5-12 natural images.
This list is automatically generated from the titles and abstracts of the papers in this site.