Refractive Light-Field Features for Curved Transparent Objects in
Structure from Motion
- URL: http://arxiv.org/abs/2103.15349v1
- Date: Mon, 29 Mar 2021 05:55:32 GMT
- Authors: Dorian Tsai and Peter Corke and Thierry Peynot and Donald G. Dansereau
- Abstract summary: We propose a novel image feature for light fields that detects and describes the patterns of light refracted through curved transparent objects.
We demonstrate improved structure-from-motion performance in challenging scenes containing refractive objects.
Our method is a critical step towards allowing robots to operate around refractive objects.
- Score: 10.380414189465345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Curved refractive objects are common in the human environment, and have a
complex visual appearance that can cause robotic vision algorithms to fail.
Light-field cameras allow us to address this challenge by capturing the
view-dependent appearance of such objects in a single exposure. We propose a
novel image feature for light fields that detects and describes the patterns of
light refracted through curved transparent objects. We derive characteristic
points based on these features allowing them to be used in place of
conventional 2D features. Using our features, we demonstrate improved
structure-from-motion performance in challenging scenes containing refractive
objects, including quantitative evaluations that show improved camera pose
estimates and 3D reconstructions. Additionally, our methods converge 15-35%
more frequently than the state-of-the-art. Our method is a critical step
towards allowing robots to operate around refractive objects, with applications
in manufacturing, quality assurance, pick-and-place, and domestic robots
working with acrylic, glass and other transparent materials.
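The abstract states that characteristic points derived from the refractive light-field features can stand in for conventional 2D features in structure from motion. As a generic illustration (not the paper's implementation), once per-view 2D characteristic points are matched across views, they plug into a standard two-view SfM step such as the normalized 8-point essential-matrix estimate sketched below; the synthetic camera pair and point cloud here are assumptions for the demo.

```python
# Hedged sketch: matched 2D "characteristic points" feeding a standard
# two-view SfM step (8-point essential-matrix estimation). This is a
# generic illustration, not the authors' light-field feature pipeline.
import numpy as np

def essential_from_matches(x1, x2):
    """Estimate E from N>=8 matched, calibrated image points (Nx2 arrays)."""
    def homog(x):
        return np.hstack([x, np.ones((x.shape[0], 1))])
    p1, p2 = homog(x1), homog(x2)
    # Each match gives one row of the linear system A vec(E) = 0,
    # from the epipolar constraint x2^T E x1 = 0.
    A = np.einsum('ni,nj->nij', p2, p1).reshape(-1, 9)
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential manifold: singular values (1, 1, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

# Synthetic check: random non-planar 3D points seen by two calibrated views.
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 6], size=(20, 3))
t = np.array([0.5, 0.0, 0.0])       # second camera: pure x-translation
x1 = X[:, :2] / X[:, 2:]            # view-1 projection (identity pose)
Xc2 = X - t                         # second-view camera coordinates
x2 = Xc2[:, :2] / Xc2[:, 2:]
E = essential_from_matches(x1, x2)

# Epipolar residuals x2^T E x1 should be ~0 for every match.
p1 = np.hstack([x1, np.ones((20, 1))])
p2 = np.hstack([x2, np.ones((20, 1))])
residual = np.abs(np.einsum('ni,ij,nj->n', p2, E, p1)).max()
```

The point of the sketch is the substitution itself: as long as a feature detector yields stable, matchable 2D points, the downstream pose-estimation machinery is unchanged, which is what makes the proposed refractive features drop-in replacements.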
Related papers
- PBIR-NIE: Glossy Object Capture under Non-Distant Lighting [30.325872237020395]
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting.
We introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material attributes, and surrounding illumination of such objects.
arXiv Detail & Related papers (2024-08-13T13:26:24Z)
- Holistic Inverse Rendering of Complex Facade via Aerial 3D Scanning [38.72679977945778]
We use multi-view aerial images to reconstruct the geometry, lighting, and material of facades using neural signed distance fields (SDFs)
The experiment demonstrates the superior quality of our method on facade holistic inverse rendering, novel view synthesis, and scene editing compared to state-of-the-art baselines.
arXiv Detail & Related papers (2023-11-20T15:03:56Z)
- Tabletop Transparent Scene Reconstruction via Epipolar-Guided Optical Flow with Monocular Depth Completion Prior [14.049778178534588]
We introduce a two-stage pipeline for reconstructing transparent objects tailored for mobile platforms.
Epipolar-guided Optical Flow (EOF) fuses several single-view shape priors into a cross-view consistent 3D reconstruction.
Our pipeline significantly outperforms baseline methods in 3D reconstruction quality.
arXiv Detail & Related papers (2023-10-15T21:30:06Z)
- Multi-Modal Dataset Acquisition for Photometrically Challenging Object [56.30027922063559]
This paper addresses the limitations of current datasets for 3D vision tasks in terms of accuracy, size, realism, and suitable imaging modalities for photometrically challenging objects.
We propose a novel annotation and acquisition pipeline that enhances existing 3D perception and 6D object pose datasets.
arXiv Detail & Related papers (2023-08-21T10:38:32Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- NEMTO: Neural Environment Matting for Novel View and Relighting Synthesis of Transparent Objects [28.62468618676557]
We propose NEMTO, the first end-to-end neural rendering pipeline to model 3D transparent objects.
With 2D images of the transparent object as input, our method is capable of high-quality novel view and relighting synthesis.
arXiv Detail & Related papers (2023-03-21T15:50:08Z)
- ORCa: Glossy Objects as Radiance Field Cameras [23.75324754684283]
We convert glossy objects with unknown geometry into radiance-field cameras to image the world from the object's perspective.
We show that recovering the environment radiance fields enables depth and radiance estimation from the object to its surroundings.
Our method is trained end-to-end on multi-view images of the object and jointly estimates object geometry, diffuse radiance, and the 5D environment radiance field.
arXiv Detail & Related papers (2022-12-08T19:32:08Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable rendering.
In this work, we propose DIB-R++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can carry useful information for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- Through the Looking Glass: Neural 3D Reconstruction of Transparent Shapes [75.63464905190061]
Complex light paths induced by refraction and reflection have prevented both traditional and deep multiview stereo from reconstructing transparent shapes.
We propose a physically-based network to recover 3D shape of transparent objects using a few images acquired with a mobile phone camera.
Our experiments show successful recovery of high-quality 3D geometry for complex transparent shapes using as few as 5-12 natural images.
arXiv Detail & Related papers (2020-04-22T23:51:30Z)
- Seeing the World in a Bag of Chips [73.561388215585]
We address the dual problems of novel view synthesis and environment reconstruction from hand-held RGBD sensors.
Our contributions include 1) modeling highly specular objects, 2) modeling inter-reflections and Fresnel effects, and 3) enabling surface light field reconstruction with the same input needed to reconstruct shape alone.
arXiv Detail & Related papers (2020-01-14T06:44:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.