Baseline and Triangulation Geometry in a Standard Plenoptic Camera
- URL: http://arxiv.org/abs/2010.04638v2
- Date: Wed, 20 Jan 2021 12:02:36 GMT
- Title: Baseline and Triangulation Geometry in a Standard Plenoptic Camera
- Authors: Christopher Hahne, Amar Aggoun, Vladan Velisavljevic, Susanne Fiebig,
Matthias Pesch
- Abstract summary: We present a geometrical light field model allowing triangulation to be applied to a plenoptic camera.
It is shown that distance estimates from our novel method match those of real objects placed in front of the camera.
- Score: 6.719751155411075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we demonstrate light field triangulation to determine depth
distances and baselines in a plenoptic camera. Advances in micro lenses and
image sensors have enabled plenoptic cameras to capture a scene from different
viewpoints with sufficient spatial resolution. While object distances can be
inferred from disparities in a stereo viewpoint pair using triangulation, this
concept remains ambiguous when applied in the case of plenoptic cameras. We
present a geometrical light field model allowing triangulation to be
applied to a plenoptic camera in order to predict object distances or specify
baselines as desired. It is shown that distance estimates from our novel method
match those of real objects placed in front of the camera. Additional benchmark
tests with an optical design software further validate the model's accuracy
with deviations of less than ±0.33% for several main lens types and focus
settings. A variety of applications in the automotive and robotics field can
benefit from this estimation model.
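The abstract builds on the standard two-view triangulation relation, in which object distance follows from focal length, baseline, and disparity. As a minimal sketch of that textbook relation (not the paper's plenoptic generalization, and with purely illustrative numbers):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic stereo triangulation: Z = f * B / d.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two viewpoints in metres
    disparity_px -- pixel disparity of the object between the views
    Returns the object distance Z in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative example: f = 1000 px, B = 0.05 m, d = 10 px gives Z = 5 m.
print(depth_from_disparity(1000.0, 0.05, 10.0))  # 5.0
```

The paper's contribution is to make the baseline B and the effective viewpoints well defined for a plenoptic camera, where this relation is otherwise ambiguous.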
Related papers
- Metric3Dv2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation [74.28509379811084]
Metric3D v2 is a geometric foundation model for zero-shot metric depth and surface normal estimation from a single image.
We propose solutions for both metric depth estimation and surface normal estimation.
Our method enables the accurate recovery of metric 3D structures on randomly collected internet images.
arXiv Detail & Related papers (2024-03-22T02:30:46Z)
- Can you see me now? Blind spot estimation for autonomous vehicles using scenario-based simulation with random reference sensors [5.910402196056647]
A Monte Carlo-based reference sensor simulation enables us to accurately estimate blind spot size as a metric of coverage.
Our method leverages point clouds from LiDAR sensors or camera depth images from high-fidelity simulations of target scenarios to provide accurate and actionable visibility estimates.
arXiv Detail & Related papers (2024-02-01T10:14:53Z)
- Phase Guided Light Field for Spatial-Depth High Resolution 3D Imaging [36.208109063579066]
For 3D imaging, light field cameras are typically single-shot and suffer from low spatial resolution and depth accuracy.
We propose a phase guided light field algorithm to significantly improve both the spatial and depth resolutions for off-the-shelf light field cameras.
arXiv Detail & Related papers (2023-11-17T15:08:15Z)
- SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation [101.55622133406446]
We propose a SurroundDepth method to incorporate the information from multiple surrounding views to predict depth maps across cameras.
Specifically, we employ a joint network to process all the surrounding views and propose a cross-view transformer to effectively fuse the information from multiple views.
In experiments, our method achieves the state-of-the-art performance on the challenging multi-camera depth estimation datasets.
arXiv Detail & Related papers (2022-04-07T17:58:47Z)
- Ray Tracing-Guided Design of Plenoptic Cameras [1.1421942894219896]
The design of a plenoptic camera requires the combination of two dissimilar optical systems.
We present a method to calculate the remaining aperture, sensor and microlens array parameters under different sets of constraints.
Our ray tracing-based approach is shown to result in models outperforming their pendants generated with the commonly used paraxial approximations.
arXiv Detail & Related papers (2022-03-09T11:57:00Z)
- Facial Depth and Normal Estimation using Single Dual-Pixel Camera [81.02680586859105]
We introduce a DP-oriented Depth/Normal network that reconstructs the 3D facial geometry.
The accompanying dataset contains the corresponding ground-truth 3D models, including depth maps and surface normals in metric scale.
It achieves state-of-the-art performances over recent DP-based depth/normal estimation methods.
arXiv Detail & Related papers (2021-11-25T05:59:27Z)
- 3D-Aware Ellipse Prediction for Object-Based Camera Pose Estimation [3.103806775802078]
We propose a method for coarse camera pose computation which is robust to viewing conditions.
It exploits the ability of deep learning techniques to reliably detect objects regardless of viewing conditions.
arXiv Detail & Related papers (2021-05-24T18:40:18Z)
- LUCES: A Dataset for Near-Field Point Light Source Photometric Stereo [30.31403197697561]
We introduce LUCES, the first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo', comprising 14 objects of varying materials.
A device with 52 LEDs was designed to illuminate each object, positioned 10 to 30 centimeters from the camera.
We evaluate the performance of the latest near-field Photometric Stereo algorithms on the proposed dataset.
arXiv Detail & Related papers (2021-04-27T12:30:42Z)
- Single View Metrology in the Wild [94.7005246862618]
We present a novel approach to single view metrology that can recover the absolute scale of a scene represented by 3D heights of objects or camera height above the ground.
Our method relies on data-driven priors learned by a deep network specifically designed to imbibe weakly supervised constraints from the interplay of the unknown camera with 3D entities such as object heights.
We demonstrate state-of-the-art qualitative and quantitative results on several datasets as well as applications including virtual object insertion.
arXiv Detail & Related papers (2020-07-18T22:31:33Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.