Phase Guided Light Field for Spatial-Depth High Resolution 3D Imaging
- URL: http://arxiv.org/abs/2311.10568v2
- Date: Wed, 10 Apr 2024 02:19:19 GMT
- Title: Phase Guided Light Field for Spatial-Depth High Resolution 3D Imaging
- Authors: Geyou Zhang, Ce Zhu, Kai Liu, Yipeng Liu
- Abstract summary: For 3D imaging, light field cameras are typically single-shot, but they suffer heavily from low spatial resolution and low depth accuracy.
We propose a phase guided light field algorithm to significantly improve both the spatial and depth resolutions for off-the-shelf light field cameras.
- Score: 36.208109063579066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For 3D imaging, light field cameras are typically single-shot; however, they suffer heavily from low spatial resolution and depth accuracy. In this paper, by employing an optical projector to project a single group of high-frequency phase-shifted sinusoid patterns, we propose a phase guided light field algorithm to significantly improve both the spatial and depth resolutions of off-the-shelf light field cameras. First, to correct the axial aberrations caused by the main lens of our light field camera, we propose a deformed cone model to calibrate our structured light field system. Second, over the wrapped phases computed from the patterned images, we propose a stereo matching algorithm, i.e. phase guided sum of absolute difference, to robustly obtain the correspondence for each pair of neighboring lenslets. Finally, by introducing a virtual camera according to the basic geometrical optics of light field imaging, we propose a reorganization strategy to reconstruct 3D point clouds with high spatial and depth resolution. Experimental results show that, compared with state-of-the-art active light field methods, the proposed method reconstructs 3D point clouds at a spatial resolution of 1280$\times$720, a 10$\times$ improvement, while maintaining the same high depth resolution and requiring merely a single group of high-frequency patterns.
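For context, the wrapped phases referenced in the abstract come from standard N-step phase-shifting profilometry. Below is a minimal NumPy sketch of that step, assuming the usual pattern model I_n = A + B cos(phi + 2*pi*n/N); function and variable names are illustrative, not from the paper.

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase map from N phase-shifted sinusoid images.

    images: array of shape (N, H, W), where frame n is captured under a
    pattern shifted by delta_n = 2*pi*n/N. Under the model
    I_n = A + B*cos(phi + delta_n), the standard N-step estimate is
    phi = atan2(-sum_n I_n*sin(delta_n), sum_n I_n*cos(delta_n)).
    """
    imgs = np.asarray(images, dtype=np.float64)
    n = imgs.shape[0]
    deltas = 2.0 * np.pi * np.arange(n) / n
    num = np.tensordot(np.sin(deltas), imgs, axes=(0, 0))  # (H, W)
    den = np.tensordot(np.cos(deltas), imgs, axes=(0, 0))  # (H, W)
    return np.arctan2(-num, den)  # wrapped into (-pi, pi]
```

With three or more shifts this recovers the phase only up to 2$\pi$ wrapping; consistent with the abstract's single group of high-frequency patterns, the correspondence search sketched next operates on the wrapped values directly rather than unwrapping them first.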
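The phase guided sum of absolute difference cost can then be read as a windowed SAD over wrapped phase values between two neighboring lenslet views. A hedged sketch under the assumption of a rectified 1D disparity search; the paper's exact cost, window, and search strategy may differ.

```python
import numpy as np

def wrap(d):
    """Wrap phase differences into [-pi, pi)."""
    return (d + np.pi) % (2.0 * np.pi) - np.pi

def phase_guided_sad(phi_l, phi_r, row, x, d_min, d_max, half_win=2):
    """Pick the integer disparity minimizing the SAD of wrapped phase
    differences over a small window centered at (row, x).

    phi_l, phi_r: wrapped phase maps of two neighboring lenslet views,
    assumed rectified so matches lie on the same row. The caller is
    assumed to keep all window and disparity indices in bounds.
    Illustrative only; not the paper's exact formulation.
    """
    best_d, best_cost = d_min, np.inf
    win_l = phi_l[row - half_win: row + half_win + 1,
                  x - half_win: x + half_win + 1]
    for d in range(d_min, d_max + 1):
        win_r = phi_r[row - half_win: row + half_win + 1,
                      x - d - half_win: x - d + half_win + 1]
        cost = np.abs(wrap(win_l - win_r)).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Comparing wrapped differences (rather than raw intensities) is what makes the high-frequency phase act as a dense, texture-independent matching signal.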
Related papers
- Unsupervised Learning of High-resolution Light Field Imaging via Beam Splitter-based Hybrid Lenses [42.5604477188514]
We design a beam splitter-based hybrid light field imaging prototype to record a 4D light field image and a high-resolution 2D image simultaneously.
The 2D image can be considered the high-resolution ground truth corresponding to the low-resolution central sub-aperture image of the 4D light field image.
We propose an unsupervised learning-based super-resolution framework with the hybrid light field dataset.
arXiv Detail & Related papers (2024-02-29T10:30:02Z)
- Learning Texture Transformer Network for Light Field Super-Resolution [1.5469452301122173]
We propose a method to improve the spatial resolution of light field images with the aid of a Texture Transformer Network (TTSR).
The results demonstrate a PSNR gain of around 4 dB to 6 dB over a bicubically resized light field image.
arXiv Detail & Related papers (2022-10-09T15:16:07Z)
- Single-Photon Structured Light [31.614032717665832]
"Single-Photon Structured Light" works by sensing binary images that indicates the presence or absence of photon arrivals during each exposure.
We develop novel temporal sequences using error correction codes that are designed to be robust to short-range effects like projector and camera defocus.
Our lab prototype is capable of 3D imaging in challenging scenarios involving objects with extremely low albedo or undergoing fast motion.
arXiv Detail & Related papers (2022-04-11T17:57:04Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
- A learning-based view extrapolation method for axial super-resolution [52.748944517480155]
Axial light field resolution refers to the ability to distinguish features at different depths by refocusing.
We propose a learning-based method to extrapolate novel views from axial volumes of sheared epipolar plane images.
arXiv Detail & Related papers (2021-03-11T07:22:13Z)
- Baseline and Triangulation Geometry in a Standard Plenoptic Camera [6.719751155411075]
We present a geometrical light field model allowing triangulation to be applied to a plenoptic camera.
It is shown that distance estimates from our novel method match those of real objects placed in front of the camera.
arXiv Detail & Related papers (2020-10-09T15:31:14Z)
- Correlation Plenoptic Imaging between Arbitrary Planes [52.77024349608834]
We show that the protocol enables changing the focused planes in post-processing and achieves an unprecedented combination of image resolution and depth of field.
These results pave the way towards compact designs for correlation plenoptic imaging devices based on chaotic light, as well as high-SNR plenoptic imaging devices based on entangled photon illumination.
arXiv Detail & Related papers (2020-07-23T14:26:14Z)
- Learning Light Field Angular Super-Resolution via a Geometry-Aware Network [101.59693839475783]
We propose an end-to-end learning-based approach aiming at angularly super-resolving a sparsely-sampled light field with a large baseline.
Our method improves the PSNR over the second-best method by up to 2 dB on average, while reducing the execution time by a factor of 48$\times$.
arXiv Detail & Related papers (2020-02-26T02:36:57Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources; for contrast, a sketch of the classic distant-light baseline follows this entry.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
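Photometric stereo recovers per-pixel normals from intensities captured under varying lights. For contrast with the near-light, perspective-camera setting of the entry above, here is a minimal sketch of the classic distant-light Lambertian least-squares baseline (Woodham-style); names are illustrative, and this is not that paper's algorithm.

```python
import numpy as np

def lambertian_normals(I, L):
    """Classic least-squares photometric stereo.

    I: (K, P) intensities of P pixels under K lights.
    L: (K, 3) unit directions of distant lights.
    Returns per-pixel unit normals (P, 3) and albedo (P,).
    Note: assumes distant lights and Lambertian reflectance; the paper
    above instead handles nearby point lights and perspective cameras.
    """
    # Solve L @ G = I in the least-squares sense; G = albedo * normal.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)  # (3, P)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-12)).T
    return normals, albedo
```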
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.