Unsupervised Learning of High-resolution Light Field Imaging via Beam
Splitter-based Hybrid Lenses
- URL: http://arxiv.org/abs/2402.19020v1
- Date: Thu, 29 Feb 2024 10:30:02 GMT
- Title: Unsupervised Learning of High-resolution Light Field Imaging via Beam
Splitter-based Hybrid Lenses
- Authors: Jianxin Lei, Chengcai Xu, Langqing Shi, Junhui Hou, Ping Zhou
- Abstract summary: We design a beam splitter-based hybrid light field imaging prototype to record a 4D light field image and a high-resolution 2D image simultaneously.
The 2D image can be considered as the high-resolution ground truth corresponding to the low-resolution central sub-aperture image of the 4D light field image.
We propose an unsupervised learning-based super-resolution framework built on the hybrid light field dataset.
- Score: 42.5604477188514
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we design a beam splitter-based hybrid light field imaging
prototype that records a 4D light field image and a high-resolution 2D image
simultaneously, and we use it to build a hybrid light field dataset. The 2D image
can be regarded as the high-resolution ground truth corresponding to the
low-resolution central sub-aperture image of the 4D light field image.
Subsequently, we propose an unsupervised learning-based super-resolution
framework trained on the hybrid light field dataset, which adaptively handles
the light field spatial super-resolution problem under a complex degradation
model. Specifically, we design two loss functions based on pre-trained models
that enable the super-resolution network to learn detailed features and the
light field parallax structure from only one ground-truth image. Extensive
experiments demonstrate that our approach matches the performance of supervised
learning-based state-of-the-art methods. To our knowledge, this is the first
end-to-end unsupervised learning-based spatial super-resolution approach in
light field imaging research whose input is available from our beam
splitter-based hybrid light field system. Together, the hardware and software
may substantially broaden the application of light field super-resolution.
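The abstract names two pre-trained-model-based losses but does not define them. The following is a minimal PyTorch sketch of one plausible instantiation, assuming a (U, V, C, H, W) light field tensor, a frozen VGG-16 as the pre-trained feature extractor, and an EPI-gradient term for the parallax structure; none of these choices are confirmed by the paper.

```python
# Hedged sketch of the two unsupervised losses described in the abstract.
# Assumptions (not from the paper): light fields are (U, V, C, H, W) tensors,
# the pre-trained model is a frozen VGG-16 feature extractor, and the parallax
# term compares finite differences along the horizontal view axis.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen pre-trained backbone for the perceptual (detail) loss.
# Input normalization is omitted here for brevity.
features = vgg16(weights="DEFAULT").features[:16].eval()
for p in features.parameters():
    p.requires_grad_(False)

def central_view(lf):
    """Extract the central sub-aperture image from a (U, V, C, H, W) light field."""
    u, v = lf.shape[0] // 2, lf.shape[1] // 2
    return lf[u, v]

def perceptual_loss(sr_center, hr_2d):
    """Match deep features of the super-resolved central view and the 2D ground truth."""
    return F.l1_loss(features(sr_center.unsqueeze(0)), features(hr_2d.unsqueeze(0)))

def parallax_loss(sr_lf, lr_lf):
    """Encourage the SR light field to keep the input's parallax structure:
    downsample SR views to LR size, then compare view-axis finite differences
    (an approximation of EPI slope consistency)."""
    sr_small = F.interpolate(
        sr_lf.flatten(0, 1), size=lr_lf.shape[-2:],
        mode="bicubic", align_corners=False,
    ).view_as(lr_lf)
    return F.l1_loss(sr_small.diff(dim=1), lr_lf.diff(dim=1))
```

In this reading, the 2D image supervises only the central view through the perceptual term, while the parallax term regularizes all other views, which is consistent with the claim of learning from a single ground truth.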
Related papers
- Phase Guided Light Field for Spatial-Depth High Resolution 3D Imaging [36.208109063579066]
For 3D imaging, light field cameras are typically single-shot, but they suffer heavily from low spatial resolution and limited depth accuracy.
We propose a phase-guided light field algorithm to significantly improve both the spatial and depth resolutions of off-the-shelf light field cameras.
arXiv Detail & Related papers (2023-11-17T15:08:15Z)
- ImmersiveNeRF: Hybrid Radiance Fields for Unbounded Immersive Light Field Reconstruction [32.722973192853296]
This paper proposes a hybrid radiance field representation for immersive light field reconstruction.
We represent the foreground and background as two separate radiance fields with two different spatial mapping strategies.
We also contribute a novel immersive light field dataset, named THUImmersive, with the potential to support 6DoF immersive rendering over much larger spaces.
arXiv Detail & Related papers (2023-09-04T05:57:16Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Learning Texture Transformer Network for Light Field Super-Resolution [1.5469452301122173]
We propose a method to improve the spatial resolution of light field images with the aid of a texture transformer network (TTSR).
The results demonstrate a PSNR gain of around 4 dB to 6 dB over a bicubically resized light field image (a sketch of this baseline follows the list).
arXiv Detail & Related papers (2022-10-09T15:16:07Z)
- Dual-Camera Super-Resolution with Aligned Attention Modules [56.54073689003269]
We present a novel approach to reference-based super-resolution (RefSR) with a focus on dual-camera super-resolution (DCSR).
Our proposed method generalizes the standard patch-based feature matching with spatial alignment operations.
To bridge the domain gaps between real-world images and the training images, we propose a self-supervised domain adaptation strategy.
arXiv Detail & Related papers (2021-09-03T07:17:31Z)
- Light Field Reconstruction via Deep Adaptive Fusion of Hybrid Lenses [67.01164492518481]
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses.
We propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input.
Our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
arXiv Detail & Related papers (2021-02-14T06:44:47Z)
- Light Field Spatial Super-resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensional structure and complex geometry of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z)
- Learning Light Field Angular Super-Resolution via a Geometry-Aware Network [101.59693839475783]
We propose an end-to-end learning-based approach aiming at angularly super-resolving a sparsely-sampled light field with a large baseline.
Our method improves the PSNR over the second-best method by up to 2 dB on average, while reducing the execution time by a factor of 48.
arXiv Detail & Related papers (2020-02-26T02:36:57Z)
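As referenced in the texture transformer entry above, the quoted 4 dB to 6 dB figure is measured against a bicubic baseline. Below is a minimal PyTorch sketch of that baseline and the PSNR metric; shapes and function names are illustrative and not taken from any of the listed papers.

```python
# Hedged sketch: bicubic upsampling of a low-resolution view and PSNR against
# a ground-truth image, the comparison point used in the quoted dB gains.
import torch
import torch.nn.functional as F

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = F.mse_loss(pred, target)
    return 10 * torch.log10(max_val**2 / mse)

def bicubic_baseline(lr_view, scale=4):
    """Bicubically resize a low-resolution view of shape (C, H, W) by `scale`."""
    up = F.interpolate(
        lr_view.unsqueeze(0), scale_factor=scale,
        mode="bicubic", align_corners=False,
    )
    return up.squeeze(0).clamp(0, 1)

# A "4 dB to 6 dB gain" means an SR network's test-set PSNR exceeds
# psnr(bicubic_baseline(lr_view), hr_view) by that margin.
```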