3DVSR: 3D EPI Volume-based Approach for Angular and Spatial Light field
Image Super-resolution
- URL: http://arxiv.org/abs/2201.01294v1
- Date: Tue, 4 Jan 2022 18:57:00 GMT
- Title: 3DVSR: 3D EPI Volume-based Approach for Angular and Spatial Light field
Image Super-resolution
- Authors: Trung-Hieu Tran, Jan Berberich, Sven Simon
- Abstract summary: This paper presents a learning-based approach applied to 3D epipolar image (EPI) to reconstruct high-resolution light field.
An extensive evaluation on 90 challenging synthetic and real-world light field scenes from 7 published datasets shows that the proposed approach outperforms state-of-the-art methods.
- Score: 2.127049691404299
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Light field (LF) imaging, which captures both spatial and angular information
of a scene, is undoubtedly beneficial to numerous applications. Although
various techniques have been proposed for LF acquisition, achieving both
angularly and spatially high-resolution LF remains a technological challenge. In
this paper, a learning-based approach applied to 3D epipolar image (EPI) is
proposed to reconstruct high-resolution LF. Through a 2-stage super-resolution
framework, the proposed approach effectively addresses various LF
super-resolution (SR) problems, i.e., spatial SR, angular SR, and
angular-spatial SR. While the first stage provides flexible options to
up-sample EPI volume to the desired resolution, the second stage, which
consists of a novel EPI volume-based refinement network (EVRN), substantially
enhances the quality of the high-resolution EPI volume. An extensive evaluation
on 90 challenging synthetic and real-world light field scenes from 7 published
datasets shows that the proposed approach outperforms state-of-the-art methods
to a large extent for both spatial and angular super-resolution problems, i.e.,
an average peak signal-to-noise ratio improvement of more than 2.0 dB, 1.4 dB,
and 3.14 dB in spatial SR $\times 2$, spatial SR $\times 4$, and angular SR
respectively. The reconstructed 4D light field demonstrates a balanced
performance distribution across all perspective images and presents superior
visual quality compared to the previous works.
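The abstract describes the 2-stage framework only at a high level, so the following PyTorch sketch is a minimal, hypothetical illustration of the idea rather than the paper's actual EVRN: stage 1 up-samples a 3D EPI volume to the target angular/spatial resolution with plain interpolation, and stage 2 refines the coarse volume with a small 3D convolutional residual network. The names ToyEVRN and two_stage_sr are invented for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyEVRN(nn.Module):
    """Illustrative stand-in for the EPI volume refinement stage (not the paper's EVRN).

    A small stack of 3D convolutions predicts a residual that is added back to
    the coarsely up-sampled EPI volume.
    """

    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, epi_volume):
        # epi_volume: (batch, 1, angular, height, width)
        return epi_volume + self.body(epi_volume)


def two_stage_sr(epi_volume, angular_scale=2, spatial_scale=2, refiner=None):
    """Stage 1: interpolate the 3D EPI volume to the target angular/spatial size.
    Stage 2: refine the coarse volume with a learned network."""
    b, c, a, h, w = epi_volume.shape
    coarse = F.interpolate(
        epi_volume,
        size=(a * angular_scale, h * spatial_scale, w * spatial_scale),
        mode="trilinear",
        align_corners=False,
    )
    refiner = refiner or ToyEVRN()
    return refiner(coarse)


if __name__ == "__main__":
    lr_volume = torch.rand(1, 1, 5, 32, 32)   # 5 views stacked along one angular axis
    sr_volume = two_stage_sr(lr_volume, angular_scale=2, spatial_scale=2)
    print(sr_volume.shape)                    # torch.Size([1, 1, 10, 64, 64])
```

In the paper, the first stage offers several up-sampling options and the refinement network is considerably more elaborate; the sketch only conveys the coarse-then-refine data flow over (angular, height, width) volumes.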
Related papers
- Diffusion-based Light Field Synthesis [50.24624071354433]
LFdiff is a diffusion-based generative framework tailored for LF synthesis.
We propose DistgUnet, a disentanglement-based noise estimation network, to harness comprehensive LF representations.
Extensive experiments demonstrate that LFdiff excels in synthesizing visually pleasing and disparity-controllable light fields.
arXiv Detail & Related papers (2024-02-01T13:13:16Z)
- Phase Guided Light Field for Spatial-Depth High Resolution 3D Imaging [36.208109063579066]
For 3D imaging, light field cameras typically capture a scene in a single shot, but they suffer heavily from low spatial resolution and limited depth accuracy.
We propose a phase guided light field algorithm to significantly improve both the spatial and depth resolutions for off-the-shelf light field cameras.
arXiv Detail & Related papers (2023-11-17T15:08:15Z)
- Learning Non-Local Spatial-Angular Correlation for Light Field Image Super-Resolution [36.69391399634076]
Exploiting spatial-angular correlation is crucial to light field (LF) image super-resolution (SR).
We propose a simple yet effective method to learn the non-local spatial-angular correlation for LF image SR.
Our method can fully incorporate the information from all angular views while achieving a global receptive field along the epipolar line.
arXiv Detail & Related papers (2023-02-16T03:40:40Z)
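The entry above, like the main paper, operates on epipolar-plane images (EPIs): 2D slices of the 4D light field in which scene points trace lines whose slope encodes disparity. A minimal NumPy sketch, assuming a single-channel light field indexed as L[u, v, y, x] (axis conventions vary between papers), shows how a horizontal EPI and a 3D EPI volume are extracted:

```python
import numpy as np

# Hypothetical 4D light field L[u, v, y, x]: u, v index the angular views,
# y, x index pixels inside each sub-aperture image (single channel for brevity).
U, V, H, W = 5, 5, 64, 64
light_field = np.random.rand(U, V, H, W).astype(np.float32)


def horizontal_epi(lf, v, y):
    """EPI spanned by the horizontal view index u and the image column x,
    obtained by fixing the vertical view index v and the image row y."""
    return lf[:, v, y, :]            # shape (U, W)


def epi_volume(lf, v):
    """3D EPI volume for a fixed vertical view index: the stack of all rows of
    all horizontal views, i.e. the kind of structure 3DVSR operates on."""
    return lf[:, v, :, :]            # shape (U, H, W)


epi = horizontal_epi(light_field, v=2, y=10)
vol = epi_volume(light_field, v=2)
print(epi.shape, vol.shape)          # (5, 64) (5, 64, 64)
```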
- Learning Texture Transformer Network for Light Field Super-Resolution [1.5469452301122173]
We propose a method to improve the spatial resolution of light field images with the aid of a texture transformer network (TTSR).
The results demonstrate around 4 dB to 6 dB PSNR gain over a bicubically resized light field image.
arXiv Detail & Related papers (2022-10-09T15:16:07Z)
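The dB figures quoted here and in the main abstract are peak signal-to-noise ratio (PSNR) values. A small self-contained sketch of the metric, using synthetic data rather than the benchmark images from any of the papers:

```python
import numpy as np


def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)


# Toy check with synthetic data: less noise gives a higher PSNR.
rng = np.random.default_rng(0)
gt = rng.random((64, 64))
noisy_a = np.clip(gt + 0.05 * rng.standard_normal(gt.shape), 0, 1)
noisy_b = np.clip(gt + 0.03 * rng.standard_normal(gt.shape), 0, 1)
print(psnr(gt, noisy_a), psnr(gt, noisy_b))
# A +3 dB change in PSNR corresponds to roughly halving the mean squared error,
# since 10 * log10(2) ≈ 3.01 dB.
```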
- Sub-Aperture Feature Adaptation in Single Image Super-resolution Model for Light Field Imaging [17.721259583120396]
This paper proposes an adaptation module inserted into a pretrained Single Image Super Resolution (SISR) network to leverage the power of the SISR model.
The adaptation further exploits the spatial and angular information in LF images to improve super-resolution performance.
arXiv Detail & Related papers (2022-07-25T03:43:56Z)
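The exact adaptation module of the entry above is not described in the summary, so the sketch below only illustrates the general pattern it builds on: a pretrained single-image SR network applied independently to every sub-aperture view, with a hypothetical hook (`adapt`) where an LF-aware module could inject angular information. DummySISR and super_resolve_views are placeholder names, not the cited method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DummySISR(nn.Module):
    """Placeholder for a pretrained single-image SR network; here it is just
    bicubic upsampling plus one convolution."""

    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, x):
        up = F.interpolate(x, scale_factor=self.scale, mode="bicubic", align_corners=False)
        return up + self.conv(up)


def super_resolve_views(lf_views, sisr, adapt=None):
    """Apply a SISR model to every sub-aperture view of an LF image.
    `adapt` is a hypothetical hook for an LF-aware adaptation module."""
    u, v, c, h, w = lf_views.shape
    out = sisr(lf_views.reshape(u * v, c, h, w))
    out = out.reshape(u, v, c, out.shape[-2], out.shape[-1])
    return adapt(out) if adapt is not None else out


views = torch.rand(5, 5, 1, 32, 32)              # 5x5 angular grid of 32x32 views
sr_views = super_resolve_views(views, DummySISR(scale=2))
print(sr_views.shape)                            # torch.Size([5, 5, 1, 64, 64])
```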
- Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications [78.63280020581662]
A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-24T08:16:32Z)
- A Parallel Down-Up Fusion Network for Salient Object Detection in Optical Remote Sensing Images [82.87122287748791]
We propose a novel Parallel Down-up Fusion network (PDF-Net) for salient object detection in optical remote sensing images (RSIs).
It takes full advantage of the in-path low- and high-level features and cross-path multi-resolution features to distinguish diversely scaled salient objects and suppress the cluttered backgrounds.
Experiments on the ORSSD dataset demonstrate that the proposed network is superior to the state-of-the-art approaches both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-10-02T05:27:57Z)
- Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution and zero-centric residual image, which contains the high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z)
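The zero-centric residual idea from the entry above can be illustrated with a minimal sketch (not PZRes-Net itself, and single-channel 2D rather than hyperspectral/RGB fusion): the network predicts a residual, its spatial mean is subtracted so the residual is zero-centered and carries only high-frequency detail, and the result is added to a plainly up-sampled base image.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ZeroCentricResidualSR(nn.Module):
    """Minimal illustration of zero-centric residual learning (not PZRes-Net)."""

    def __init__(self, channels=16, scale=2):
        super().__init__()
        self.scale = scale
        self.residual_head = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, lr):
        # Plain up-sampled base image carrying the low-frequency content.
        base = F.interpolate(lr, scale_factor=self.scale, mode="bilinear", align_corners=False)
        residual = self.residual_head(base)
        # Remove the spatial mean so the residual is zero-centric (high-frequency only).
        residual = residual - residual.mean(dim=(-2, -1), keepdim=True)
        return base + residual


sr = ZeroCentricResidualSR()(torch.rand(1, 1, 32, 32))
print(sr.shape)  # torch.Size([1, 1, 64, 64])
```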
- Light Field Spatial Super-resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensional spatiality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z)
- Learning Light Field Angular Super-Resolution via a Geometry-Aware Network [101.59693839475783]
We propose an end-to-end learning-based approach aiming at angularly super-resolving a sparsely-sampled light field with a large baseline.
Our method improves the PSNR over the second-best method by up to 2 dB on average, while reducing the execution time by 48$\times$.
arXiv Detail & Related papers (2020-02-26T02:36:57Z)