Epipolar Focus Spectrum: A Novel Light Field Representation and
Application in Dense-view Reconstruction
- URL: http://arxiv.org/abs/2204.00193v1
- Date: Fri, 1 Apr 2022 04:01:46 GMT
- Title: Epipolar Focus Spectrum: A Novel Light Field Representation and
Application in Dense-view Reconstruction
- Authors: Yaning Li, Xue Wang, Hao Zhu, Guoqing Zhou, and Qing Wang
- Abstract summary: Existing light field representations, such as epipolar plane image (EPI) and sub-aperture images, do not consider the structural characteristics across the views.
This paper proposes a novel Epipolar Focus Spectrum (EFS) representation by rearranging the EPI spectrum.
- Score: 12.461169608271812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing light field representations, such as the epipolar plane image (EPI) and
sub-aperture images, do not consider the structural characteristics across the views,
so they usually require additional disparity and spatial structure cues for follow-up
tasks. Moreover, they have difficulty handling occlusions and large-disparity scenes.
To this end, this paper proposes a novel Epipolar Focus Spectrum (EFS) representation
obtained by rearranging the EPI spectrum. Unlike the classical EPI representation,
where each EPI line corresponds to a specific depth, each EFS line maps one-to-one to
a view. Accordingly, compared with a sparsely sampled light field, a densely sampled
one with the same field of view (FoV) yields a more compact distribution of such
linear structures within the double-cone-shaped region of identical opening angle in
its EFS; hence the EFS representation is invariant to scene depth. To demonstrate its
effectiveness, we develop a trainable EFS-based pipeline for light field
reconstruction, in which a dense light field is reconstructed by compensating the
"missing EFS lines" of a given sparse light field, yielding promising results with
cross-view consistency, especially in the presence of severe occlusion and large
disparity. Experimental results on both synthetic and real-world datasets demonstrate
the validity and superiority of the proposed method over state-of-the-art methods.
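To make the quantities in the abstract concrete, the following minimal sketch (not the authors' implementation) extracts a horizontal EPI slice from a 4D light field and computes its 2D Fourier spectrum, which is the object the EFS rearranges; the (U, V, S, T) indexing convention, the toy array sizes, and the function names are assumptions made purely for illustration.

```python
import numpy as np

def horizontal_epi(light_field, v_fixed, t_fixed):
    """Extract a horizontal EPI slice L(u, s) from a 4D light field.

    light_field: array of shape (U, V, S, T) -- angular (u, v) x spatial (s, t).
    The indexing convention here is an assumption for illustration only.
    """
    return light_field[:, v_fixed, :, t_fixed]  # shape (U, S)

def epi_spectrum(epi):
    """2D Fourier spectrum (magnitude) of an EPI. The EFS is obtained by
    rearranging this spectrum so that each line indexes a view, as
    described in the paper; that rearrangement is not reproduced here."""
    return np.abs(np.fft.fftshift(np.fft.fft2(epi)))

# Toy usage: a random 5x5-view light field with 64x64-pixel views.
lf = np.random.rand(5, 5, 64, 64)
epi = horizontal_epi(lf, v_fixed=2, t_fixed=32)
spec = epi_spectrum(epi)
print(epi.shape, spec.shape)  # (5, 64) (5, 64)
```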
Related papers
- Generalizable Non-Line-of-Sight Imaging with Learnable Physical Priors [52.195637608631955]
Non-line-of-sight (NLOS) imaging has attracted increasing attention due to its potential applications.
Existing NLOS reconstruction approaches are constrained by the reliance on empirical physical priors.
We introduce a novel learning-based solution comprising two key designs: Learnable Path Compensation (LPC) and Adaptive Phasor Field (APF).
arXiv Detail & Related papers (2024-09-21T04:39:45Z)
- Diffusion-based Light Field Synthesis [50.24624071354433]
LFdiff is a diffusion-based generative framework tailored for LF synthesis.
We propose DistgUnet, a disentanglement-based noise estimation network, to harness comprehensive LF representations.
Extensive experiments demonstrate that LFdiff excels in synthesizing visually pleasing and disparity-controllable light fields.
arXiv Detail & Related papers (2024-02-01T13:13:16Z)
- Exploring Invariant Representation for Visible-Infrared Person Re-Identification [77.06940947765406]
Cross-spectral person re-identification, which aims to associate identities with pedestrians across different spectra, faces the main challenge of modality discrepancy.
In this paper, we address the problem at both the image level and the feature level in an end-to-end hybrid learning framework named the robust feature mining network (RFM).
Experimental results on two standard cross-spectral person re-identification datasets, RegDB and SYSU-MM01, demonstrate state-of-the-art performance.
arXiv Detail & Related papers (2023-02-02T05:24:50Z)
- Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications [78.63280020581662]
A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-24T08:16:32Z)
- Deep Anti-aliasing of Whole Focal Stack Using its Slice Spectrum [7.746179634773142]
The paper aims at removing the aliasing effects for the whole focal stack generated from a sparse 3D light field.
We first explore the structural characteristics embedded in the focal stack slice and its corresponding frequency-domain representation.
We also observe that the energy distribution of the FSS always lies within the same triangular area under different angular sampling rates.
arXiv Detail & Related papers (2021-01-23T05:14:49Z)
- EPI-based Oriented Relation Networks for Light Field Depth Estimation [13.120247042876175]
We propose an end-to-end fully convolutional network (FCN) to estimate the depth value of the intersection point on the horizontal and vertical Epipolar Plane Images (EPIs).
We present a new feature-extraction module, called the Oriented Relation Module (ORM), that constructs the relationship between line orientations.
To facilitate training, we also propose a refocusing-based data augmentation method to obtain different slopes from EPIs of the same scene point; an illustrative EPI-shearing sketch of this idea appears after this list.
arXiv Detail & Related papers (2020-07-09T03:39:09Z)
- Spatial-Angular Attention Network for Light Field Reconstruction [64.27343801968226]
We propose a spatial-angular attention network to perceive correspondences in the light field non-locally.
Motivated by the non-local attention mechanism, a spatial-angular attention module is introduced to compute the responses from all the positions in the epipolar plane for each pixel in the light field.
We then propose a multi-scale reconstruction structure to efficiently implement the non-local attention at a low spatial scale.
arXiv Detail & Related papers (2020-07-05T06:55:29Z)
- Light Field Spatial Super-resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensional nature and complex geometric structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z)
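As referenced in the EPI-based Oriented Relation Networks entry above, refocusing a light field corresponds to shearing its EPIs, which changes every EPI line's slope by the same amount. The sketch below illustrates that shearing operation generically, assuming integer circular pixel shifts; it is not the authors' augmentation code, which would typically use sub-pixel interpolation.

```python
import numpy as np

def shear_epi(epi, disparity_shift):
    """Shear an EPI by shifting each view row proportionally to its angular
    index, i.e. a synthetic refocus. Every EPI line's slope changes by the
    same amount, which is the idea behind refocusing-based augmentation.
    Circular integer-pixel shifts are a simplification for illustration."""
    num_views, _ = epi.shape
    center = (num_views - 1) / 2.0
    sheared = np.empty_like(epi)
    for u in range(num_views):
        shift = int(round(disparity_shift * (u - center)))
        sheared[u] = np.roll(epi[u], shift)
    return sheared

# Toy usage: augment one EPI with several shear amounts.
epi = np.random.rand(9, 128)  # 9 views, 128 pixels wide
augmented = [shear_epi(epi, d) for d in (-1.0, 0.0, 1.0)]
print([a.shape for a in augmented])
```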