Disentangling Light Fields for Super-Resolution and Disparity Estimation
- URL: http://arxiv.org/abs/2202.10603v5
- Date: Sat, 22 Jul 2023 09:46:15 GMT
- Title: Disentangling Light Fields for Super-Resolution and Disparity Estimation
- Authors: Yingqian Wang, Longguang Wang, Gaochang Wu, Jungang Yang, Wei An,
Jingyi Yu, Yulan Guo
- Abstract summary: Light field (LF) cameras record both intensity and directions of light rays, and encode 3D scenes into 4D LF images.
It is challenging for convolutional neural networks (CNNs) to process LF images since the spatial and angular information is highly intertwined with varying disparities.
We propose a generic mechanism to disentangle this coupled information for LF image processing.
- Score: 67.50796924758221
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Light field (LF) cameras record both intensity and directions of light rays,
and encode 3D scenes into 4D LF images. Recently, many convolutional neural
networks (CNNs) have been proposed for various LF image processing tasks.
However, it is challenging for CNNs to effectively process LF images since the
spatial and angular information is highly intertwined with varying
disparities. In this paper, we propose a generic mechanism to disentangle this
coupled information for LF image processing. Specifically, we first design a
class of domain-specific convolutions to disentangle LFs from different
dimensions, and then leverage these disentangled features by designing
task-specific modules. Our disentangling mechanism incorporates the LF
structure prior well and effectively handles 4D LF data. Based on the proposed
mechanism, we develop three networks (i.e., DistgSSR, DistgASR, and DistgDisp)
for spatial super-resolution, angular super-resolution, and disparity
estimation. Experimental results show that our networks achieve
state-of-the-art performance on all these three tasks, which demonstrates the
effectiveness, efficiency, and generality of our disentangling mechanism.
Project page: https://yingqianwang.github.io/DistgLF/.
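As a concrete illustration of the mechanism described in the abstract, the sketch below arranges the 4D LF as a macro-pixel image (MacPI) and applies one domain-specific convolution per dimension. PyTorch is an assumed choice here, and the kernel/stride settings are illustrative rather than the authors' exact configuration; the reference implementation is linked from the project page.

```python
# A minimal sketch of the disentangling convolutions, assuming PyTorch.
# The 4D LF is arranged as a macro-pixel image (MacPI) of shape
# [B, C, A*H, A*W], where A is the angular resolution; each A x A
# macro-pixel holds one spatial location seen from all views.
import torch
import torch.nn as nn

class DistgConvs(nn.Module):
    def __init__(self, ch: int, A: int):
        super().__init__()
        # Spatial feature extractor: 3x3 conv with dilation A, so the kernel
        # hops across macro-pixels and only mixes pixels of the same view.
        self.sfe = nn.Conv2d(ch, ch, kernel_size=3, stride=1,
                             dilation=A, padding=A)
        # Angular feature extractor: AxA conv with stride A, so the kernel
        # covers exactly one macro-pixel and mixes all A*A views there.
        self.afe = nn.Conv2d(ch, ch, kernel_size=A, stride=A)
        # Horizontal EPI feature extractor: a 1 x A^2 strip conv with stride
        # (1, A), spanning A views over A spatial positions along a row.
        self.efe_h = nn.Conv2d(ch, ch, kernel_size=(1, A * A),
                               stride=(1, A), padding=(0, A * (A - 1) // 2))

    def forward(self, macpi: torch.Tensor):
        # The three outputs live on different grids (full MacPI,
        # per-macro-pixel, and per-EPI-column); task-specific modules
        # reshape or upsample them before fusion.
        return self.sfe(macpi), self.afe(macpi), self.efe_h(macpi)
```

For example, with A = 5 and 32 feature channels, `DistgConvs(32, 5)` maps a [B, 32, 5H, 5W] MacPI to spatial, angular, and horizontal-EPI features of shape [B, 32, 5H, 5W], [B, 32, H, W], and [B, 32, 5H, W], respectively; a vertical-EPI branch would mirror `efe_h` with transposed kernel and stride.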
Related papers
- LGFN: Lightweight Light Field Image Super-Resolution using Local Convolution Modulation and Global Attention Feature Extraction [5.461017270708014]
We propose a lightweight model named LGFN which integrates the local and global features of different views and the features of different channels for LF image SR.
Our model has only 0.45M parameters and 19.33G FLOPs, yet achieves competitive results.
arXiv Detail & Related papers (2024-09-26T11:53:25Z)
- LFIC-DRASC: Deep Light Field Image Compression Using Disentangled Representation and Asymmetrical Strip Convolution [51.909036244222904]
We propose an end-to-end deep LF image compression method using disentangled representation and asymmetrical strip convolution (a generic strip-convolution sketch follows this list).
Experimental results demonstrate that the proposed LFIC-DRASC achieves an average bit rate reduction of 20.5%.
arXiv Detail & Related papers (2024-09-18T05:33:42Z)
- Diffusion-based Light Field Synthesis [50.24624071354433]
LFdiff is a diffusion-based generative framework tailored for LF synthesis.
We propose DistgUnet, a disentanglement-based noise estimation network, to harness comprehensive LF representations.
Extensive experiments demonstrate that LFdiff excels in synthesizing visually pleasing and disparity-controllable light fields.
arXiv Detail & Related papers (2024-02-01T13:13:16Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result (a schematic sketch of this loop follows the list).
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Probabilistic-based Feature Embedding of 4-D Light Fields for Compressive Imaging and Denoising [62.347491141163225]
The 4-D light field (LF) poses great challenges for efficient and effective feature embedding.
We propose a probabilistic-based feature embedding (PFE), which learns a feature embedding architecture by assembling various low-dimensional convolution patterns.
Our experiments demonstrate the significant superiority of our methods on both real-world and synthetic 4-D LF images.
arXiv Detail & Related papers (2023-06-15T03:46:40Z)
- Physics-Informed Ensemble Representation for Light-Field Image Super-Resolution [12.156009287223382]
We analyze the coordinate transformation of the light field (LF) imaging process to reveal the geometric relationship in the LF images.
We introduce a new LF subspace of virtual-slit images (VSI) that provide sub-pixel information complementary to sub-aperture images.
To super-resolve image structures from undersampled LF data, we propose a geometry-aware decoder, named EPIXformer.
arXiv Detail & Related papers (2023-05-31T16:27:00Z)
- Learning Non-Local Spatial-Angular Correlation for Light Field Image Super-Resolution [36.69391399634076]
Exploiting spatial-angular correlation is crucial to light field (LF) image super-resolution (SR).
We propose a simple yet effective method to learn the non-local spatial-angular correlation for LF image SR.
Our method can fully incorporate the information from all angular views while achieving a global receptive field along the epipolar line.
arXiv Detail & Related papers (2023-02-16T03:40:40Z)
- Multi-Projection Fusion and Refinement Network for Salient Object Detection in 360° Omnidirectional Image [141.10227079090419]
We propose a Multi-Projection Fusion and Refinement Network (MPFR-Net) to detect salient objects in 360° omnidirectional images.
MPFR-Net uses the equirectangular projection image and four corresponding cube-unfolding images as inputs.
Experimental results on two omnidirectional datasets demonstrate that the proposed approach outperforms the state-of-the-art methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-12-23T14:50:40Z)
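Referenced from the LFIC-DRASC entry above: a generic sketch of asymmetrical strip convolution, assuming PyTorch. The pairing of 1×k and k×1 kernels, the kernel size k, and the residual sum are illustrative assumptions, not the paper's exact layer design.

```python
# A generic strip-convolution sketch: long, thin kernels along each axis
# instead of one square kernel, widening the receptive field cheaply.
import torch
import torch.nn as nn

class AsymStripConv(nn.Module):
    def __init__(self, ch: int, k: int = 7):
        super().__init__()
        # Horizontal strip: 1 x k kernel, padded to preserve width.
        self.horiz = nn.Conv2d(ch, ch, kernel_size=(1, k), padding=(0, k // 2))
        # Vertical strip: k x 1 kernel, padded to preserve height.
        self.vert = nn.Conv2d(ch, ch, kernel_size=(k, 1), padding=(k // 2, 0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum the two strip responses; a residual keeps the identity path.
        return x + self.horiz(x) + self.vert(x)
```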
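Referenced from the DCUNet entry above: a minimal, hypothetical sketch of the described unfolding loop, again assuming PyTorch. `illum_nets` and `stage_nets` are stand-in modules invented for illustration; the actual DCUNet stages differ.

```python
# Each unfolding stage estimates an illumination map from the current
# enhanced result, then uses it to produce the next enhanced result.
import torch
import torch.nn as nn

class UnfoldingEnhancer(nn.Module):
    def __init__(self, ch: int = 3, num_stages: int = 3):
        super().__init__()
        # Hypothetical illumination estimators, one per stage (output in (0, 1)).
        self.illum_nets = nn.ModuleList(
            nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.Sigmoid())
            for _ in range(num_stages))
        # Hypothetical compensation stages that fuse image and illumination.
        self.stage_nets = nn.ModuleList(
            nn.Conv2d(2 * ch, ch, 3, padding=1) for _ in range(num_stages))

    def forward(self, low_light: torch.Tensor) -> torch.Tensor:
        x = low_light
        for illum_net, stage_net in zip(self.illum_nets, self.stage_nets):
            illum = illum_net(x)                      # estimate illumination map
            x = stage_net(torch.cat([x, illum], 1))   # produce next enhanced result
        return x
```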