Probabilistic-based Feature Embedding of 4-D Light Fields for
Compressive Imaging and Denoising
- URL: http://arxiv.org/abs/2306.08836v3
- Date: Thu, 11 Jan 2024 03:26:59 GMT
- Title: Probabilistic-based Feature Embedding of 4-D Light Fields for
Compressive Imaging and Denoising
- Authors: Xianqiang Lyu and Junhui Hou
- Abstract summary: The 4-D light field (LF) poses great challenges in achieving efficient and effective feature embedding.
We propose a probabilistic-based feature embedding (PFE), which learns a feature embedding architecture by assembling various low-dimensional convolution patterns.
Our experiments demonstrate the significant superiority of our methods on both real-world and synthetic 4-D LF images.
- Score: 62.347491141163225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The high-dimensional nature of the 4-D light field (LF) poses great
challenges in achieving efficient and effective feature embedding, which
severely impacts the performance of downstream tasks. To tackle this crucial
issue, in contrast to existing methods with empirically-designed architectures,
we propose a probabilistic-based feature embedding (PFE), which learns a
feature embedding architecture by assembling various low-dimensional
convolution patterns in a probability space for fully capturing spatial-angular
information. Building upon the proposed PFE, we then leverage the intrinsic
linear imaging model of the coded aperture camera to construct a
cycle-consistent 4-D LF reconstruction network from coded measurements.
Moreover, we incorporate PFE into an iterative optimization framework for 4-D
LF denoising. Our extensive experiments demonstrate the significant superiority
of our methods on both real-world and synthetic 4-D LF images, both
quantitatively and qualitatively, when compared with state-of-the-art methods.
The source code will be publicly available at
https://github.com/lyuxianqiang/LFCA-CR-NET.
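The abstract names two concrete ingredients: a feature embedding assembled from low-dimensional convolution patterns weighted in a probability space, and the intrinsic linear imaging model of the coded-aperture camera that the reconstruction network inverts. The PyTorch sketch below is only a minimal illustration of these ideas, not the authors' implementation (see the repository above for that); the (B, C, U, V, H, W) tensor layout, the two-candidate pattern set (spatial and angular 2-D convolutions), and the plain softmax over architecture logits are assumptions made for brevity.

```python
# Minimal sketch (assumed layout and candidate set, not the released code):
# candidate low-dimensional convolutions over a 4-D LF tensor are mixed with
# learned probabilities, and a coded-aperture measurement is a mask-weighted
# linear sum of the angular views.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProbabilisticFeatureEmbedding(nn.Module):
    """Mix spatial and angular 2-D convolutions with learned probabilities."""

    def __init__(self, channels: int):
        super().__init__()
        # Candidate low-dimensional patterns: a spatial conv over (H, W)
        # and an angular conv over (U, V), both cheap 2-D operations.
        self.spatial_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.angular_conv = nn.Conv2d(channels, channels, 3, padding=1)
        # Unnormalized architecture logits, learned jointly with the weights.
        self.logits = nn.Parameter(torch.zeros(2))

    def forward(self, lf: torch.Tensor) -> torch.Tensor:
        # lf: (B, C, U, V, H, W) -- batch, channels, angular (U, V), spatial (H, W).
        b, c, u, v, h, w = lf.shape

        # Spatial branch: fold the angular dims into the batch, convolve over (H, W).
        x = lf.permute(0, 2, 3, 1, 4, 5).reshape(b * u * v, c, h, w)
        x = self.spatial_conv(x)
        x = x.reshape(b, u, v, c, h, w).permute(0, 3, 1, 2, 4, 5)

        # Angular branch: fold the spatial dims into the batch, convolve over (U, V).
        y = lf.permute(0, 4, 5, 1, 2, 3).reshape(b * h * w, c, u, v)
        y = self.angular_conv(y)
        y = y.reshape(b, h, w, c, u, v).permute(0, 3, 4, 5, 1, 2)

        # Assemble the candidate patterns in a probability space.
        p = F.softmax(self.logits, dim=0)
        return p[0] * x + p[1] * y


def coded_aperture_measurement(lf: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Linear coded-aperture model: each 2-D measurement is a mask-weighted
    sum of the angular views. lf: (B, C, U, V, H, W); mask: (U, V)."""
    return (lf * mask[None, None, :, :, None, None]).sum(dim=(2, 3))


# Usage: a toy 5x5-view light field with 32x32 spatial resolution.
lf = torch.randn(1, 8, 5, 5, 32, 32)
feat = ProbabilisticFeatureEmbedding(8)(lf)       # (1, 8, 5, 5, 32, 32)
meas = coded_aperture_measurement(lf, torch.rand(5, 5))  # (1, 8, 32, 32)
```

In the paper's formulation the candidate set and the probabilistic relaxation are richer than this two-branch mixture; the sketch only shows how spatial-angular information can be captured by inexpensive 2-D convolutions whose combination is learned rather than hand-designed.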
Related papers
- LGFN: Lightweight Light Field Image Super-Resolution using Local Convolution Modulation and Global Attention Feature Extraction [5.461017270708014]
We propose a lightweight model named LGFN which integrates the local and global features of different views and the features of different channels for LF image SR.
Our model has 0.45M parameters and 19.33G FLOPs, and achieves competitive performance.
arXiv Detail & Related papers (2024-09-26T11:53:25Z)
- Enhancing Underwater Imaging with 4-D Light Fields: Dataset and Method [77.80712860663886]
4-D light fields (LFs) can enhance underwater imaging, which is plagued by light absorption, scattering, and other challenges.
We propose a progressive framework for underwater 4-D LF image enhancement and depth estimation.
We construct the first 4-D LF-based underwater image dataset for quantitative evaluation and supervised training of learning-based methods.
arXiv Detail & Related papers (2024-08-30T15:06:45Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Physics-Informed Ensemble Representation for Light-Field Image Super-Resolution [12.156009287223382]
We analyze the coordinate transformation of the light field (LF) imaging process to reveal the geometric relationship in the LF images.
We introduce a new LF subspace of virtual-slit images (VSI) that provide sub-pixel information complementary to sub-aperture images.
To super-resolve image structures from undersampled LF data, we propose a geometry-aware decoder, named EPIXformer.
arXiv Detail & Related papers (2023-05-31T16:27:00Z)
- Disentangling Light Fields for Super-Resolution and Disparity Estimation [67.50796924758221]
Light field (LF) cameras record both intensity and directions of light rays, and encode 3D scenes into 4D LF images.
It is challenging for convolutional neural networks (CNNs) to process LF images since the spatial and angular information are highly intertwined with varying disparities.
We propose a generic mechanism to disentangle this coupled information for LF image processing.
arXiv Detail & Related papers (2022-02-22T01:04:41Z)
- Light Field Reconstruction via Deep Adaptive Fusion of Hybrid Lenses [67.01164492518481]
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses.
We propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input.
Our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
arXiv Detail & Related papers (2021-02-14T06:44:47Z)
- Light Field Spatial Super-resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensional nature and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.