Light Field Spatial Super-resolution via Deep Combinatorial Geometry
Embedding and Structural Consistency Regularization
- URL: http://arxiv.org/abs/2004.02215v1
- Date: Sun, 5 Apr 2020 14:39:57 GMT
- Title: Light Field Spatial Super-resolution via Deep Combinatorial Geometry
Embedding and Structural Consistency Regularization
- Authors: Jing Jin and Junhui Hou and Jie Chen and Sam Kwong
- Abstract summary: Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensionality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image super-resolution (SR).
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Light field (LF) images acquired by hand-held devices usually suffer from low
spatial resolution as the limited sampling resources have to be shared with the
angular dimension. LF spatial super-resolution (SR) thus becomes an
indispensable part of the LF camera processing pipeline. The
high-dimensionality characteristic and complex geometrical structure of LF
images make the problem more challenging than traditional single-image SR. The
performance of existing methods is still limited as they fail to thoroughly
explore the coherence among LF views and are insufficient in accurately
preserving the parallax structure of the scene. In this paper, we propose a
novel learning-based LF spatial SR framework, in which each view of an LF image
is first individually super-resolved by exploring the complementary information
among views with combinatorial geometry embedding. For accurate preservation of
the parallax structure among the reconstructed views, a regularization network
trained over a structure-aware loss function is subsequently appended to
enforce correct parallax relationships over the intermediate estimation. Our
proposed approach is evaluated over datasets with a large number of testing
images including both synthetic and real-world scenes. Experimental results
demonstrate the advantage of our approach over state-of-the-art methods, i.e.,
our method not only improves the average PSNR by more than 1.0 dB but also
preserves more accurate parallax details, at a lower computational cost.
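As a rough illustration of the two-stage design described in the abstract (per-view super-resolution that draws on complementary information from the other views, followed by a cross-view consistency step), the NumPy sketch below uses nearest-neighbor upsampling and simple view averaging as stand-ins for the learned components. All function names are hypothetical and not from the paper.

```python
import numpy as np

def upsample_nn(view, scale):
    # Nearest-neighbor upsampling stands in for the learned SR module.
    return view.repeat(scale, axis=0).repeat(scale, axis=1)

def combinatorial_embedding(views, target_idx, scale):
    # Stage 1: super-resolve one view using complementary information
    # from all views. Here the "embedding" is a plain average of the
    # upsampled views, blended with the target view.
    ups = [upsample_nn(v, scale) for v in views]
    complementary = np.mean(ups, axis=0)
    return 0.5 * ups[target_idx] + 0.5 * complementary

def structural_regularization(sr_views):
    # Stage 2: enforce parallax consistency across the reconstructed
    # views. The regularization network is replaced here by a mild
    # pull toward the cross-view mean.
    mean_view = np.mean(sr_views, axis=0)
    return [0.8 * v + 0.2 * mean_view for v in sr_views]

# Toy light field of four 4x4 views, super-resolved by a factor of 2.
rng = np.random.default_rng(0)
lf = [rng.random((4, 4)) for _ in range(4)]
stage1 = [combinatorial_embedding(lf, i, scale=2) for i in range(len(lf))]
stage2 = structural_regularization(stage1)
print(stage2[0].shape)  # (8, 8)
```

The point of the two stages is separation of concerns: stage 1 maximizes per-view detail from all available views, while stage 2 only has to repair inter-view parallax relationships in the intermediate estimate.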
Related papers
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Physics-Informed Ensemble Representation for Light-Field Image Super-Resolution [12.156009287223382]
We analyze the coordinate transformation of the light field (LF) imaging process to reveal the geometric relationship in the LF images.
We introduce a new LF subspace of virtual-slit images (VSI) that provide sub-pixel information complementary to sub-aperture images.
To super-resolve image structures from undersampled LF data, we propose a geometry-aware decoder, named EPIXformer.
arXiv Detail & Related papers (2023-05-31T16:27:00Z)
- Scale-Consistent Fusion: from Heterogeneous Local Sampling to Global Immersive Rendering [9.893045525907219]
Image-based geometric modeling and novel view synthesis based on sparse, large-baseline samplings are challenging but important tasks for emerging multimedia applications such as virtual reality and immersive telepresence.
With the popularization of commercial light field (LF) cameras, capturing LF images (LFIs) is as convenient as taking regular photos, and geometry information can be reliably inferred.
We propose a novel scale-consistent volume rescaling algorithm that robustly aligns the disparity probability volumes (DPV) among different captures for scale-consistent global geometry fusion.
arXiv Detail & Related papers (2021-06-17T14:27:08Z)
- Light Field Reconstruction via Deep Adaptive Fusion of Hybrid Lenses [67.01164492518481]
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses.
We propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input.
Our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
arXiv Detail & Related papers (2021-02-14T06:44:47Z)
- A Parallel Down-Up Fusion Network for Salient Object Detection in Optical Remote Sensing Images [82.87122287748791]
We propose a novel Parallel Down-up Fusion network (PDF-Net) for salient object detection in optical remote sensing images (RSIs)
It takes full advantage of the in-path low- and high-level features and cross-path multi-resolution features to distinguish diversely scaled salient objects and suppress the cluttered backgrounds.
Experiments on the ORSSD dataset demonstrate that the proposed network is superior to the state-of-the-art approaches both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-10-02T05:27:57Z)
- Deep Selective Combinatorial Embedding and Consistency Regularization for Light Field Super-resolution [93.95828097088608]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensionality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF spatial SR framework to explore the coherence among LF sub-aperture images.
Experimental results over both synthetic and real-world LF datasets demonstrate the significant advantage of our approach over state-of-the-art methods.
arXiv Detail & Related papers (2020-09-26T08:34:37Z)
- Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution and zero-centric residual image, which contains high-frequency spatial details of the scene.
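The "zero-centric residual" idea in this summary, predicting a residual image constrained to zero mean so that it carries only high-frequency detail rather than a global intensity shift, can be sketched in a few lines. Everything except the mean-subtraction step is a hypothetical stand-in for the learned network.

```python
import numpy as np

def zero_centric_residual(predicted_residual):
    # Zero-centering: subtracting the mean leaves only high-frequency
    # spatial detail, with no global brightness offset.
    return predicted_residual - predicted_residual.mean()

def fuse(upsampled_low_res, residual):
    # The high-resolution estimate is the coarse upsampled image plus
    # the zero-centric residual.
    return upsampled_low_res + zero_centric_residual(residual)

rng = np.random.default_rng(1)
coarse = rng.random((8, 8))        # stand-in for an upsampled LR image
raw_residual = rng.random((8, 8))  # stand-in for a network output
hr = fuse(coarse, raw_residual)
```

Constraining the residual this way keeps the global intensity of the output anchored to the coarse estimate, so the network only has to account for fine detail.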
arXiv Detail & Related papers (2020-06-18T06:32:11Z)
- High-Order Residual Network for Light Field Super-Resolution [39.93400777363467]
Plenoptic cameras usually sacrifice the spatial resolution of their sub-aperture images (SAIs) to acquire information from different viewpoints.
We propose a novel high-order residual network to learn the geometric features hierarchically from the light field for reconstruction.
Our approach enables high-quality reconstruction even in challenging regions and outperforms state-of-the-art single image or LF reconstruction methods with both quantitative measurements and visual evaluation.
arXiv Detail & Related papers (2020-03-29T18:06:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.