Learning-Based Practical Light Field Image Compression Using A
Disparity-Aware Model
- URL: http://arxiv.org/abs/2106.11558v2
- Date: Wed, 23 Jun 2021 04:45:09 GMT
- Title: Learning-Based Practical Light Field Image Compression Using A
Disparity-Aware Model
- Authors: Mohana Singh and Renu M. Rameshan
- Abstract summary: We propose a new learning-based, disparity-aided model for compression of 4D light field images.
The model is end-to-end trainable, eliminating the need for hand-tuning separate modules.
Comparisons with the state of the art show encouraging performance in terms of PSNR and MS-SSIM metrics.
- Score: 1.5229257192293197
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Light field technology has increasingly attracted the attention of the
research community with its many possible applications. The lenslet array in
commercial plenoptic cameras helps capture both the spatial and angular
information of light rays in a single exposure. While the resulting high
dimensionality of light field data enables its superior capabilities, it also
impedes its extensive adoption. Hence, there is a compelling need for efficient
compression of light field images. Existing solutions are commonly composed of
several separate modules, some of which may not have been designed for the
specific structure and quality of light field data. This increases the
complexity of the codec and results in impractical decoding runtimes. We
propose a new learning-based, disparity-aided model for compression of 4D light
field images capable of parallel decoding. The model is end-to-end trainable,
eliminating the need for hand-tuning separate modules and allowing joint
learning of rate and distortion. The disparity-aided approach ensures the
structural integrity of the reconstructed light fields. Comparisons with the
state of the art show encouraging performance in terms of PSNR and MS-SSIM
metrics. Also, there is a notable gain in the encoding and decoding runtimes.
Source code is available at https://moha23.github.io/LF-DAAE.
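The abstract's central claim, end-to-end training with joint learning of rate and distortion, follows the general recipe of learned image compression. The sketch below is only a minimal illustration of that recipe, not the released LF-DAAE model: the layer sizes, the uniform-noise quantization surrogate, and the factorized Gaussian rate proxy are assumptions introduced for this example.

```python
# Minimal sketch of joint rate-distortion training for a learned light field
# codec. Illustrative only: layer sizes, the uniform-noise quantization
# surrogate, and the Gaussian rate proxy are assumptions, not the authors'
# disparity-aware architecture.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyLFCodec(nn.Module):
    """Toy codec: sub-aperture views stacked along the channel axis."""
    def __init__(self, num_views=25, latent_channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(num_views * 3, 128, 5, stride=2, padding=2), nn.GELU(),
            nn.Conv2d(128, latent_channels, 5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 128, 5, stride=2,
                               padding=2, output_padding=1), nn.GELU(),
            nn.ConvTranspose2d(128, num_views * 3, 5, stride=2,
                               padding=2, output_padding=1),
        )
        # Learned per-channel log-scale for a factorized Gaussian rate proxy.
        self.log_scale = nn.Parameter(torch.zeros(latent_channels, 1, 1))

    def forward(self, x):
        y = self.encoder(x)
        if self.training:
            # Additive uniform noise stands in for quantization during training.
            y_hat = y + torch.empty_like(y).uniform_(-0.5, 0.5)
        else:
            y_hat = torch.round(y)
        x_hat = self.decoder(y_hat)
        # Rate proxy: negative log-likelihood (in bits) under the Gaussian prior.
        scale = self.log_scale.exp()
        nll = 0.5 * (y_hat / scale) ** 2 + self.log_scale + 0.5 * math.log(2 * math.pi)
        rate_bits = nll.sum() / math.log(2.0)
        return x_hat, rate_bits

def rd_loss(x, x_hat, rate_bits, lmbda=0.01):
    """Joint rate-distortion objective: bits-per-pixel + lambda * distortion."""
    num_pixels = x.shape[0] * x.shape[-2] * x.shape[-1]
    return rate_bits / num_pixels + lmbda * F.mse_loss(x_hat, x)

# Usage on a toy 5x5 RGB light field (views flattened into channels).
model = ToyLFCodec()
lf = torch.rand(1, 25 * 3, 64, 64)
x_hat, rate_bits = model(lf)
rd_loss(lf, x_hat, rate_bits).backward()
```

A real disparity-aware model would additionally condition reconstruction on estimated disparity so that the angular structure of the recovered views stays consistent; the abstract does not specify that mechanism, so it is omitted here.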
Related papers
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Learning Kernel-Modulated Neural Representation for Efficient Light Field Compression [41.24757573290883]
We design a compact neural network representation for the light field compression task.
It is composed of two types of complementary kernels: descriptive kernels (descriptors) that store scene description information learned during training, and modulatory kernels (modulators) that control the rendering of different SAIs from the queried perspectives.
arXiv Detail & Related papers (2023-07-12T12:58:03Z)
- Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and Transformer-Based Method [51.30748775681917]
We consider the task of low-light image enhancement (LLIE) and introduce a large-scale database consisting of images at 4K and 8K resolution.
We conduct systematic benchmarking studies and provide a comparison of current LLIE algorithms.
As a second contribution, we introduce LLFormer, a transformer-based low-light enhancement method.
arXiv Detail & Related papers (2022-12-22T09:05:07Z)
- Progressively-connected Light Field Network for Efficient View Synthesis [69.29043048775802]
We present a Progressively-connected Light Field network (ProLiF) for the novel view synthesis of complex forward-facing scenes.
ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.
arXiv Detail & Related papers (2022-07-10T13:47:20Z)
- An Integrated Representation & Compression Scheme Based on Convolutional Autoencoders with 4D DCT Perceptual Encoding for High Dynamic Range Light Fields [0.30458514384586394]
The size of light field data is a major drawback for 3D display and streaming applications.
In this paper, we propose a novel compression algorithm for high dynamic range light fields.
The algorithm exploits the inter- and intra-view correlations of the HDR light field by interpreting it as a four-dimensional volume (a toy sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-06-21T06:25:06Z)
- Toward Fast, Flexible, and Robust Low-Light Image Enhancement [87.27326390675155]
We develop a new Self-Calibrated Illumination (SCI) learning framework for fast, flexible, and robust brightening of images in real-world low-light scenarios.
To reduce the computational burden of the cascaded pattern, we construct a self-calibrated module that enforces convergence between the results of each stage.
We comprehensively explore SCI's inherent properties, including operation-insensitive adaptability and model-irrelevant generality.
arXiv Detail & Related papers (2022-04-21T14:40:32Z)
- Spectral Reconstruction and Disparity from Spatio-Spectrally Coded Light Fields via Multi-Task Deep Learning [0.0]
We reconstruct a spectral central view and its aligned disparity map from spatio-spectrally coded light fields.
The coded light fields correspond to those captured by a light field camera with an unfocused design.
We achieve a high reconstruction quality for both synthetic and real-world coded light fields.
arXiv Detail & Related papers (2021-03-18T11:28:05Z)
- Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z)
- Learning Light Field Angular Super-Resolution via a Geometry-Aware Network [101.59693839475783]
We propose an end-to-end learning-based approach aiming at angularly super-resolving a sparsely-sampled light field with a large baseline.
Our method improves the PSNR over the second-best method by up to 2 dB on average, while reducing execution time by a factor of 48.
arXiv Detail & Related papers (2020-02-26T02:36:57Z)
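As referenced in the integrated representation & compression entry above, treating a light field as a four-dimensional volume lets a transform such as the 4D DCT exploit inter- and intra-view correlations. The following minimal Python sketch illustrates only this general idea, not that paper's actual HDR codec; the toy light field, the shift-based disparity model, and the 5% coefficient-keep threshold are all assumptions.

```python
# Illustration only: a light field as a 4D volume (u, v, s, t). Because
# neighbouring views are highly correlated, a 4D DCT concentrates most of the
# energy in a few coefficients, which is what makes the 4D-volume view
# attractive for compression.
import numpy as np
from scipy.fft import dctn, idctn

# Toy light field: 5x5 angular views of a 32x32 grayscale image, built as
# shifted copies of a base view to mimic small inter-view disparity.
rng = np.random.default_rng(0)
base = rng.random((32, 32))
lf = np.array([[np.roll(base, (u, v), axis=(0, 1)) for v in range(5)]
               for u in range(5)])            # shape (5, 5, 32, 32)

coeffs = dctn(lf, norm="ortho")               # 4D DCT over (u, v, s, t)
threshold = np.quantile(np.abs(coeffs), 0.95)
sparse = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)  # keep top 5%
recon = idctn(sparse, norm="ortho")

mse = np.mean((lf - recon) ** 2)
print(f"kept {np.count_nonzero(sparse) / sparse.size:.1%} of coefficients, "
      f"MSE={mse:.5f}")
```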
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.