Learning Kernel-Modulated Neural Representation for Efficient Light Field Compression
- URL: http://arxiv.org/abs/2307.06143v1
- Date: Wed, 12 Jul 2023 12:58:03 GMT
- Title: Learning Kernel-Modulated Neural Representation for Efficient Light Field Compression
- Authors: Jinglei Shi and Yihong Xu and Christine Guillemot
- Abstract summary: We design a compact neural network representation for the light field compression task.
It is composed of two types of complementary kernels: descriptive kernels (descriptors) that store scene description information learned during training, and modulatory kernels (modulators) that control the rendering of different SAIs from the queried perspectives.
- Score: 41.24757573290883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A light field is a type of image data that captures 3D scene
information by recording light rays emitted from a scene at various
orientations. It offers a
more immersive perception than classic 2D images but at the cost of huge data
volume. In this paper, we draw inspiration from the visual characteristics of
Sub-Aperture Images (SAIs) of light field and design a compact neural network
representation for the light field compression task. The network backbone takes
randomly initialized noise as input and is supervised on the SAIs of the target
light field. It is composed of two types of complementary kernels: descriptive
kernels (descriptors) that store scene description information learned during
training, and modulatory kernels (modulators) that control the rendering of
different SAIs from the queried perspectives. To further enhance the
compactness of the network while retaining the high quality of the decoded light field, we
accordingly introduce modulator allocation and kernel tensor decomposition
mechanisms, followed by non-uniform quantization and lossless entropy coding
techniques, to finally form an efficient compression pipeline. Extensive
experiments demonstrate that our method outperforms other state-of-the-art
(SOTA) methods by a significant margin in the light field compression task.
Moreover, after aligning descriptors, the modulators learned from one light
field can be transferred to new light fields for rendering dense views,
indicating a potential solution for the view synthesis task.
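The descriptor/modulator split described in the abstract can be illustrated with a minimal sketch. All shapes, names, and the choice of channel-wise multiplicative modulation below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared descriptor kernels: store scene content once (hypothetical shapes).
n_kernels, k = 8, 3
descriptors = rng.standard_normal((n_kernels, k, k))

# One modulator per sub-aperture view: a small per-kernel scaling that
# steers rendering toward that viewpoint (illustrative choice: one scalar
# per descriptor kernel, broadcast over the spatial dimensions).
n_views = 4
modulators = rng.standard_normal((n_views, n_kernels, 1, 1))

def view_kernels(view_idx):
    """Combine the shared descriptors with the modulator of one view."""
    return descriptors * modulators[view_idx]

# Each view gets its own effective kernels, while the descriptors
# (the bulk of the parameters) are stored only once.
assert view_kernels(0).shape == (n_kernels, k, k)
assert not np.allclose(view_kernels(0), view_kernels(1))
```

The compactness argument follows from the parameter count: the per-view cost is only the small modulator, which also makes the transfer experiment in the abstract (reusing modulators with aligned descriptors) plausible.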
Related papers
- Light Field Compression Based on Implicit Neural Representation [10.320292226135306]
We propose a novel light field compression scheme based on implicit neural representation to reduce redundancies between views.
We store the information of a light field image implicitly in a neural network and adopt model compression methods to further compress the implicit representation.
arXiv Detail & Related papers (2024-05-07T12:17:46Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because exponential operation introduces high computational complexity, we propose to use Taylor Series to approximate gamma correction.
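The Taylor-series idea mentioned above can be sketched generically: gamma correction x^γ equals exp(γ·ln x), and the exponential can be replaced by a truncated Taylor expansion. This is a standard approximation, not the paper's exact formulation, and the truncation depth is an arbitrary choice:

```python
import math

def gamma_taylor(x, gamma, terms=12):
    """Approximate x**gamma = exp(gamma * ln x) by truncating the Taylor
    series of exp: sum_{n=0}^{terms-1} u**n / n! with u = gamma * ln x
    (requires x > 0)."""
    u = gamma * math.log(x)
    return sum(u ** n / math.factorial(n) for n in range(terms))

# The truncated series is close to the exact power for mid-range intensities.
exact = 0.5 ** 2.2
approx = gamma_taylor(0.5, 2.2)
assert abs(exact - approx) < 1e-5
```

The appeal for a deep network is that the truncated sum uses only multiplications and additions, avoiding the exponential operation the summary flags as computationally expensive.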
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Multi-Plane Neural Radiance Fields for Novel View Synthesis [5.478764356647437]
Novel view synthesis is a long-standing problem that revolves around rendering frames of scenes from novel camera viewpoints.
In this work, we examine the performance, generalization, and efficiency of single-view multi-plane neural radiance fields.
We propose a new multiplane NeRF architecture that accepts multiple views to improve the synthesis results and expand the viewing range.
arXiv Detail & Related papers (2023-03-03T06:32:55Z)
- Scattering-induced entropy boost for highly-compressed optical sensing and encryption [7.502671257653539]
Image sensing often relies on a high-quality machine vision system with a large field of view and high resolution.
We propose a novel image-free sensing framework for resource-efficient image classification.
The proposed framework is shown to obtain over a 95% accuracy at sampling rates of 1% and 5% for classification on the MNIST dataset.
arXiv Detail & Related papers (2022-12-16T09:00:42Z)
- Progressively-connected Light Field Network for Efficient View Synthesis [69.29043048775802]
We present a Progressively-connected Light Field network (ProLiF) for the novel view synthesis of complex forward-facing scenes.
ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.
arXiv Detail & Related papers (2022-07-10T13:47:20Z)
- Learning-Based Practical Light Field Image Compression Using A Disparity-Aware Model [1.5229257192293197]
We propose a new learning-based, disparity-aided model for compression of 4D light field images.
The model is end-to-end trainable, eliminating the need for hand-tuning separate modules.
Comparisons with the state of the art show encouraging performance in terms of PSNR and MS-SSIM metrics.
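PSNR, one of the two metrics named above, has a standard closed form, 10·log10(peak² / MSE), that can be computed with a short helper. This is the generic definition, not code from the paper; the image shape and noise level are illustrative:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak**2 / MSE)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.random((16, 16))  # reference "image" with intensities in [0, 1)
noisy = np.clip(ref + 0.01 * rng.standard_normal(ref.shape), 0.0, 1.0)
score = psnr(ref, noisy)    # roughly 40 dB for this noise level
```

MS-SSIM, the other metric, is structural rather than pixel-wise and is usually taken from a library implementation rather than written from scratch.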
arXiv Detail & Related papers (2021-06-22T06:30:25Z)
- Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications [78.63280020581662]
A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-24T08:16:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.