Spectral Reconstruction and Disparity from Spatio-Spectrally Coded Light Fields via Multi-Task Deep Learning
- URL: http://arxiv.org/abs/2103.10179v1
- Date: Thu, 18 Mar 2021 11:28:05 GMT
- Title: Spectral Reconstruction and Disparity from Spatio-Spectrally Coded Light Fields via Multi-Task Deep Learning
- Authors: Maximilian Schambach, Jiayang Shi, Michael Heizmann
- Abstract summary: We reconstruct a spectral central view and its aligned disparity map from spatio-spectrally coded light fields.
The coded light fields correspond to those captured by a light field camera in the unfocused design.
We achieve a high reconstruction quality for both synthetic and real-world coded light fields.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel method to reconstruct a spectral central view and its
aligned disparity map from spatio-spectrally coded light fields. Since we do
not reconstruct an intermediate full light field from the coded measurement, we
refer to this as principal reconstruction. The coded light fields correspond to
those captured by a light field camera in the unfocused design with a
spectrally coded microlens array. In this application, the spectrally coded
light field camera can be interpreted as a single-shot spectral depth camera.
We investigate several multi-task deep learning methods and propose a new
auxiliary loss-based training strategy to enhance the reconstruction
performance. The results are evaluated using a synthetic as well as a new
real-world spectral light field dataset that we captured using a custom-built
camera. The results are compared to state-of-the-art compressed sensing
reconstruction and disparity estimation.
We achieve a high reconstruction quality for both synthetic and real-world
coded light fields. The disparity estimation quality is on par with or even
outperforms state-of-the-art disparity estimation from uncoded RGB light
fields.
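The abstract describes a multi-task network trained with an auxiliary loss-based strategy, combining spectral reconstruction and disparity estimation objectives. As a minimal sketch of how such a weighted multi-task training objective can be composed (the function name, loss choice (L1), and weights are illustrative assumptions, not the authors' actual formulation):

```python
import numpy as np

def multi_task_loss(spectral_pred, spectral_true,
                    disp_pred, disp_true,
                    aux_pred=None, aux_true=None,
                    w_spectral=1.0, w_disp=1.0, w_aux=0.1):
    """Weighted sum of per-task L1 losses (illustrative sketch).

    The auxiliary term is down-weighted via w_aux and contributes only
    during training; at inference the auxiliary head would be discarded.
    """
    # Main task 1: spectral central view reconstruction
    loss = w_spectral * np.mean(np.abs(spectral_pred - spectral_true))
    # Main task 2: aligned disparity map estimation
    loss += w_disp * np.mean(np.abs(disp_pred - disp_true))
    # Optional auxiliary task, e.g. an intermediate reconstruction target
    if aux_pred is not None:
        loss += w_aux * np.mean(np.abs(aux_pred - aux_true))
    return loss
```

In practice the task weights would be tuned (or learned, e.g. via uncertainty weighting) rather than fixed as here.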
Related papers
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Learning Kernel-Modulated Neural Representation for Efficient Light Field Compression [41.24757573290883]
We design a compact neural network representation for the light field compression task.
It is composed of two types of complementary kernels: descriptive kernels (descriptors) that store scene description information learned during training, and modulatory kernels (modulators) that control the rendering of different SAIs from the queried perspectives.
arXiv Detail & Related papers (2023-07-12T12:58:03Z)
- Detail-Preserving Transformer for Light Field Image Super-Resolution [15.53525700552796]
We put forth a novel formulation built upon Transformers, by treating light field super-resolution as a sequence-to-sequence reconstruction task.
We propose a detail-preserving Transformer (termed DPT) that leverages gradient maps of the light field to guide sequence learning.
DPT consists of two branches, each associated with a Transformer for learning from an original or gradient image sequence.
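The DPT summary above mentions a branch driven by gradient maps of the light field views. A minimal sketch of how such a per-view gradient-magnitude map could be computed as the auxiliary input sequence (the function name is an assumption; the paper's exact gradient operator may differ):

```python
import numpy as np

def gradient_map(view):
    """Gradient-magnitude map of a single light field view (sketch).

    np.gradient returns derivatives along axis 0 (rows) and axis 1
    (columns); their Euclidean norm highlights edges and fine detail.
    """
    gy, gx = np.gradient(view.astype(float))
    return np.sqrt(gx ** 2 + gy ** 2)
```

Each view's gradient map would then be fed to the gradient branch, while the original views feed the other branch.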
arXiv Detail & Related papers (2022-01-02T12:33:23Z)
- Learning-Based Practical Light Field Image Compression Using A Disparity-Aware Model [1.5229257192293197]
We propose a new learning-based, disparity-aided model for compression of 4D light field images.
The model is end-to-end trainable, eliminating the need for hand-tuning separate modules.
Comparisons with the state of the art show encouraging performance in terms of PSNR and MS-SSIM metrics.
arXiv Detail & Related papers (2021-06-22T06:30:25Z)
- Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications [78.63280020581662]
A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-24T08:16:32Z)
- Crowdsampling the Plenoptic Function [56.10020793913216]
We present a new approach to novel view synthesis under time-varying illumination from such data.
We introduce a new DeepMPI representation, motivated by observations on the sparsity structure of the plenoptic function.
Our method can synthesize the same compelling parallax and view-dependent effects as previous MPI methods, while simultaneously interpolating along changes in reflectance and illumination with time.
arXiv Detail & Related papers (2020-07-30T02:52:10Z)
- Light Field Spatial Super-resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensional spatiality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
- Learning Light Field Angular Super-Resolution via a Geometry-Aware Network [101.59693839475783]
We propose an end-to-end learning-based approach aiming at angularly super-resolving a sparsely-sampled light field with a large baseline.
Our method improves PSNR over the second-best method by up to 2 dB on average, while reducing execution time by a factor of 48.
arXiv Detail & Related papers (2020-02-26T02:36:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.