Revisiting Light Field Rendering with Deep Anti-Aliasing Neural Network
- URL: http://arxiv.org/abs/2104.06797v1
- Date: Wed, 14 Apr 2021 12:03:25 GMT
- Title: Revisiting Light Field Rendering with Deep Anti-Aliasing Neural Network
- Authors: Gaochang Wu, Yebin Liu, Lu Fang, Tianyou Chai
- Abstract summary: In this paper, we revisit the classic LF rendering framework to address both challenges by combining it with advanced deep learning techniques.
First, we analytically show that the essential issue behind the large disparity and non-Lambertian challenges is the aliasing problem.
We introduce an alternative framework to perform anti-aliasing reconstruction in the image domain and analytically show comparable efficacy on the aliasing issue.
- Score: 51.90655635745856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Light field (LF) reconstruction faces two main challenges:
large disparity and the non-Lambertian effect. Typical approaches either
address the large disparity challenge using depth estimation followed by view
synthesis or eschew explicit depth information to enable non-Lambertian
rendering, but rarely solve both challenges in a unified framework. In this
paper, we revisit the classic LF rendering framework to address both challenges
by combining it with advanced deep learning techniques. First, we
analytically show that the essential issue behind the large disparity and
non-Lambertian challenges is the aliasing problem. Classic LF rendering
approaches typically mitigate the aliasing with a reconstruction filter in the
Fourier domain, which is, however, intractable to implement within a deep
learning pipeline. Instead, we introduce an alternative framework to perform
anti-aliasing reconstruction in the image domain and analytically show
comparable efficacy on the aliasing issue. To explore the full potential, we
then embed the anti-aliasing framework into a deep neural network through the
design of an integrated architecture and trainable parameters. The network is
trained through end-to-end optimization on a dedicated training set comprising
both regular and unstructured LFs. The proposed deep learning pipeline
shows a substantial superiority in solving both the large disparity and the
non-Lambertian challenges compared with other state-of-the-art approaches. In
addition to view interpolation for an LF, we also show that the proposed
pipeline benefits light field view extrapolation.
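The classic image-domain rendering scheme the abstract builds on can be illustrated with a shift-and-average (shear) sketch: each sub-aperture image is shifted in proportion to its angular offset and an assumed disparity, then all shifted images are averaged. When the angular sampling is too sparse for the true disparity range, the average ghosts, which is exactly the aliasing discussed above. This is a minimal illustrative sketch (grayscale 4D LF array, integer-pixel shifts, hypothetical function name), not the paper's network.

```python
import numpy as np

def render_novel_view(lf, s, t, disparity):
    """Classic shift-and-average light field rendering (illustrative).

    lf: array of shape (U, V, H, W) holding grayscale sub-aperture images.
    (s, t): angular position of the desired novel view.
    disparity: assumed per-view-step scene disparity in pixels.
    """
    U, V, H, W = lf.shape
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view toward the target position; a real renderer
            # would use sub-pixel interpolation rather than integer rolls.
            du = int(round((u - s) * disparity))
            dv = int(round((v - t) * disparity))
            acc += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    # Averaging acts as the (image-domain) reconstruction filter; with
    # sparse angular sampling the residual misalignment shows as ghosting.
    return acc / (U * V)
```

With dense angular sampling (or a correct disparity), the shifted views align and the average is sharp; otherwise, suppressing the ghosting requires the kind of anti-aliasing reconstruction the paper proposes to learn.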
Related papers
- Drantal-NeRF: Diffusion-Based Restoration for Anti-aliasing Neural Radiance Field [10.225323718645022]
Aliasing artifacts in renderings produced by Neural Radiance Field (NeRF) are a long-standing but complex issue.
We present a Diffusion-based restoration method for anti-aliasing Neural Radiance Field (Drantal-NeRF)
arXiv Detail & Related papers (2024-07-10T08:32:13Z)
- Diffusion-based Light Field Synthesis [50.24624071354433]
LFdiff is a diffusion-based generative framework tailored for LF synthesis.
We propose DistgUnet, a disentanglement-based noise estimation network, to harness comprehensive LF representations.
Extensive experiments demonstrate that LFdiff excels in synthesizing visually pleasing and disparity-controllable light fields.
arXiv Detail & Related papers (2024-02-01T13:13:16Z)
- Progressively-connected Light Field Network for Efficient View Synthesis [69.29043048775802]
We present a Progressively-connected Light Field network (ProLiF) for the novel view synthesis of complex forward-facing scenes.
ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.
arXiv Detail & Related papers (2022-07-10T13:47:20Z)
- Light Field Reconstruction via Deep Adaptive Fusion of Hybrid Lenses [67.01164492518481]
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses.
We propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input.
Our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
arXiv Detail & Related papers (2021-02-14T06:44:47Z)
- Deep Selective Combinatorial Embedding and Consistency Regularization for Light Field Super-resolution [93.95828097088608]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensionality characteristic and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF spatial SR framework to explore the coherence among LF sub-aperture images.
Experimental results over both synthetic and real-world LF datasets demonstrate the significant advantage of our approach over state-of-the-art methods.
arXiv Detail & Related papers (2020-09-26T08:34:37Z)
- Light Field Spatial Super-resolution via Deep Combinatorial Geometry Embedding and Structural Consistency Regularization [99.96632216070718]
Light field (LF) images acquired by hand-held devices usually suffer from low spatial resolution.
The high-dimensional spatial characteristics and complex geometrical structure of LF images make the problem more challenging than traditional single-image SR.
We propose a novel learning-based LF framework, in which each view of an LF image is first individually super-resolved.
arXiv Detail & Related papers (2020-04-05T14:39:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.