A Generative Model for Generic Light Field Reconstruction
- URL: http://arxiv.org/abs/2005.06508v2
- Date: Wed, 17 Jun 2020 17:10:11 GMT
- Title: A Generative Model for Generic Light Field Reconstruction
- Authors: Paramanand Chandramouli, Kanchana Vaishnavi Gandikota, Andreas
Goerlitz, Andreas Kolb, Michael Moeller
- Abstract summary: We present for the first time a generative model for 4D light field patches using variational autoencoders.
We develop a generative model conditioned on the central view of the light field and incorporate this as a prior in an energy minimization framework.
Our proposed method demonstrates good reconstruction, with performance approaching end-to-end trained networks.
- Score: 15.394019131959096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep generative models have achieved impressive progress
in modeling the distribution of training data. In this work, we present for the
first time a generative model for 4D light field patches using variational
autoencoders to capture the data distribution of light field patches. We
develop a generative model conditioned on the central view of the light field
and incorporate this as a prior in an energy minimization framework to address
diverse light field reconstruction tasks. While pure learning-based approaches
do achieve excellent results on each instance of such a problem, their
applicability is limited to the specific observation model they have been
trained on. In contrast, our trained light field generative model can be
incorporated as a prior into any model-based optimization approach and
therefore extends to diverse reconstruction tasks including light field view
synthesis, spatial-angular super-resolution, and reconstruction from coded
projections. Our proposed method demonstrates good reconstruction quality, with
performance approaching that of end-to-end trained networks, while
outperforming traditional model-based approaches on both synthetic and real
scenes. Furthermore, we show that our approach enables reliable light field
recovery despite distortions in the input.
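The abstract frames reconstruction as energy minimization with a learned generative prior: instead of regressing the light field directly, one optimizes the latent code of a decoder so that the decoded patch agrees with the observations under a task-specific observation operator. A minimal sketch of that idea, where the observation operator `A`, the toy linear map `decode` standing in for the trained VAE decoder, and the prior weight `lam` are all illustrative assumptions, not the paper's actual components:

```python
import numpy as np

# Sketch of latent-space energy minimization with a generative prior.
# A (observation operator), decode (stand-in for the trained VAE decoder),
# and lam (prior weight) are hypothetical placeholders for illustration.

rng = np.random.default_rng(0)
n, m, k = 32, 16, 8                               # patch, measurement, latent dims

A = rng.standard_normal((m, n)) / np.sqrt(m)      # e.g. a coded-projection operator
W = rng.standard_normal((n, k))                   # toy linear "decoder" weights

def decode(z):
    """Stand-in for the trained VAE decoder mapping latent codes to patches."""
    return W @ z

x_true = decode(rng.standard_normal(k))           # ground-truth patch (lies on the model)
y = A @ x_true                                    # compressed observations

# Energy over the latent code: E(z) = ||A decode(z) - y||^2 + lam * ||z||^2
lam = 0.01
step = 1.0 / (np.linalg.norm(A @ W, 2) ** 2 + lam)  # step below 1/L for stability
z = np.zeros(k)
for _ in range(500):
    residual = A @ decode(z) - y
    grad = W.T @ (A.T @ residual) + lam * z       # dE/dz (constant factor folded into step)
    z -= step * grad

x_hat = decode(z)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```

Only the observation term depends on the task, so swapping `A` (view sampling, downsampling, coded projection) reuses the same prior across reconstruction problems, which is the flexibility the abstract claims over end-to-end networks tied to one observation model.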
Related papers
- Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models [54.132297393662654]
We introduce a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL.
We demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models.
arXiv Detail & Related papers (2024-05-30T03:57:29Z)
- Data-efficient Large Vision Models through Sequential Autoregression [58.26179273091461]
We develop an efficient, autoregression-based vision model on a limited dataset.
We demonstrate how this model achieves proficiency in a spectrum of visual tasks spanning both high-level and low-level semantic understanding.
Our empirical evaluations underscore the model's agility in adapting to various tasks, heralding a significant reduction in the parameter footprint.
arXiv Detail & Related papers (2024-02-07T13:41:53Z)
- RefinedFields: Radiance Fields Refinement for Unconstrained Scenes [7.421845364041002]
We propose RefinedFields, to the best of our knowledge, the first method leveraging pre-trained models to improve in-the-wild scene modeling.
We employ pre-trained networks to refine K-Planes representations via optimization guidance.
We carry out extensive experiments and verify the merit of our method on synthetic data and real tourism photo collections.
arXiv Detail & Related papers (2023-12-01T14:59:43Z)
- Discovery and Expansion of New Domains within Diffusion Models [41.25905891327446]
We study the generalization properties of diffusion models in a few-shot setup.
We introduce a novel tuning-free paradigm to synthesize the target out-of-domain data.
arXiv Detail & Related papers (2023-10-13T16:07:31Z)
- Diffusion Models for Image Restoration and Enhancement -- A Comprehensive Survey [96.99328714941657]
We present a comprehensive review of recent diffusion model-based methods on image restoration.
We classify and emphasize the innovative designs using diffusion models for both IR and blind/real-world IR.
We propose five potential and challenging directions for the future research of diffusion model-based IR.
arXiv Detail & Related papers (2023-08-18T08:40:38Z)
- TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
It builds on TensoRF, a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z)
- Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction [77.69363640021503]
3D-aware image synthesis encompasses a variety of tasks, such as scene generation and novel view synthesis from images.
We present SSDNeRF, a unified approach that employs an expressive diffusion model to learn a generalizable prior of neural radiance fields (NeRF) from multi-view images of diverse objects.
arXiv Detail & Related papers (2023-04-13T17:59:01Z)
- Towards Efficient Task-Driven Model Reprogramming with Foundation Models [52.411508216448716]
Vision foundation models exhibit impressive power, benefiting from the extremely large model capacity and broad training data.
However, in practice, downstream scenarios may only support a small model due to the limited computational resources or efficiency considerations.
This brings a critical challenge for the real-world application of foundation models: one has to transfer the knowledge of a foundation model to the downstream task.
arXiv Detail & Related papers (2023-04-05T07:28:33Z)
- Energy-Inspired Self-Supervised Pretraining for Vision Models [36.70550531181131]
We introduce a self-supervised vision model pretraining framework inspired by energy-based models (EBMs).
In the proposed framework, we model energy estimation and data restoration as the forward and backward passes of a single network.
We show the proposed method delivers comparable and even better performance with remarkably fewer epochs of training.
arXiv Detail & Related papers (2023-02-02T19:41:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.