Learning light field synthesis with Multi-Plane Images: scene encoding as a recurrent segmentation task
- URL: http://arxiv.org/abs/2002.05028v3
- Date: Tue, 19 May 2020 11:25:09 GMT
- Title: Learning light field synthesis with Multi-Plane Images: scene encoding as a recurrent segmentation task
- Authors: Tomás Völker, Guillaume Boisson, Bertrand Chupeau
- Abstract summary: This paper addresses the problem of view synthesis from large baseline light fields by turning a sparse set of input views into a Multi-plane Image (MPI).
Because available datasets are scarce, we propose a lightweight network that does not require extensive training.
Our model does not learn to estimate RGB layers but only encodes the scene geometry within MPI alpha layers, which comes down to a segmentation task.
- Score: 30.058283056074426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we address the problem of view synthesis from large baseline
light fields, by turning a sparse set of input views into a Multi-plane Image
(MPI). Because available datasets are scarce, we propose a lightweight network
that does not require extensive training. Unlike the latest approaches, our model
does not learn to estimate RGB layers but only encodes the scene geometry
within MPI alpha layers, which comes down to a segmentation task. A Learned
Gradient Descent (LGD) framework is used to cascade the same convolutional
network in a recurrent fashion in order to refine the volumetric representation
obtained. Thanks to its low number of parameters, our model trains successfully
on a small light field video dataset and provides visually appealing results.
It also exhibits convenient generalization properties regarding the number of
input views, the number of depth planes in the MPI, and the number of
refinement iterations.
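To make the two ideas concrete, here is a minimal sketch in Python, assuming only what the abstract states: a view is rendered from an MPI by standard back-to-front "over" compositing, and refinement applies the same network recurrently. The callable `refine_net` and its signature are hypothetical stand-ins for the paper's lightweight CNN, not the authors' actual architecture.

```python
import numpy as np

def composite_mpi(rgb_layers, alpha_layers):
    """Render a view from an MPI by back-to-front "over" compositing.

    rgb_layers:   (D, H, W, 3) per-plane colors (the paper's model does not
                  predict these; it only estimates the alpha layers)
    alpha_layers: (D, H, W, 1) per-plane opacities in [0, 1]
    """
    out = np.zeros_like(rgb_layers[0])
    for rgb, alpha in zip(rgb_layers, alpha_layers):  # back to front
        out = alpha * rgb + (1.0 - alpha) * out
    return out

def lgd_refine(alpha, views, refine_net, n_iters=3):
    """LGD-style refinement: cascade one network recurrently.

    refine_net(alpha, views) -> additive update to the alpha volume
    (a hypothetical signature; the paper's exact update rule may differ).
    """
    for _ in range(n_iters):
        alpha = np.clip(alpha + refine_net(alpha, views), 0.0, 1.0)
    return alpha
```

Because the same weights are reused at every pass, the parameter count is independent of the number of iterations, input views, and depth planes, which is consistent with the generalization properties claimed above.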
Related papers
- MAIR++: Improving Multi-view Attention Inverse Rendering with Implicit Lighting Representation [17.133440382384578]
A framework called Multi-view Attention Inverse Rendering (MAIR) was recently introduced to improve the quality of scene-level inverse rendering.
We propose a scene-level inverse rendering framework that uses multi-view images to decompose the scene into geometry, SVBRDF, and 3D spatially-varying lighting.
arXiv Detail & Related papers (2024-08-13T08:04:23Z)
- MuRF: Multi-Baseline Radiance Fields [117.55811938988256]
We present Multi-Baseline Radiance Fields (MuRF), a feed-forward approach to sparse view synthesis.
MuRF achieves state-of-the-art performance across multiple different baseline settings.
We also show promising zero-shot generalization abilities on the Mip-NeRF 360 dataset.
arXiv Detail & Related papers (2023-12-07T18:59:56Z)
- ClusVPR: Efficient Visual Place Recognition with Clustering-based Weighted Transformer [13.0858576267115]
We present ClusVPR, a novel approach that tackles the specific issues of redundant information in duplicate regions and representations of small objects.
ClusVPR introduces a unique paradigm called Clustering-based weighted Transformer Network (CWTNet)
We also introduce the optimized-VLAD layer that significantly reduces the number of parameters and enhances model efficiency.
arXiv Detail & Related papers (2023-10-06T09:01:15Z)
- SAMPLING: Scene-adaptive Hierarchical Multiplane Images Representation for Novel View Synthesis from a Single Image [60.52991173059486]
We introduce SAMPLING, a Scene-adaptive Hierarchical Multiplane Images Representation for Novel View Synthesis from a Single Image.
Our method demonstrates considerable performance gains in large-scale unbounded outdoor scenes using a single image on the KITTI dataset.
arXiv Detail & Related papers (2023-09-12T15:33:09Z)
- Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering [84.37776381343662]
Mip-NeRF proposes a multiscale representation that encodes scale information by casting conical frustums instead of rays.
We propose mip voxel grids (Mip-VoG), an explicit multiscale representation for real-time anti-aliasing rendering.
Our approach is the first to offer multiscale training and real-time anti-aliasing rendering simultaneously.
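As a rough sketch of the mipmapping idea behind an explicit multiscale grid (illustrative only; the names and lookup scheme below are assumptions, not Mip-VoG's implementation):

```python
import numpy as np

def build_mip_pyramid(grid, levels):
    """grid: (N, N, N) voxel values with N a power of two."""
    pyramid = [grid]
    for _ in range(levels - 1):
        g = pyramid[-1]
        n = g.shape[0] // 2
        # 2x2x2 average pooling: low-pass filter + downsample = coarser level
        pyramid.append(g.reshape(n, 2, n, 2, n, 2).mean(axis=(1, 3, 5)))
    return pyramid

def sample_mip(pyramid, idx, footprint, base_voxel_size):
    """Nearest-voxel lookup blended between the two bracketing mip levels.

    idx: (i, j, k) voxel index at the finest level; footprint: world-space
    extent of the sample (e.g. derived from the pixel's cone radius).
    """
    level = np.clip(np.log2(footprint / base_voxel_size), 0, len(pyramid) - 1)
    lo, hi = int(np.floor(level)), int(np.ceil(level))
    w = level - lo
    i = np.asarray(idx)
    return (1 - w) * pyramid[lo][tuple(i // 2**lo)] \
         + w * pyramid[hi][tuple(i // 2**hi)]
```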
arXiv Detail & Related papers (2023-04-20T04:05:22Z)
- DeLiRa: Self-Supervised Depth, Light, and Radiance Fields [32.350984950639656]
Differentiable volumetric rendering is a powerful paradigm for 3D reconstruction and novel view synthesis.
Standard volume rendering approaches struggle with degenerate geometries in the case of limited viewpoint diversity.
In this work, we propose to use the multi-view photometric objective as a geometric regularizer for volumetric rendering.
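To show what such an objective typically looks like, below is a generic monodepth-style photometric warping loss in PyTorch; it is a sketch of the general technique, not DeLiRa's exact formulation, and the tensor conventions are assumptions.

```python
import torch
import torch.nn.functional as F

def photometric_loss(src_img, tgt_img, tgt_depth, K, K_inv, T_tgt_to_src):
    """Warp a source view into the target view through predicted depth,
    then penalize the color difference (L1).

    src_img, tgt_img: (B, 3, H, W); tgt_depth: (B, 1, H, W)
    K, K_inv: (B, 3, 3) intrinsics; T_tgt_to_src: (B, 4, 4) relative pose
    """
    B, _, H, W = tgt_img.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()
    pix = pix.view(1, 3, -1).expand(B, -1, -1)              # (B, 3, H*W)
    cam = (K_inv @ pix) * tgt_depth.view(B, 1, -1)          # back-project
    cam = torch.cat([cam, torch.ones_like(cam[:, :1])], 1)  # homogeneous
    src = (T_tgt_to_src @ cam)[:, :3]                       # move to src frame
    src = K @ src                                           # project
    src = src[:, :2] / src[:, 2:].clamp(min=1e-6)
    grid = torch.stack([src[:, 0] / (W - 1) * 2 - 1,        # to [-1, 1]
                        src[:, 1] / (H - 1) * 2 - 1], dim=-1).view(B, H, W, 2)
    warped = F.grid_sample(src_img, grid, align_corners=True)
    return (warped - tgt_img).abs().mean()
```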
arXiv Detail & Related papers (2023-04-06T00:16:25Z)
- Progressively-connected Light Field Network for Efficient View Synthesis [69.29043048775802]
We present a Progressively-connected Light Field network (ProLiF) for the novel view synthesis of complex forward-facing scenes.
ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.
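To illustrate why a light-field representation is cheap to render (one network query per ray, no per-ray depth sampling), here is a minimal two-plane (u, v, s, t) sketch; the MLP is an illustrative stand-in, not ProLiF's progressively-connected network.

```python
import torch

def rays_to_uvst(origins, dirs, z_uv=0.0, z_st=1.0):
    """Two-plane parameterization: intersect each ray with z=z_uv and z=z_st."""
    t_uv = (z_uv - origins[:, 2]) / dirs[:, 2]
    t_st = (z_st - origins[:, 2]) / dirs[:, 2]
    uv = origins[:, :2] + t_uv[:, None] * dirs[:, :2]
    st = origins[:, :2] + t_st[:, None] * dirs[:, :2]
    return torch.cat([uv, st], dim=-1)                  # (N, 4)

lightfield = torch.nn.Sequential(                       # illustrative MLP
    torch.nn.Linear(4, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 3),
)
origins = torch.zeros(4096, 3)                          # toy forward-facing rays
dirs = torch.randn(4096, 3)
dirs[:, 2] = dirs[:, 2].abs() + 1.0
colors = lightfield(rays_to_uvst(origins, dirs))        # whole batch, one pass
```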
arXiv Detail & Related papers (2022-07-10T13:47:20Z)
- Modeling Image Composition for Complex Scene Generation [77.10533862854706]
We present a method that achieves state-of-the-art results on layout-to-image generation tasks.
After compressing RGB images into patch tokens, we propose the Transformer with Focal Attention (TwFA) for exploring dependencies of object-to-object, object-to-patch and patch-to-patch.
arXiv Detail & Related papers (2022-06-02T08:34:25Z)
- Single-View View Synthesis in the Wild with Learned Adaptive Multiplane Images [15.614631883233898]
Existing methods have shown promising results leveraging monocular depth estimation and color inpainting with layered depth representations.
We propose a new method based on the multiplane image (MPI) representation.
The experiments on both synthetic and real datasets demonstrate that our trained model works remarkably well and achieves state-of-the-art results.
arXiv Detail & Related papers (2022-05-24T02:57:16Z)
- Detail-Preserving Transformer for Light Field Image Super-Resolution [15.53525700552796]
We put forth a novel formulation built upon Transformers, by treating light field super-resolution as a sequence-to-sequence reconstruction task.
We propose a detail-preserving Transformer (termed as DPT), by leveraging gradient maps of light field to guide the sequence learning.
DPT consists of two branches, with each associated with a Transformer for learning from an original or gradient image sequence.
arXiv Detail & Related papers (2022-01-02T12:33:23Z)
- Scalable Visual Transformers with Hierarchical Pooling [61.05787583247392]
We propose a Hierarchical Visual Transformer (HVT) which progressively pools visual tokens to shrink the sequence length.
This makes it possible to scale the model's depth, width, resolution, and patch size without introducing extra computational complexity.
Our HVT outperforms the competitive baselines on ImageNet and CIFAR-100 datasets.
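The pooling mechanism is easy to sketch: downsample the token sequence between transformer blocks so later blocks attend over fewer tokens. A minimal PyTorch illustration (not HVT's exact block) follows.

```python
import torch
import torch.nn as nn

class PooledStage(nn.Module):
    """One transformer block followed by 1D token pooling (sequence halves)."""
    def __init__(self, dim, nhead=8):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(dim, nhead, batch_first=True)
        self.pool = nn.MaxPool1d(kernel_size=2, stride=2)

    def forward(self, tokens):                          # (B, N, C)
        tokens = self.block(tokens)
        tokens = self.pool(tokens.transpose(1, 2))      # pool over sequence dim
        return tokens.transpose(1, 2)                   # (B, N // 2, C)

stages = nn.Sequential(PooledStage(192), PooledStage(192))
out = stages(torch.randn(2, 196, 192))                  # 196 -> 98 -> 49 tokens
```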
arXiv Detail & Related papers (2021-03-19T03:55:58Z)