Novel View Synthesis with View-Dependent Effects from a Single Image
- URL: http://arxiv.org/abs/2312.08071v1
- Date: Wed, 13 Dec 2023 11:29:47 GMT
- Title: Novel View Synthesis with View-Dependent Effects from a Single Image
- Authors: Juan Luis Gonzalez Bello and Munchurl Kim
- Abstract summary: We first consider view-dependent effects into single image-based novel view synthesis (NVS) problems.
We propose to exploit the camera motion priors in NVS to model view-dependent appearance or effects (VDE) as the negative disparity in the scene.
We present extensive experimental results showing that our proposed method learns NVS with VDEs, outperforming SOTA single-view NVS methods on the RealEstate10k and MannequinChallenge datasets.
- Score: 35.85973300177698
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we are the first to incorporate view-dependent effects
into single image-based novel view synthesis (NVS). To do so, we propose to
exploit the camera motion priors in NVS to model view-dependent appearance or
effects (VDE) as the negative disparity in the scene. Recognizing that
specularities "follow" the camera motion, we infuse VDEs into the input images
by aggregating input pixel colors along the negative depth region of the
epipolar lines. We also propose a "relaxed volumetric rendering" approximation
that computes the densities in a single pass, improving the efficiency of
NVS from single images. Our method can learn single-image NVS from image
sequences alone, making it completely self-supervised; it is the first to
require neither depth nor camera pose annotations. We present extensive
experimental results showing that our method learns NVS with VDEs and
outperforms SOTA single-view NVS methods on the RealEstate10k
and MannequinChallenge datasets.
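The epipolar aggregation described above lends itself to a short sketch. The snippet below is a minimal, illustrative PyTorch sketch of averaging source-image colors along the negative-disparity side of the epipolar lines to infuse view-dependent effects; the function name, the uniform disparity sweep, the plain averaging, and the restriction to in-plane camera translation are all our assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: infuse view-dependent effects (VDEs) by sampling colors
# along the *negative-disparity* side of the epipolar lines. All names and
# design choices here are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F


def aggregate_negative_disparity(src_img, t_xy, num_samples=8, max_disp=0.05):
    """Average colors sampled at negative disparities along epipolar lines.

    src_img : (B, 3, H, W) source image in [0, 1]
    t_xy    : (B, 2) normalized in-plane camera translation; we assume the
              epipolar lines are approximately parallel to t_xy in the image.
    """
    B, _, H, W = src_img.shape
    ys = torch.linspace(-1.0, 1.0, H, device=src_img.device)
    xs = torch.linspace(-1.0, 1.0, W, device=src_img.device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    base = torch.stack([gx, gy], dim=-1).expand(B, H, W, 2)  # sampling grid

    acc = torch.zeros_like(src_img)
    for i in range(num_samples):
        # Negative disparities: step against the camera motion direction,
        # which is where specular highlights appear to "come from".
        d = -max_disp * (i + 1) / num_samples
        offset = d * t_xy.view(B, 1, 1, 2)
        acc += F.grid_sample(src_img, base + offset,
                             align_corners=True, padding_mode="border")
    return acc / num_samples  # VDE-infused image, (B, 3, H, W)
```

For example, `aggregate_negative_disparity(img, torch.tensor([[1.0, 0.0]]))` sweeps against a rightward camera translation. In the paper the aggregated colors are used to infuse VDEs into the input images; the plain averaging here is a standalone simplification.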
Related papers
- NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer [48.57740681957145]
We propose a new novel view synthesis (NVS) paradigm that operates *without* the need for training.
NVS-Solver adaptively modulates the diffusion sampling process with the given views to enable the creation of remarkable visual experiences.
arXiv Detail & Related papers (2024-05-24T08:56:19Z)
- Novel View Synthesis from a Single RGBD Image for Indoor Scenes [4.292698270662031]
We propose an approach for synthesizing novel view images from a single RGBD (Red Green Blue-Depth) input.
In our method, we convert an RGBD image into a point cloud and render it from a different viewpoint, then formulate the NVS task into an image translation problem.
arXiv Detail & Related papers (2023-11-02T08:34:07Z)
- TOSS: High-quality Text-guided Novel View Synthesis from a Single Image [36.90122394242858]
We present TOSS, which introduces text to the task of novel view synthesis (NVS) from just a single RGB image.
Because single-image NVS is severely underconstrained, TOSS uses text as high-level semantic information to constrain the NVS solution space.
arXiv Detail & Related papers (2023-10-16T17:59:09Z)
- S-VolSDF: Sparse Multi-View Stereo Regularization of Neural Implicit Surfaces [75.30792581941789]
Neural rendering of implicit surfaces performs well in 3D vision applications.
When only sparse input images are available, output quality drops significantly due to the shape-radiance ambiguity problem.
We propose to regularize neural rendering optimization with an MVS solution.
arXiv Detail & Related papers (2023-03-30T21:10:58Z)
- Enhanced Stable View Synthesis [86.69338893753886]
We introduce an approach to enhance the novel view synthesis from images taken from a freely moving camera.
The introduced approach focuses on outdoor scenes, where recovering an accurate geometric scaffold and camera poses is challenging.
arXiv Detail & Related papers (2023-03-30T01:53:14Z)
- Perceptual Quality Assessment of NeRF and Neural View Synthesis Methods for Front-Facing Views [10.565297375544414]
We present the first study on perceptual evaluation of NVS and NeRF variants.
We measured the quality of videos synthesized by several NVS methods in a well-controlled perceptual quality assessment experiment.
arXiv Detail & Related papers (2023-03-24T11:53:48Z)
- NPBG++: Accelerating Neural Point-Based Graphics [14.366073496519139]
NPBG++ is a novel view synthesis (NVS) system that achieves high rendering realism with low scene fitting time.
Our method efficiently leverages the multiview observations and the point cloud of a static scene to predict a neural descriptor for each point.
In our comparisons, the proposed system outperforms previous NVS approaches in terms of fitting and rendering runtimes while producing images of similar quality.
arXiv Detail & Related papers (2022-03-24T19:59:39Z)
- Point-Based Neural Rendering with Per-View Optimization [5.306819482496464]
We introduce a general approach that is initialized with MVS, but allows further optimization of scene properties in the space of input views.
A key element of our approach is our new differentiable point-based pipeline.
We use these elements together in our neural splatting approach, which outperforms all previous methods in both quality and speed in almost all scenes we tested.
arXiv Detail & Related papers (2021-09-06T11:19:31Z)
- Single-View View Synthesis with Multiplane Images [64.46556656209769]
Prior work uses deep learning to generate multiplane images given two or more input images at known viewpoints.
Our method instead learns to predict a multiplane image directly from a single image input.
It additionally generates reasonable depth maps and fills in content behind the edges of foreground objects in background layers.
arXiv Detail & Related papers (2020-04-23T17:59:19Z)
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [78.5281048849446]
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes.
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses.
arXiv Detail & Related papers (2020-03-19T17:57:23Z)
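As the NeRF summary above notes, every step of volume rendering is differentiable, which is what lets a set of posed images serve as the only supervision. Below is a minimal sketch of the standard NeRF-style quadrature; shapes and names are our own illustrative assumptions, and the main paper's "relaxed" variant approximates this so that densities come out in a single pass.

```python
# Minimal sketch of the differentiable volume rendering quadrature that
# NeRF-style methods build on. Shapes and names are illustrative assumptions.
import torch


def volume_render(rgb, sigma, deltas):
    """Differentiably composite per-sample colors along rays.

    rgb    : (R, S, 3) per-sample colors for R rays with S samples each
    sigma  : (R, S)    per-sample densities (non-negative)
    deltas : (R, S)    distances between consecutive samples
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)  # per-sample opacity
    # Transmittance: probability that the ray reaches each sample unoccluded.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1),
        dim=1,
    )[:, :-1]
    weights = trans * alpha                   # contribution of each sample
    return (weights.unsqueeze(-1) * rgb).sum(dim=1)  # (R, 3) rendered colors
```

The same weights, multiplied by sample depths instead of colors, yield the expected depth along each ray, which is why such renderers produce depth maps essentially for free.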