Semantic View Synthesis
- URL: http://arxiv.org/abs/2008.10598v1
- Date: Mon, 24 Aug 2020 17:59:46 GMT
- Title: Semantic View Synthesis
- Authors: Hsin-Ping Huang, Hung-Yu Tseng, Hsin-Ying Lee, Jia-Bin Huang
- Abstract summary: We tackle a new problem of semantic view synthesis -- generating free-viewpoint rendering of a synthesized scene using a semantic label map as input.
First, we focus on synthesizing the color and depth of the visible surface of the 3D scene.
We then use the synthesized color and depth to impose explicit constraints on the multiple-plane image (MPI) representation prediction process.
- Score: 56.47999473206778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We tackle a new problem of semantic view synthesis -- generating
free-viewpoint rendering of a synthesized scene using a semantic label map as
input. We build upon recent advances in semantic image synthesis and view
synthesis for handling photographic image content generation and view
extrapolation. Direct application of existing image/view synthesis methods,
however, results in severe ghosting/blurry artifacts. To address the drawbacks,
we propose a two-step approach. First, we focus on synthesizing the color and
depth of the visible surface of the 3D scene. We then use the synthesized color
and depth to impose explicit constraints on the multiple-plane image (MPI)
representation prediction process. Our method produces sharp contents at the
original view and geometrically consistent renderings across novel viewpoints.
The experiments on numerous indoor and outdoor images show favorable results
against several strong baselines and validate the effectiveness of our
approach.
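Since the abstract centers on the multiple-plane image (MPI) representation, here is a minimal sketch of the generic MPI rendering step it refers to. This is illustrative only, not code from the paper: the function name, the shared-intrinsics assumption, and the fronto-parallel plane convention are ours. Each RGBA plane is warped into the target view with its plane-induced homography and the warped planes are over-composited back to front.

```python
import cv2
import numpy as np

def render_mpi_novel_view(planes, depths, K, R, t):
    """Render a novel view from a multiplane image (MPI).

    planes: list of (H, W, 4) float32 RGBA layers (alpha in [0, 1]),
            fronto-parallel in the source camera, ordered back to front.
    depths: source-camera depth of each plane (descending, far to near).
    K:      (3, 3) intrinsics, assumed shared by source and target views.
    R, t:   target-from-source rotation (3, 3) and translation (3,).
    """
    h, w = planes[0].shape[:2]
    n = np.array([0.0, 0.0, 1.0])        # plane normal in the source frame
    K_inv = np.linalg.inv(K)
    out = np.zeros((h, w, 3), np.float32)
    for rgba, d in zip(planes, depths):  # back (far) to front (near)
        # Plane-induced homography for the plane n.X = d: x_t ~ H x_s,
        # with X_t = R X_s + t mapping source to target coordinates.
        H = K @ (R + np.outer(t, n) / d) @ K_inv
        warped = cv2.warpPerspective(rgba, H, (w, h))
        rgb, a = warped[..., :3], warped[..., 3:4]
        out = rgb * a + out * (1.0 - a)  # standard "over" compositing
    return out
```

In the paper's two-step scheme, the synthesized color and depth of the visible surface constrain how the RGBA planes are predicted; the rendering above is the standard MPI machinery that then produces geometrically consistent novel views.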
Related papers
- GSNeRF: Generalizable Semantic Neural Radiance Fields with Enhanced 3D Scene Understanding [30.951440204237166]
We introduce a Generalizable Semantic Neural Radiance Field (GSNeRF), which incorporates image semantics into the synthesis process.
Our GSNeRF is composed of two stages: Semantic Geo-Reasoning and Depth-Guided Visual Rendering.
arXiv Detail & Related papers (2024-03-06T10:55:50Z)
- ReShader: View-Dependent Highlights for Single Image View-Synthesis [5.736642774848791]
We propose to split the view synthesis process into two independent tasks of pixel reshading and relocation.
During the reshading process, we take the single image as the input and adjust its shading based on the novel camera.
This reshaded image is then used as the input to an existing view synthesis method to relocate the pixels and produce the final novel view image.
arXiv Detail & Related papers (2023-09-19T15:23:52Z)
- SAMPLING: Scene-adaptive Hierarchical Multiplane Images Representation for Novel View Synthesis from a Single Image [60.52991173059486]
We introduce SAMPLING, a Scene-adaptive Hierarchical Multiplane Images Representation for Novel View Synthesis from a Single Image.
Our method demonstrates considerable performance gains in large-scale unbounded outdoor scenes using a single image on the KITTI dataset.
arXiv Detail & Related papers (2023-09-12T15:33:09Z)
- Survey on Controllable Image Synthesis with Deep Learning [15.29961293132048]
We present a survey of some recent works on 3D controllable image synthesis using deep learning.
We first introduce the datasets and evaluation indicators for 3D controllable image synthesis.
Photometrically controllable image synthesis approaches are also reviewed for 3D relighting research.
arXiv Detail & Related papers (2023-07-18T07:02:51Z)
- HORIZON: High-Resolution Semantically Controlled Panorama Synthesis [105.55531244750019]
Panorama synthesis endeavors to craft captivating 360-degree visual landscapes, immersing users in the heart of virtual worlds.
Recent breakthroughs in visual synthesis have unlocked the potential for semantic control in 2D flat images, but a direct application of these methods to panorama synthesis yields distorted content.
We unveil an innovative framework for generating high-resolution panoramas, adeptly addressing the issues of spherical distortion and edge discontinuity through sophisticated spherical modeling.
arXiv Detail & Related papers (2022-10-10T09:43:26Z)
- Realistic Image Synthesis with Configurable 3D Scene Layouts [59.872657806747576]
We propose a novel approach to realistic-looking image synthesis based on a 3D scene layout.
Our approach takes a 3D scene with semantic class labels as input and trains a 3D scene painting network.
With the trained painting network, realistic-looking images for the input 3D scene can be rendered and manipulated.
arXiv Detail & Related papers (2021-08-23T09:44:56Z)
- Learned Spatial Representations for Few-shot Talking-Head Synthesis [68.3787368024951]
We propose a novel approach for few-shot talking-head synthesis.
We show that this disentangled representation leads to a significant improvement over previous methods.
arXiv Detail & Related papers (2021-04-29T17:59:42Z)
- Generative View Synthesis: From Single-view Semantics to Novel-view Images [38.7873192939574]
Generative View Synthesis (GVS) can synthesize multiple photorealistic views of a scene given a single semantic map.
We first lift the input 2D semantic map onto a 3D layered representation of the scene in feature space.
We then project the layered features onto the target views to generate the final novel-view images.
arXiv Detail & Related papers (2020-08-20T17:48:16Z)
- 3D Photography using Context-aware Layered Depth Inpainting [50.66235795163143]
We propose a method for converting a single RGB-D input image into a 3D photo.
A learning-based inpainting model synthesizes new local color-and-depth content into the occluded region.
The resulting 3D photos can be efficiently rendered with motion parallax.
arXiv Detail & Related papers (2020-04-09T17:59:06Z)
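The layered-depth-inpainting entry above hinges on identifying occluded regions to fill. As a rough illustration only (not that paper's actual algorithm; the function name and threshold are hypothetical), disoccluded content appears near depth discontinuities:

```python
import numpy as np

def disocclusion_candidates(depth, rel_threshold=0.05):
    """Flag pixels bordering depth discontinuities: the regions that
    become visible, and so need inpainted color and depth, when the
    camera moves. `rel_threshold` is an illustrative relative-jump cutoff.
    """
    dy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    dx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    return np.maximum(dx, dy) > rel_threshold * depth
```

In the actual pipeline a learned inpainting model fills such regions with new local color-and-depth content; the mask above merely sketches where that content is required.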