Inter-View Depth Consistency Testing in Depth Difference Subspace
- URL: http://arxiv.org/abs/2301.11752v1
- Date: Fri, 27 Jan 2023 18:43:38 GMT
- Title: Inter-View Depth Consistency Testing in Depth Difference Subspace
- Authors: Pravin Kumar Rana and Markus Flierl
- Abstract summary: Multiview depth imagery will play a critical role in free-viewpoint television.
This paper proposes a method for depth consistency testing in depth difference subspace.
We also propose a view synthesis algorithm that uses the obtained consistency information to improve the visual quality of virtual views.
- Score: 6.205922305859478
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multiview depth imagery will play a critical role in free-viewpoint
television. This technology requires high quality virtual view synthesis to
enable viewers to move freely in a dynamic real world scene. Depth imagery at
different viewpoints is used to synthesize an arbitrary number of novel views.
Usually, depth images at multiple viewpoints are estimated individually by
stereo-matching algorithms and hence show a lack of inter-view consistency. This
inconsistency degrades the quality of view synthesis. This paper
proposes a method for depth consistency testing in depth difference subspace to
enhance the depth representation of a scene across multiple viewpoints.
Furthermore, we propose a view synthesis algorithm that uses the obtained
consistency information to improve the visual quality of virtual views at
arbitrary viewpoints. Our method helps us to find a linear subspace for our
depth difference measurements in which we can test the inter-view consistency
efficiently. With this, our approach is able to enhance the depth information
for real world scenes. In combination with our consistency-adaptive view
synthesis, we improve the visual experience of the free-viewpoint user. The
experiments show that our approach enhances the objective quality of virtual
views by up to 1.4 dB. The advantage for the subjective quality is also
demonstrated.
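The abstract does not spell out how the depth difference subspace is built. Purely as a rough illustration, the sketch below assumes the other views' depth maps have already been warped into the reference view, takes the subspace to be spanned by the leading principal components of the stacked inter-view depth-difference vectors, and declares a pixel consistent when its residual outside that subspace is small; the function name and thresholds are invented for the example and are not the paper's method.

```python
# Toy sketch of inter-view depth consistency testing in a linear
# "depth difference subspace". Assumptions (not from the paper): depth maps
# from the other viewpoints are already warped into the reference view, the
# subspace comes from a PCA of the stacked difference vectors, and a simple
# residual threshold decides consistency.
import numpy as np

def consistency_mask(depth_ref, warped_depths, k=2, tau=0.02):
    """depth_ref: (H, W) reference depth map.
    warped_depths: list of (H, W) depth maps warped into the reference view.
    Returns a boolean (H, W) mask marking inter-view consistent pixels."""
    H, W = depth_ref.shape
    # One depth-difference vector per pixel, one entry per other view.
    diffs = np.stack([d - depth_ref for d in warped_depths], axis=-1)  # (H, W, V-1)
    X = diffs.reshape(-1, diffs.shape[-1])                             # (H*W, V-1)
    Xc = X - X.mean(axis=0, keepdims=True)
    # Leading right-singular vectors span the depth difference subspace.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    B = Vt[:k].T                                                       # (V-1, k) basis
    # Residual energy of each difference vector outside the subspace.
    residual = np.linalg.norm(Xc - Xc @ B @ B.T, axis=1).reshape(H, W)
    return residual < tau * np.median(depth_ref)
```

A mask like this could then drive a consistency-adaptive blend at synthesis time, for example by down-weighting depth values flagged as inconsistent when warping texture to the virtual viewpoint.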
Related papers
- Real-Time Position-Aware View Synthesis from Single-View Input [3.2873782624127834]
We present a lightweight, position-aware network designed for real-time view synthesis from a single input image and a target pose.
This work marks a step toward enabling real-time view synthesis from a single image for live and interactive applications.
arXiv Detail & Related papers (2024-12-18T16:20:21Z)
- CMC: Few-shot Novel View Synthesis via Cross-view Multiplane Consistency [18.101763989542828]
We propose a simple yet effective method that explicitly builds depth-aware consistency across input views.
Our key insight is that by forcing the same spatial points to be sampled repeatedly in different input views, we are able to strengthen the interactions between views.
Although simple, extensive experiments demonstrate that our proposed method can achieve better synthesis quality over state-of-the-art methods.
arXiv Detail & Related papers (2024-02-26T09:04:04Z)
- Calibrating Panoramic Depth Estimation for Practical Localization and Mapping [20.621442016969976]
The absolute depth values of surrounding environments provide crucial cues for various assistive technologies, such as localization, navigation, and 3D structure estimation.
We propose that accurate depth estimated from panoramic images can serve as a powerful and light-weight input for a wide range of downstream tasks requiring 3D information.
arXiv Detail & Related papers (2023-08-27T04:50:05Z)
- DINER: Depth-aware Image-based NEural Radiance fields [45.63488428831042]
We present Depth-aware Image-based NEural Radiance fields (DINER).
Given a sparse set of RGB input views, we predict depth and feature maps to guide the reconstruction of a scene representation.
We propose novel techniques to incorporate depth information into feature fusion and efficient scene sampling.
arXiv Detail & Related papers (2022-11-29T23:22:44Z)
- HORIZON: High-Resolution Semantically Controlled Panorama Synthesis [105.55531244750019]
Panorama synthesis endeavors to craft captivating 360-degree visual landscapes, immersing users in the heart of virtual worlds.
Recent breakthroughs in visual synthesis have unlocked the potential for semantic control in 2D flat images, but a direct application of these methods to panorama synthesis yields distorted content.
We unveil an innovative framework for generating high-resolution panoramas, adeptly addressing the issues of spherical distortion and edge discontinuity through sophisticated spherical modeling.
arXiv Detail & Related papers (2022-10-10T09:43:26Z)
- SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation [101.55622133406446]
We propose a SurroundDepth method to incorporate the information from multiple surrounding views to predict depth maps across cameras.
Specifically, we employ a joint network to process all the surrounding views and propose a cross-view transformer to effectively fuse the information from multiple views.
In experiments, our method achieves the state-of-the-art performance on the challenging multi-camera depth estimation datasets.
arXiv Detail & Related papers (2022-04-07T17:58:47Z)
- Self-Supervised Visibility Learning for Novel View Synthesis [79.53158728483375]
Conventional rendering methods estimate scene geometry and synthesize novel views in two separate steps.
We propose an end-to-end NVS framework to eliminate the error propagation issue.
Our network is trained in an end-to-end self-supervised fashion, thus significantly alleviating error accumulation in view synthesis.
arXiv Detail & Related papers (2021-03-29T08:11:25Z)
- Semantic View Synthesis [56.47999473206778]
We tackle a new problem of semantic view synthesis -- generating free-viewpoint rendering of a synthesized scene using a semantic label map as input.
First, we focus on synthesizing the color and depth of the visible surface of the 3D scene.
We then use the synthesized color and depth to impose explicit constraints on the multiple-plane image (MPI) representation prediction process.
arXiv Detail & Related papers (2020-08-24T17:59:46Z)
- Novel View Synthesis of Dynamic Scenes with Globally Coherent Depths from a Monocular Camera [93.04135520894631]
This paper presents a new method to synthesize an image from arbitrary views and times given a collection of images of a dynamic scene.
A key challenge for the novel view synthesis arises from dynamic scene reconstruction where epipolar geometry does not apply to the local motion of dynamic contents.
To address this challenge, we propose to combine the depth from single view (DSV) and the depth from multi-view stereo (DMV), where DSV is complete, i.e., a depth is assigned to every pixel, yet view-variant in its scale, while DMV is view-invariant yet incomplete.
arXiv Detail & Related papers (2020-04-02T22:45:53Z)
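One very simple way to reconcile the two depth cues described in the entry above, assuming the only discrepancy in DSV is a single per-image scale, is sketched below. The paper's actual fusion is more elaborate; the function name and the NaN hole convention are invented for this illustration.

```python
import numpy as np

def fuse_dsv_dmv(dsv, dmv):
    """Toy fusion of single-view depth (dsv: dense but scale-ambiguous) with
    multi-view stereo depth (dmv: view-invariant scale, holes marked as NaN)."""
    valid = ~np.isnan(dmv)
    # Least-squares scale aligning DSV with DMV on the pixels where DMV exists.
    s = np.sum(dmv[valid] * dsv[valid]) / np.sum(dsv[valid] ** 2)
    fused = dmv.copy()
    fused[~valid] = s * dsv[~valid]   # fill the MVS holes with rescaled DSV
    return fused
```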
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [78.5281048849446]
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes.
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses.
arXiv Detail & Related papers (2020-03-19T17:57:23Z)
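The differentiable volume rendering that NeRF relies on reduces, per ray, to a short alpha-compositing sum; a minimal numpy version of that quadrature (an illustration, not the paper's code) looks like this:

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """NeRF-style volume rendering of one ray.
    sigmas: (N,) densities, colors: (N, 3) RGB samples, deltas: (N,) spacings.
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i, where T_i is the
    accumulated transmittance of the samples in front of sample i."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    return weights @ colors   # (3,) rendered colour; differentiable end to end
```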
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information shown and is not responsible for any consequences arising from its use.