PanoDR: Spherical Panorama Diminished Reality for Indoor Scenes
- URL: http://arxiv.org/abs/2106.00446v1
- Date: Tue, 1 Jun 2021 12:56:53 GMT
- Title: PanoDR: Spherical Panorama Diminished Reality for Indoor Scenes
- Authors: V. Gkitsas, V. Sterzentsenko, N. Zioulis, G. Albanis, D. Zarpalas
- Abstract summary: Diminished Reality (DR) fulfills the requirement of applications such as interior re-design: removing existing objects from the scene.
To preserve the 'reality' in indoor (re-)planning applications, maintaining the scene's structure is crucial.
We propose a model that initially predicts the structure of an indoor scene and then uses it to guide the reconstruction of an empty -- background only -- representation of the same scene.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The rising availability of commercial $360^\circ$ cameras that democratize
indoor scanning has increased the interest in novel applications, such as
interior space re-design. Diminished Reality (DR) fulfills the requirement of
such applications to remove existing objects from the scene, essentially
translating this into a counterfactual inpainting task. While recent advances in
data-driven inpainting have shown significant progress in generating realistic
samples, they are not constrained to produce results with reality-mapped
structures. To preserve the `reality' in indoor (re-)planning applications,
preserving the scene's structure is crucial. To ensure structure-aware
counterfactual inpainting, we propose a model that initially predicts the
structure of an indoor scene and then uses it to guide the reconstruction of an
empty -- background only -- representation of the same scene. We train and
compare against other state-of-the-art methods on a version of the Structured3D
dataset modified for DR, showing superior performance in both quantitative
metrics and qualitative results; more interestingly, our approach exhibits a much
faster convergence rate. Code and models are available at
https://vcl3d.github.io/PanoDR/ .
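The two-stage idea in the abstract can be pictured with a minimal PyTorch sketch, assuming a dense layout-segmentation network and a conditional inpainting generator. The module names, channel sizes, and concatenation-based conditioning below are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of structure-guided counterfactual inpainting: a structure
# network predicts a dense layout map (e.g. ceiling/floor/wall classes) for the
# panorama, and an inpainting generator is conditioned on that map to fill the
# masked region with background only. All names and sizes are hypothetical.
import torch
import torch.nn as nn

class StructureNet(nn.Module):
    """Predicts per-pixel layout logits (ceiling / floor / wall)."""
    def __init__(self, classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, classes, 1),
        )

    def forward(self, pano):                 # pano: (B, 3, H, W)
        return self.net(pano)                # logits: (B, classes, H, W)

class StructureGuidedInpainter(nn.Module):
    """Fills the masked panorama, conditioned on the predicted layout."""
    def __init__(self, classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1 + classes, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 1), nn.Tanh(),
        )

    def forward(self, pano, mask, layout_logits):
        layout = layout_logits.softmax(dim=1)
        masked = pano * (1.0 - mask)         # remove the object region
        x = torch.cat([masked, mask, layout], dim=1)
        out = self.net(x)
        # Composite: keep known pixels, use the prediction inside the mask.
        return masked + mask * out

structure_net = StructureNet()
inpainter = StructureGuidedInpainter()
pano = torch.rand(1, 3, 256, 512)            # equirectangular panorama
mask = torch.zeros(1, 1, 256, 512)
mask[..., 100:160, 200:300] = 1.0            # region of the object to diminish
empty_scene = inpainter(pano, mask, structure_net(pano))
print(empty_scene.shape)                     # torch.Size([1, 3, 256, 512])
```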
Related papers
- Forest2Seq: Revitalizing Order Prior for Sequential Indoor Scene Synthesis [109.50718968215658]
We propose Forest2Seq, a framework that formulates indoor scene synthesis as an order-aware sequential learning problem.
By employing a clustering-based algorithm and a breadth-first traversal, Forest2Seq derives meaningful orderings and utilizes a transformer to generate realistic 3D scenes autoregressively.
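As a rough illustration of the ordering idea only, the plain-Python sketch below clusters objects, treats each cluster as a small tree, and breadth-first traverses the resulting forest to obtain a token sequence; the clustering key, tree construction, and object fields are hypothetical, and the transformer itself is not shown.

```python
# Hedged sketch: cluster objects, arrange each cluster as a small tree (a
# "forest"), then breadth-first traverse it to obtain the sequence a transformer
# would be trained on autoregressively. Not Forest2Seq's actual pipeline.
from collections import deque

objects = [
    {"id": 0, "category": "bed",        "pos": (1.0, 0.5)},
    {"id": 1, "category": "nightstand", "pos": (1.4, 0.4)},
    {"id": 2, "category": "desk",       "pos": (4.0, 2.0)},
    {"id": 3, "category": "chair",      "pos": (4.2, 2.3)},
]

def cluster(objs, cell=2.0):
    """Toy clustering: bucket objects by a coarse grid cell of their position."""
    groups = {}
    for o in objs:
        key = (int(o["pos"][0] // cell), int(o["pos"][1] // cell))
        groups.setdefault(key, []).append(o)
    return list(groups.values())

def bfs_order(forest):
    """Breadth-first traversal: all cluster roots first, then their children."""
    roots = [(group[0], group[1:]) for group in forest]
    order, queue = [], deque(roots)
    while queue:
        node, children = queue.popleft()
        order.append(node)
        queue.extend((child, []) for child in children)
    return order

sequence = [o["category"] for o in bfs_order(cluster(objects))]
print(sequence)   # ['bed', 'desk', 'nightstand', 'chair'] -> tokens for the transformer
```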
arXiv Detail & Related papers (2024-07-07T14:32:53Z)
- Mixed Diffusion for 3D Indoor Scene Synthesis [55.94569112629208]
We present MiDiffusion, a novel mixed discrete-continuous diffusion model architecture.
We represent a scene layout by a 2D floor plan and a set of objects, each defined by its category, location, size, and orientation.
Our experimental results demonstrate that MiDiffusion substantially outperforms state-of-the-art autoregressive and diffusion models in floor-conditioned 3D scene synthesis.
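The layout representation described above can be sketched as a small data structure that separates the discrete category from the continuous location, size, and orientation, which is the split a mixed discrete-continuous diffusion model would operate on; the field names and floor-plan encoding are assumptions for illustration.

```python
# Minimal sketch of the scene representation: a 2D floor plan plus a set of
# objects, each with a discrete category and continuous location, size and
# orientation. Field names and encodings are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class SceneObject:
    category: int                      # discrete attribute (discrete diffusion process)
    location: Tuple[float, float]      # continuous attributes (Gaussian diffusion)
    size: Tuple[float, float, float]
    orientation: float                 # yaw angle in radians

@dataclass
class SceneLayout:
    floor_plan: np.ndarray             # (H, W) binary mask of the room footprint
    objects: List[SceneObject] = field(default_factory=list)

layout = SceneLayout(
    floor_plan=np.ones((64, 64), dtype=np.uint8),
    objects=[
        SceneObject(category=3, location=(1.2, 0.8), size=(2.0, 1.6, 0.5), orientation=0.0),
        SceneObject(category=7, location=(3.5, 2.1), size=(0.6, 0.6, 1.2), orientation=1.57),
    ],
)
print(len(layout.objects), layout.floor_plan.shape)
```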
arXiv Detail & Related papers (2024-05-31T17:54:52Z)
- Windowed-FourierMixer: Enhancing Clutter-Free Room Modeling with Fourier Transform [3.864321514889099]
Inpainting indoor environments from a single image plays a crucial role in modeling the internal structure of interior spaces.
We propose an innovative approach based on a U-Former architecture and a new Windowed-FourierMixer block.
This new architecture proves advantageous for tasks involving indoor scenes where symmetry is prevalent.
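As a hedged illustration of Fourier-based feature mixing, the sketch below filters features in the frequency domain with a learnable spectral weight and adds the result back residually; the exact windowing scheme and U-Former integration of the actual Windowed-FourierMixer block are not reproduced here.

```python
# Hedged sketch of an FFT-based mixing block: features are transformed with a
# 2D FFT, filtered by a learnable complex weight, and transformed back.
# Illustrates global token mixing via the Fourier transform only.
import torch
import torch.nn as nn

class FourierMixer(nn.Module):
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # Learnable spectral filter; rfft2 keeps width // 2 + 1 frequency bins.
        self.weight = nn.Parameter(
            torch.randn(channels, height, width // 2 + 1, dtype=torch.cfloat) * 0.02
        )
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                                   # x: (B, C, H, W)
        spec = torch.fft.rfft2(x, norm="ortho")             # (B, C, H, W//2+1), complex
        spec = spec * self.weight                           # element-wise spectral filtering
        mixed = torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")
        out = x + mixed                                     # residual connection
        return self.norm(out.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)

block = FourierMixer(channels=32, height=64, width=64)
print(block(torch.rand(2, 32, 64, 64)).shape)               # torch.Size([2, 32, 64, 64])
```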
arXiv Detail & Related papers (2024-02-28T12:27:28Z)
- DeepDR: Deep Structure-Aware RGB-D Inpainting for Diminished Reality [12.84124441493612]
Diminished reality (DR) refers to the removal of real objects from the environment by virtually replacing them with their background.
Recent deep learning-based inpainting is promising, but the DR use case is complicated by the need to generate coherent structure and 3D geometry.
In this paper, we propose the first RGB-D inpainting framework fulfilling all requirements of DR: plausible image and geometry inpainting with coherent structure, running at real-time frame rates, with minimal temporal artifacts.
arXiv Detail & Related papers (2023-12-01T12:12:58Z)
- Visual Localization using Imperfect 3D Models from the Internet [54.731309449883284]
This paper studies how imperfections in 3D models affect localization accuracy.
We find that 3D models from the Internet show promise as an easy-to-obtain scene representation.
arXiv Detail & Related papers (2023-04-12T16:15:05Z)
- Floorplan-Aware Camera Poses Refinement [2.294014185517203]
We argue that a floorplan is a useful source of spatial information, which can guide a 3D model optimization.
We propose a novel optimization algorithm expanding conventional BA that leverages the prior knowledge about the scene structure in the form of a floorplan.
Our experiments on the Redwood dataset and our self-captured data demonstrate that utilizing the floorplan improves the accuracy of 3D reconstructions.
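A hedged sketch of how such a prior could enter a bundle-adjustment style objective: besides reprojection error, reconstructed wall points are penalised by their in-plane distance to the nearest floorplan wall segment. The segment representation and weighting below are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a floorplan prior term for BA-style optimisation: one extra
# residual per wall point, given by its 2D distance to the closest wall segment.
import numpy as np

def point_to_segment(p, a, b):
    """Distance from 2D point p to segment a-b."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def floorplan_residuals(wall_points_xy, wall_segments, weight=1.0):
    """One residual per wall point: weighted distance to the nearest wall segment."""
    res = []
    for p in wall_points_xy:
        d = min(point_to_segment(p, a, b) for a, b in wall_segments)
        res.append(weight * d)
    return np.asarray(res)

# Toy floorplan: a 4 m x 3 m rectangular room.
segments = [
    (np.array([0.0, 0.0]), np.array([4.0, 0.0])),
    (np.array([4.0, 0.0]), np.array([4.0, 3.0])),
    (np.array([4.0, 3.0]), np.array([0.0, 3.0])),
    (np.array([0.0, 3.0]), np.array([0.0, 0.0])),
]
points = np.array([[0.05, 1.0], [3.9, 2.9], [2.0, 1.5]])   # last point is off-wall
print(floorplan_residuals(points, segments))
# These residuals would be stacked with reprojection residuals inside the BA solver.
```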
arXiv Detail & Related papers (2022-10-10T11:24:10Z)
- Towards High-Fidelity Single-view Holistic Reconstruction of Indoor Scenes [50.317223783035075]
We present a new framework to reconstruct holistic 3D indoor scenes from single-view images.
We propose an instance-aligned implicit function (InstPIFu) for detailed object reconstruction.
Our code and model will be made publicly available.
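A pixel-aligned implicit function of this kind can be sketched as follows: 3D query points are projected into the image, per-pixel features are sampled at those locations, and a small MLP predicts occupancy. The camera model, feature extractor, and layer sizes below are assumptions, not InstPIFu's architecture.

```python
# Hedged sketch of a pixel-aligned implicit function: sample image features at
# projected query locations and predict occupancy with an MLP.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAlignedImplicit(nn.Module):
    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.encoder = nn.Conv2d(3, feat_dim, 3, padding=1)       # stand-in feature extractor
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 1),
        )

    def forward(self, image, points):
        """image: (B, 3, H, W); points: (B, N, 3) with x, y in [-1, 1] and z = depth."""
        feats = self.encoder(image)                                 # (B, C, H, W)
        grid = points[..., :2].unsqueeze(1)                         # (B, 1, N, 2) sample locations
        sampled = F.grid_sample(feats, grid, align_corners=True)    # (B, C, 1, N)
        sampled = sampled.squeeze(2).permute(0, 2, 1)               # (B, N, C)
        z = points[..., 2:3]                                        # (B, N, 1)
        return self.mlp(torch.cat([sampled, z], dim=-1))            # occupancy logits (B, N, 1)

model = PixelAlignedImplicit()
occ = model(torch.rand(1, 3, 128, 128), torch.rand(1, 1000, 3) * 2 - 1)
print(occ.shape)                                                    # torch.Size([1, 1000, 1])
```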
arXiv Detail & Related papers (2022-07-18T14:54:57Z)
- NeuRIS: Neural Reconstruction of Indoor Scenes Using Normal Priors [84.66706400428303]
We propose a new method, named NeuRIS, for high quality reconstruction of indoor scenes.
NeuRIS integrates estimated normal of indoor scenes as a prior in a neural rendering framework.
Experiments show that NeuRIS significantly outperforms the state-of-the-art methods in terms of reconstruction quality.
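A normal prior of this kind typically enters the optimisation as an extra loss term. The sketch below combines an L1 and an angular penalty between rendered and estimated normals; the exact combination and weighting are assumptions rather than NeuRIS's formulation.

```python
# Hedged sketch of a normal-prior term: normals rendered by the reconstruction
# are pushed towards normals estimated by an off-the-shelf monocular predictor.
import torch
import torch.nn.functional as F

def normal_prior_loss(rendered_normals, prior_normals, weight=1.0):
    """Both tensors: (N, 3) normals for the sampled rays."""
    rendered = F.normalize(rendered_normals, dim=-1)
    prior = F.normalize(prior_normals, dim=-1)
    l1 = (rendered - prior).abs().sum(dim=-1).mean()
    angular = (1.0 - (rendered * prior).sum(dim=-1)).mean()   # 1 - cos(theta)
    return weight * (l1 + angular)

loss = normal_prior_loss(torch.randn(1024, 3), torch.randn(1024, 3))
print(float(loss))   # added to the usual color rendering loss during optimisation
```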
arXiv Detail & Related papers (2022-06-27T19:22:03Z)
- NeuralBlox: Real-Time Neural Representation Fusion for Robust Volumetric Mapping [29.3378360000956]
We present a novel 3D mapping method leveraging the recent progress in neural implicit representation for 3D reconstruction.
We propose a fusion strategy and training pipeline to incrementally build and update neural implicit representations.
We show that incrementally built occupancy maps can be obtained in real-time even on a CPU.
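One way to picture incremental fusion of implicit representations is a per-block latent code updated online; the running-average rule and stand-in decoder below are assumptions for illustration, not NeuralBlox's trained fusion network.

```python
# Hedged sketch of incremental latent fusion: each spatial block keeps a latent
# code and an observation count, and new per-frame latents are folded in with a
# running average so the map can be updated online.
import numpy as np

class LatentGrid:
    def __init__(self, latent_dim: int = 16):
        self.latent_dim = latent_dim
        self.codes = {}     # block index (i, j, k) -> (latent, observation count)

    def integrate(self, block_idx, new_latent):
        latent, count = self.codes.get(block_idx, (np.zeros(self.latent_dim), 0))
        fused = (latent * count + new_latent) / (count + 1)   # running average
        self.codes[block_idx] = (fused, count + 1)

    def decode_occupancy(self, block_idx):
        """Stand-in decoder: a real system would run a small MLP on the latent."""
        latent, _ = self.codes.get(block_idx, (np.zeros(self.latent_dim), 0))
        return 1.0 / (1.0 + np.exp(-latent.mean()))

grid = LatentGrid()
for _ in range(5):                                            # five incoming frames
    grid.integrate((0, 0, 0), np.random.randn(16))
print(grid.decode_occupancy((0, 0, 0)))
```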
arXiv Detail & Related papers (2021-10-18T15:45:05Z)
- SCFusion: Real-time Incremental Scene Reconstruction with Semantic Completion [86.77318031029404]
We propose a framework that performs scene reconstruction and semantic scene completion jointly in an incremental and real-time manner.
Our framework relies on a novel neural architecture designed to process occupancy maps and leverages voxel states to accurately and efficiently fuse semantic completion with the 3D global model.
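Incremental fusion of occupancy and semantics into a global voxel map can be sketched with a log-odds occupancy update and accumulated per-class scores; the grid size, class count, and update rules below are illustrative assumptions, not SCFusion's learned fusion.

```python
# Hedged sketch: occupancy accumulated in log-odds form, semantics accumulated
# as per-class scores, so each new (possibly completed) local prediction
# refines the global voxel model.
import numpy as np

GRID, CLASSES = (32, 32, 32), 12
occupancy_logodds = np.zeros(GRID, dtype=np.float32)
semantic_scores = np.zeros(GRID + (CLASSES,), dtype=np.float32)

def integrate(voxel_idx, p_occ, class_probs):
    """Fuse one observed/completed voxel into the global map."""
    occupancy_logodds[voxel_idx] += np.log(p_occ / (1.0 - p_occ))   # Bayesian log-odds update
    semantic_scores[voxel_idx] += class_probs                       # accumulate class evidence

# Example: a completed prediction says this voxel is occupied and likely class 4.
probs = np.full(CLASSES, 0.02)
probs[4] = 1.0 - 0.02 * (CLASSES - 1)
integrate((10, 12, 3), p_occ=0.9, class_probs=probs)

occupied = occupancy_logodds[10, 12, 3] > 0.0
label = int(np.argmax(semantic_scores[10, 12, 3]))
print(occupied, label)                                              # True 4
```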
arXiv Detail & Related papers (2020-10-26T15:31:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.