VET: Visual Error Tomography for Point Cloud Completion and High-Quality
Neural Rendering
- URL: http://arxiv.org/abs/2311.04634v1
- Date: Wed, 8 Nov 2023 12:23:57 GMT
- Title: VET: Visual Error Tomography for Point Cloud Completion and High-Quality
Neural Rendering
- Authors: Linus Franke, Darius Rückert, Laura Fink, Matthias Innmann, Marc
Stamminger
- Abstract summary: We present a novel neural-rendering-based approach to detect and fix deficiencies in novel view synthesis.
We show that our approach can improve the quality of a point cloud obtained by structure from motion.
In contrast to point growing techniques, the approach can also fix large-scale holes and missing thin structures effectively.
- Score: 4.542331789204584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the last few years, deep neural networks have opened the door to
major advances in novel view synthesis. Many of these approaches are based on a (coarse) proxy
geometry obtained by structure from motion algorithms. Small deficiencies in
this proxy can be fixed by neural rendering, but larger holes or missing parts,
as they commonly appear for thin structures or for glossy regions, still lead
to distracting artifacts and temporal instability. In this paper, we present a
novel neural-rendering-based approach to detect and fix such deficiencies. As a
proxy, we use a point cloud, which allows us to easily remove outlier geometry
and to fill in missing geometry without complicated topological operations.
The keys to our approach are (i) a differentiable, blending point-based renderer
that can blend out redundant points, and (ii) the concept of Visual Error
Tomography (VET), which allows us to lift 2D error maps to identify 3D regions
lacking geometry and to spawn novel points accordingly. Furthermore,
(iii) by adding points as nested environment maps, our approach allows us to
generate high-quality renderings of the surroundings in the same pipeline. In
our results, we show that our approach can improve the quality of a point cloud
obtained by structure from motion and thus increase novel view synthesis
quality significantly. In contrast to point growing techniques, the approach
can also fix large-scale holes and missing thin structures effectively.
Rendering quality outperforms state-of-the-art methods and temporal stability
is significantly improved, while rendering is possible at real-time frame
rates.
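
The tomography idea at the heart of VET can be illustrated by back-projection: accumulate each view's 2D error image along its viewing rays into a candidate voxel grid, then spawn points where many views agree that geometry is missing. Below is a minimal sketch in that spirit, not the authors' implementation; the voxel grid, the error maps, and names such as `backproject_errors` are assumptions.

```python
import numpy as np

def backproject_errors(voxel_centers, error_maps, K, poses):
    """Tomography-style lifting: splat per-view 2D error images into a 3D grid.

    voxel_centers: (V, 3) world-space cell centers of a candidate grid
    error_maps:    list of (H, W) rendering-error images, one per training view
    K:             (3, 3) shared pinhole intrinsics
    poses:         list of (4, 4) world-to-camera matrices
    """
    acc = np.zeros(len(voxel_centers))
    hom = np.concatenate([voxel_centers, np.ones((len(voxel_centers), 1))], axis=1)
    for err, w2c in zip(error_maps, poses):
        cam = (w2c @ hom.T).T[:, :3]                    # camera-space positions
        pix = (K @ cam.T).T
        uv = pix[:, :2] / np.clip(pix[:, 2:3], 1e-6, None)
        u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
        h, w = err.shape
        ok = (cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        acc[ok] += err[v[ok], u[ok]]                    # accumulate along rays
    return acc

# Spawn candidate points where the lifted error is consistently high
# (grid, maps, K, cams, and threshold are assumed inputs):
# new_points = grid[backproject_errors(grid, maps, K, cams) > threshold]
```

Occlusion handling and the subsequent optimization of spawned points by the differentiable renderer are omitted in this sketch.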
Related papers
- CE-NPBG: Connectivity Enhanced Neural Point-Based Graphics for Novel View Synthesis in Autonomous Driving Scenes [5.719388462440881]
We present CE-NPBG, a new approach for novel view synthesis (NVS) in large-scale autonomous driving scenes.
Our method is a neural point-based technique that leverages two modalities: posed images (from cameras) and synchronized raw 3D point clouds (from LiDAR).
By leveraging this connectivity, our method significantly improves rendering quality and enhances run-time and scalability.
arXiv Detail & Related papers (2025-04-28T08:02:02Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
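
As a rough illustration of distance-based weighting inside attention, here is a sketch in PyTorch under the assumption of scaled dot-product attention with an additive distance penalty; the actual DWT formulation may differ, and the names are illustrative.

```python
import torch
import torch.nn.functional as F

def distance_weighted_attention(q, k, v, coords, alpha=1.0):
    """Self-attention whose logits are penalized by spatial distance.

    q, k, v: (N, D) token features; coords: (N, 2) token positions
    alpha:   strength of the distance penalty (a free parameter)
    """
    logits = q @ k.T / q.shape[-1] ** 0.5               # scaled dot-product
    dist = torch.cdist(coords, coords)                  # (N, N) pairwise distances
    weights = F.softmax(logits - alpha * dist, dim=-1)  # nearby tokens dominate
    return weights @ v
```

Penalizing the logits by pairwise distance biases each token toward nearby image regions while retaining a global receptive field.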
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- NeuManifold: Neural Watertight Manifold Reconstruction with Efficient and High-Quality Rendering Support [45.68296352822415]
We present a method for generating high-quality watertight manifold meshes from multi-view input images.
Our method combines the benefits of both worlds; we take the geometry obtained from neural fields, and further optimize the geometry as well as a compact neural texture representation.
arXiv Detail & Related papers (2023-05-26T17:59:21Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependent appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
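
To make "regular 2D representation" concrete, here is a crude, non-learned stand-in that projects a point cloud onto its two principal axes and rasterizes XYZ into a grid; Flattening-Net learns such a mapping end-to-end, so treat everything below as an illustrative approximation.

```python
import numpy as np

def flatten_to_grid(points, res=32):
    """Crude 'geometry image': rasterize a point cloud onto a regular 2D grid."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    uv = centered @ vt[:2].T                        # 2D PCA parameterization
    uv = (uv - uv.min(axis=0)) / (np.ptp(uv, axis=0) + 1e-9)
    ij = np.minimum((uv * res).astype(int), res - 1)
    grid = np.zeros((res, res, 3))
    grid[ij[:, 1], ij[:, 0]] = points               # last point per cell wins
    return grid                                     # (res, res, 3) XYZ image
```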
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion is the task of recovering complete 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner, eliminating the need for kNN neighborhoods.
The proposed framework, PointAttN, is simple, neat, and effective, and can precisely capture the structural information of 3D shapes.
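
A minimal sketch of per-point global attention without kNN grouping, in PyTorch; this mirrors the summary's description rather than PointAttN's exact blocks, and all names are illustrative.

```python
import torch
import torch.nn as nn

class PerPointAttention(nn.Module):
    """Global self-attention over all points; no kNN neighborhoods needed."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.proj = nn.Linear(3, dim)    # lift xyz into feature space
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, xyz):              # xyz: (B, N, 3) partial point cloud
        f = self.proj(xyz)
        out, _ = self.attn(f, f, f)      # every point attends to every point
        return out                       # (B, N, dim) per-point features

# feats = PerPointAttention()(torch.randn(2, 1024, 3))
```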
arXiv Detail & Related papers (2022-03-16T09:20:01Z)
- Improving neural implicit surfaces geometry with patch warping [12.106051690920266]
We argue that the limited accuracy of neural implicit surface reconstruction comes from the difficulty of learning and rendering high-frequency textures with neural networks.
We propose to add to the standard neural rendering optimization a direct photo-consistency term across the different views.
We evaluate our approach, dubbed NeuralWarp, on the standard DTU and EPFL benchmarks and show it outperforms state-of-the-art unsupervised implicit surface reconstruction methods by over 20% on both datasets.
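
A minimal sketch of a cross-view photo-consistency term: project current surface-point estimates into a reference and a source view and compare the sampled colors. NeuralWarp warps whole patches and handles visibility, which this nearest-pixel version omits; all names are illustrative, and both images are assumed to share one intrinsic matrix and size.

```python
import numpy as np

def photo_consistency(ref_rgb, src_rgb, surf_pts, K, ref_w2c, src_w2c):
    """Mean color difference of surface points seen from two views."""
    def project(pts, w2c):
        cam = (w2c[:3, :3] @ pts.T + w2c[:3, 3:4]).T    # world -> camera
        pix = (K @ cam.T).T
        return pix[:, :2] / np.clip(pix[:, 2:3], 1e-6, None), cam[:, 2]

    uv_r, z_r = project(surf_pts, ref_w2c)
    uv_s, z_s = project(surf_pts, src_w2c)
    h, w, _ = ref_rgb.shape
    ok = (z_r > 0) & (z_s > 0)
    for uv in (uv_r, uv_s):              # keep points visible in both frames
        ok &= (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    r = ref_rgb[uv_r[ok, 1].astype(int), uv_r[ok, 0].astype(int)]
    s = src_rgb[uv_s[ok, 1].astype(int), uv_s[ok, 0].astype(int)]
    return np.abs(r - s).mean()          # low when geometry is photo-consistent
```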
arXiv Detail & Related papers (2021-12-17T17:43:50Z)
- Learning Deformable Tetrahedral Meshes for 3D Reconstruction [78.0514377738632]
3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics.
Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations.
We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem.
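
A minimal sketch of the DefTet parameterization as described: a fixed tetrahedral grid whose vertices carry learned offsets and whose tets carry occupancy. In the paper a network predicts these quantities; the class below is only an illustrative data structure.

```python
import numpy as np

class DeformableTets:
    """Volumetric tet grid: vertex offsets deform shape, occupancy selects tets."""

    def __init__(self, verts, tets):
        self.verts = verts                    # (V, 3) regular-grid vertices
        self.tets = tets                      # (T, 4) vertex indices per tet
        self.offsets = np.zeros_like(verts)   # learned: moves vertices to the surface
        self.occupancy = np.zeros(len(tets))  # learned: in [0, 1] per tet

    def occupied_tets(self, tau=0.5):
        """Deformed tets predicted as occupied; their boundary is the surface."""
        v = self.verts + self.offsets
        return v[self.tets[self.occupancy > tau]]   # (K, 4, 3) tet corners
```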
arXiv Detail & Related papers (2020-11-03T02:57:01Z)