Z2P: Instant Rendering of Point Clouds
- URL: http://arxiv.org/abs/2105.14548v1
- Date: Sun, 30 May 2021 13:58:24 GMT
- Title: Z2P: Instant Rendering of Point Clouds
- Authors: Gal Metzer, Rana Hanocka, Raja Giryes, Niloy J. Mitra, Daniel Cohen-Or
- Abstract summary: We present a technique for rendering point clouds using a neural network.
Existing point rendering techniques either use splatting, or first reconstruct a surface mesh that can then be rendered.
- Score: 104.1186026323896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a technique for rendering point clouds using a neural network.
Existing point rendering techniques either use splatting, or first reconstruct
a surface mesh that can then be rendered. Both of these techniques require
solving for global point normal orientation, which is a challenging problem on
its own. Furthermore, splatting techniques result in holes and overlaps,
whereas mesh reconstruction is particularly challenging, especially in the
cases of thin surfaces and sheets.
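To see concretely why naive splatting leaves holes, consider a minimal z-buffer rasterizer (a hypothetical sketch, not the paper's code; names like `zbuffer_splat` are illustrative, and an orthographic camera is assumed for simplicity): each point lands on at most one pixel, so wherever sampling density falls below one point per pixel the image is left with gaps.

```python
import torch  # requires PyTorch >= 1.12 for scatter_reduce_

def zbuffer_splat(points, H=128, W=128):
    """Rasterize points orthographically, keeping the nearest depth per
    pixel. Pixels that no point lands on stay empty -- the 'holes' that
    splatting-based renderers must then fill."""
    # map x, y in [-1, 1] to integer pixel coordinates
    u = ((points[:, 0] + 1) * 0.5 * (W - 1)).round().long().clamp(0, W - 1)
    v = ((points[:, 1] + 1) * 0.5 * (H - 1)).round().long().clamp(0, H - 1)
    depth = torch.full((H, W), float('inf'))
    # keep the minimum depth (nearest point) per pixel
    depth.view(-1).scatter_reduce_(0, v * W + u, points[:, 2], reduce='amin')
    holes = torch.isinf(depth)          # pixels no point covered
    return depth, holes

pts = torch.rand(2000, 3) * 2 - 1       # a sparse random cloud
depth, holes = zbuffer_splat(pts)
print(f"{holes.float().mean():.1%} of pixels are holes")
```

With 2,000 points on a 128x128 grid most pixels stay empty, which is exactly the hole problem the abstract refers to.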
We cast the rendering problem as a conditional image-to-image translation
problem. In our formulation, Z2P, i.e., depth-augmented point features as
viewed from a target camera, are translated directly by a neural network into
rendered images, conditioned on control variables (e.g., color, light). We
avoid inevitable issues with splatting (i.e., holes and overlaps), and bypass
solving the notoriously challenging surface reconstruction problem or
estimating oriented normals. Yet, our approach produces a rendered image as
if a surface mesh had been reconstructed. We demonstrate that our framework
produces plausible images, handles noise, non-uniform sampling, and thin
surfaces/sheets effectively, and is fast.
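As a rough sketch of the translation step (hypothetical code, not the authors' architecture; the paper's network is a conditioned U-Net operating on soft per-point splats), the depth-augmented feature image can be fed to an ordinary convolutional image-to-image network, with control variables such as a light direction broadcast into extra constant input channels:

```python
import torch

class TinyTranslator(torch.nn.Module):
    """Hypothetical stand-in for Z2P's image-to-image network: maps a
    1-channel depth feature image to RGB, conditioned on a control
    vector (e.g., light direction) broadcast as constant channels."""
    def __init__(self, cond_dim=3):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(1 + cond_dim, 32, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(32, 32, 3, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(32, 3, 3, padding=1),
            torch.nn.Sigmoid(),             # RGB in [0, 1]
        )

    def forward(self, feat, cond):
        B, _, H, W = feat.shape
        # broadcast the control vector into constant feature maps
        cond_maps = cond.view(B, -1, 1, 1).expand(-1, -1, H, W)
        return self.net(torch.cat([feat, cond_maps], dim=1))

# usage: render a depth image under a given light direction
model = TinyTranslator()
depth = torch.rand(1, 1, 128, 128)           # depth-augmented features
light = torch.tensor([[0.0, 0.0, 1.0]])      # a control variable
rgb = model(depth, light)                    # 1 x 3 x 128 x 128
```

Concatenating constant conditioning channels is only one simple way to inject control; conditional normalization layers are a common alternative.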
Related papers
- PFGS: High Fidelity Point Cloud Rendering via Feature Splatting [5.866747029417274]
We propose a novel framework to render high-quality images from sparse points.
This method is the first attempt to bridge 3D Gaussian Splatting and point cloud rendering.
Experiments on different benchmarks show the superiority of our method in terms of rendering quality and the necessity of our main components.
arXiv Detail & Related papers (2024-07-04T11:42:54Z)
- Bridging 3D Gaussian and Mesh for Freeview Video Rendering [57.21847030980905]
GauMesh bridges the 3D Gaussian and Mesh for modeling and rendering the dynamic scenes.
We show that our approach adapts the appropriate type of primitives to represent the different parts of the dynamic scene.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering [6.142272540492937]
We present TRIPS (Trilinear Point Splatting), an approach that combines ideas from both Gaussian Splatting and ADOP.
Our evaluation demonstrates that TRIPS surpasses existing state-of-the-art methods in terms of rendering quality.
This performance extends to challenging scenarios, such as scenes featuring intricate geometry, expansive landscapes, and auto-exposed footage.
arXiv Detail & Related papers (2024-01-11T16:06:36Z)
- O$^2$-Recon: Completing 3D Reconstruction of Occluded Objects in the Scene with a Pre-trained 2D Diffusion Model [28.372289119872764]
Occlusion is a common issue in 3D reconstruction from RGB-D videos, often blocking the complete reconstruction of objects.
We propose a novel framework, empowered by a 2D diffusion-based in-painting model, to reconstruct complete surfaces for the hidden parts of objects.
arXiv Detail & Related papers (2023-08-18T14:38:31Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependency appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- Deep Rectangling for Image Stitching: A Learning Baseline [57.76737888499145]
We build the first image stitching rectangling dataset with a large diversity in irregular boundaries and scenes.
Experiments demonstrate our superiority over traditional methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-03-08T03:34:10Z)
- Fully Context-Aware Image Inpainting with a Learned Semantic Pyramid [102.24539566851809]
Restoring reasonable and realistic content for arbitrary missing regions in images is an important yet challenging task.
Recent image inpainting models have made significant progress in generating vivid visual details, but they can still lead to texture blurring or structural distortions.
We propose the Semantic Pyramid Network (SPN) motivated by the idea that learning multi-scale semantic priors can greatly benefit the recovery of locally missing content in images.
arXiv Detail & Related papers (2021-12-08T04:33:33Z)
- Learning Deformable Tetrahedral Meshes for 3D Reconstruction [78.0514377738632]
3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics.
Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations.
We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem.
arXiv Detail & Related papers (2020-11-03T02:57:01Z)