TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering
- URL: http://arxiv.org/abs/2401.06003v2
- Date: Tue, 26 Mar 2024 16:30:20 GMT
- Title: TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering
- Authors: Linus Franke, Darius Rückert, Laura Fink, Marc Stamminger
- Abstract summary: We present TRIPS (Trilinear Point Splatting), an approach that combines ideas from both Gaussian Splatting and ADOP.
Our evaluation demonstrates that TRIPS surpasses existing state-of-the-art methods in terms of rendering quality.
This performance extends to challenging scenarios, such as scenes featuring intricate geometry, expansive landscapes, and auto-exposed footage.
- Score: 6.142272540492937
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point-based radiance field rendering has demonstrated impressive results for novel view synthesis, offering a compelling blend of rendering quality and computational efficiency. However, even the latest approaches in this domain are not without shortcomings. 3D Gaussian Splatting [Kerbl and Kopanas et al. 2023] struggles when tasked with rendering highly detailed scenes, due to blurring and cloudy artifacts. On the other hand, ADOP [Rückert et al. 2022] can produce crisper images, but its neural reconstruction network reduces performance, it suffers from temporal instability, and it cannot effectively fill large gaps in the point cloud. In this paper, we present TRIPS (Trilinear Point Splatting), an approach that combines ideas from both Gaussian Splatting and ADOP. The fundamental concept behind our novel technique involves rasterizing points into a screen-space image pyramid, with the selection of the pyramid layer determined by the projected point size. This approach allows rendering arbitrarily large points using a single trilinear write. A lightweight neural network is then used to reconstruct a hole-free image including detail beyond splat resolution. Importantly, our render pipeline is entirely differentiable, allowing for automatic optimization of both point sizes and positions. Our evaluation demonstrates that TRIPS surpasses existing state-of-the-art methods in terms of rendering quality while maintaining a real-time frame rate of 60 frames per second on readily available hardware. This performance extends to challenging scenarios, such as scenes featuring intricate geometry, expansive landscapes, and auto-exposed footage. The project page is located at: https://lfranke.github.io/trips/
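As a rough illustration of the core idea described in the abstract, the sketch below shows how a single point could be splatted into a screen-space image pyramid with one trilinear write, with the pair of pyramid layers chosen from the projected point size. This is a minimal NumPy sketch based only on the abstract: the function name, the layer-selection formula, and all variable names are assumptions, not the authors' implementation.

```python
# Minimal, assumption-laden sketch: splat one point into a screen-space image
# pyramid with a single trilinear write, as described in the TRIPS abstract.
import numpy as np

def trilinear_splat(pyramid, x, y, point_size, feature):
    """pyramid: list of (H_l, W_l, C) arrays, layer 0 finest, each layer half
    the resolution of the previous one; needs at least two layers.
    (x, y): projected point position in layer-0 pixel coordinates.
    point_size: projected point diameter in layer-0 pixels.
    feature: (C,) point color/feature vector to accumulate."""
    n = len(pyramid)
    # A point covering roughly 2^l pixels is written between layers l and l+1.
    l = float(np.clip(np.log2(max(point_size, 1.0)), 0.0, n - 1))
    l0 = min(int(np.floor(l)), n - 2)   # finer of the two target layers
    wl = l - l0                         # blend weight toward the coarser layer

    for layer, layer_w in ((l0, 1.0 - wl), (l0 + 1, wl)):
        if layer_w == 0.0:
            continue
        img = pyramid[layer]
        scale = 2.0 ** layer            # layer is downscaled by 2^layer
        px, py = x / scale, y / scale
        ix, iy = int(np.floor(px)), int(np.floor(py))
        fx, fy = px - ix, py - iy
        # Bilinear weights over the 2x2 neighbourhood; combined with the
        # cross-layer blend this is one trilinear write (8 weighted updates).
        for dy, wy in ((0, 1.0 - fy), (1, fy)):
            for dx, wx in ((0, 1.0 - fx), (1, fx)):
                u, v = ix + dx, iy + dy
                if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                    img[v, u] += layer_w * wy * wx * feature

# Toy usage: a 4-layer RGBA pyramid and one splatted point.
pyramid = [np.zeros((256 >> l, 256 >> l, 4)) for l in range(4)]
trilinear_splat(pyramid, x=120.3, y=64.7, point_size=3.0, feature=np.ones(4))
```

In the actual pipeline, such splat weights are smooth functions of point position and size, which is what makes the rasterization differentiable, and a lightweight neural network then fuses the pyramid layers into a hole-free full-resolution image.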
Related papers
- EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis [72.53316783628803]
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering.
Unlike the recent rasterization-based approach of 3D Gaussian Splatting (3DGS), our primitive-based representation allows for exact volume rendering.
We show that our method is more accurate, with fewer blending issues, than 3DGS and follow-up work on view-consistent rendering.
arXiv Detail & Related papers (2024-10-02T17:59:09Z) - Low Latency Point Cloud Rendering with Learned Splatting [24.553459204476432]
High-quality rendering of point clouds is challenging because of the point sparsity and irregularity.
Existing rendering solutions lack in either quality or speed.
We present a framework that unlocks interactive, free-viewing and high-fidelity point cloud rendering.
arXiv Detail & Related papers (2024-09-24T23:26:07Z) - PFGS: High Fidelity Point Cloud Rendering via Feature Splatting [5.866747029417274]
We propose a novel framework to render high-quality images from sparse points.
This method is a first attempt to bridge 3D Gaussian Splatting and point cloud rendering.
Experiments on different benchmarks show the superiority of our method in terms of rendering quality and the necessity of our main components.
arXiv Detail & Related papers (2024-07-04T11:42:54Z) - Volumetric Rendering with Baked Quadrature Fields [34.280932843055446]
We propose a novel representation for non-opaque scenes that enables fast inference by utilizing textured polygons.
Our method integrates easily with existing graphics frameworks, enabling rendering speeds of over 100 frames per second for a $1920\times1080$ image.
arXiv Detail & Related papers (2023-12-02T20:45:18Z) - PERF: Panoramic Neural Radiance Field from a Single Panorama [109.31072618058043]
PERF is a novel view synthesis framework that trains a panoramic neural radiance field from a single panorama.
We propose a novel collaborative RGBD inpainting method and a progressive inpainting-and-erasing method to lift a 360-degree 2D scene to a 3D scene.
Our PERF can be widely used for real-world applications, such as panorama-to-3D, text-to-3D, and 3D scene stylization.
arXiv Detail & Related papers (2023-10-25T17:59:01Z) - TriVol: Point Cloud Rendering via Triple Volumes [57.305748806545026]
We present a dense yet lightweight 3D representation, named TriVol, that can be combined with NeRF to render photo-realistic images from point clouds.
Our framework has excellent generalization ability to render a category of scenes/objects without fine-tuning.
arXiv Detail & Related papers (2023-03-29T06:34:12Z) - Point2Pix: Photo-Realistic Point Cloud Rendering via Neural Radiance Fields [63.21420081888606]
Recent radiance fields and their extensions synthesize realistic images from 2D inputs.
We present Point2Pix, a novel point renderer that links sparse 3D point clouds with dense 2D image pixels.
arXiv Detail & Related papers (2023-03-29T06:26:55Z) - View Synthesis with Sculpted Neural Points [64.40344086212279]
Implicit neural representations have achieved impressive visual quality but have drawbacks in computational efficiency.
We propose a new approach that performs view synthesis using point clouds.
It is the first point-based method to achieve better visual quality than NeRF while being more than 100x faster in rendering speed.
arXiv Detail & Related papers (2022-05-12T03:54:35Z) - Z2P: Instant Rendering of Point Clouds [104.1186026323896]
We present a technique for rendering point clouds using a neural network.
Existing point rendering techniques either use splatting, or first reconstruct a surface mesh that can then be rendered.
arXiv Detail & Related papers (2021-05-30T13:58:24Z) - DeRF: Decomposed Radiance Fields [30.784481193893345]
In this paper, we propose a technique based on spatial decomposition capable of mitigating the high computational cost of rendering neural radiance fields.
We show that a Voronoi spatial decomposition is preferable for this purpose, as it is provably compatible with the Painter's Algorithm.
Our experiments show that for real-world scenes, our method provides up to 3x more efficient inference than NeRF.
arXiv Detail & Related papers (2020-11-25T02:47:16Z)
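To make the Painter's-Algorithm compatibility mentioned in the DeRF entry above concrete, here is a minimal sketch of compositing per-cell renderings back to front. It assumes each Voronoi cell is rendered to a premultiplied-alpha RGBA layer by its own small radiance field; `render_cell`, the sorting criterion, and all other names are illustrative placeholders, not DeRF's actual API.

```python
# Illustrative sketch only: back-to-front ("Painter's Algorithm") compositing
# of per-Voronoi-cell renderings, each assumed to be a premultiplied RGBA layer.
import numpy as np

def composite_voronoi_cells(cell_sites, render_cell, camera_pos):
    """cell_sites: (K, 3) Voronoi site positions.
    render_cell(k) -> (H, W, 4) premultiplied RGBA rendering of cell k.
    camera_pos: (3,) camera position.
    Voronoi cells can be ordered by the distance of their sites to the
    viewpoint, which gives a valid back-to-front visibility order."""
    order = np.argsort(-np.linalg.norm(cell_sites - camera_pos, axis=1))
    out = None
    for k in order:                       # farthest cell first
        rgba = render_cell(int(k))
        if out is None:
            out = np.zeros_like(rgba)
        alpha = rgba[..., 3:4]
        out = rgba + (1.0 - alpha) * out  # paint the nearer cell over the rest
    return out
```

Because each cell only queries its own small model, the scene can be evaluated piecewise and still composited correctly, which is the property the summary refers to.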