Flexible Techniques for Differentiable Rendering with 3D Gaussians
- URL: http://arxiv.org/abs/2308.14737v1
- Date: Mon, 28 Aug 2023 17:38:31 GMT
- Title: Flexible Techniques for Differentiable Rendering with 3D Gaussians
- Authors: Leonid Keselman, Martial Hebert
- Abstract summary: Neural Radiance Fields demonstrated that photorealistic novel view synthesis is within reach, but was gated by performance requirements for fast reconstruction of real scenes and objects.
We develop extensions to renderers built on alternative shape representations, in particular 3D Gaussians, such as integrating differentiable optical flow, exporting watertight meshes, and rendering per-ray normals.
These reconstructions are quick, robust, and easily performed on GPU or CPU.
- Score: 29.602516169951556
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fast, reliable shape reconstruction is an essential ingredient in many
computer vision applications. Neural Radiance Fields demonstrated that
photorealistic novel view synthesis is within reach, but was gated by
performance requirements for fast reconstruction of real scenes and objects.
Several recent approaches have built on alternative shape representations, in
particular, 3D Gaussians. We develop extensions to these renderers, such as
integrating differentiable optical flow, exporting watertight meshes and
rendering per-ray normals. Additionally, we show how two of the recent methods
are interoperable with each other. These reconstructions are quick, robust, and
easily performed on GPU or CPU. For code and visual examples, see
https://leonidk.github.io/fmb-plus
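To make the abstract's claims concrete, below is a minimal sketch of differentiable per-ray rendering of a 3D Gaussian mixture in JAX. It is an illustrative toy, not the authors' fmb-plus implementation: the closed-form per-Gaussian peak response along each ray, the order-free softmax depth blend, and the density-gradient normals are simplifying assumptions, and all function names and parameters here are hypothetical.

```python
# Toy differentiable renderer for a 3D Gaussian mixture (illustrative only,
# not the fmb-plus code). One ray: peak responses -> blended depth, soft
# silhouette alpha, and a per-ray normal from the density gradient.
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def ray_response(o, d, mean, prec, log_w):
    # Peak log-density of one anisotropic Gaussian along the ray o + t * d;
    # t_star is the argmax of the quadratic exponent in t.
    t_star = (d @ prec @ (mean - o)) / (d @ prec @ d)
    r = o + t_star * d - mean
    return t_star, log_w - 0.5 * (r @ prec @ r)

def render_ray(o, d, means, precs, log_ws):
    # Blend per-Gaussian peaks into one depth and a soft silhouette alpha.
    t, log_rho = jax.vmap(ray_response, in_axes=(None, None, 0, 0, 0))(
        o, d, means, precs, log_ws)
    w = jax.nn.softmax(log_rho)          # order-free blend (ignores occlusion)
    depth = jnp.sum(w * t)
    alpha = 1.0 - jnp.exp(-jnp.exp(logsumexp(log_rho)))
    return depth, alpha

def ray_normal(o, d, depth, means, precs, log_ws):
    # Outward normal from the gradient of the mixture log-density at the hit
    # point; density decreases outward, hence the sign flip.
    log_density = lambda x: logsumexp(jax.vmap(
        lambda m, P, lw: lw - 0.5 * ((x - m) @ P @ (x - m)))(means, precs, log_ws))
    g = jax.grad(log_density)(o + depth * d)
    return -g / jnp.linalg.norm(g)

# Toy scene: two isotropic Gaussians in front of a pinhole at the origin.
o, d = jnp.zeros(3), jnp.array([0.0, 0.0, 1.0])
means = jnp.array([[0.0, 0.0, 2.0], [0.1, 0.0, 3.0]])
precs = jnp.stack([8.0 * jnp.eye(3)] * 2)   # inverse covariances
log_ws = jnp.zeros(2)

depth, alpha = render_ray(o, d, means, precs, log_ws)
normal = ray_normal(o, d, depth, means, precs, log_ws)

# The whole pipeline is differentiable: gradient of a depth loss w.r.t. means.
depth_loss = lambda m: (render_ray(o, d, m, precs, log_ws)[0] - 2.5) ** 2
grads = jax.grad(depth_loss)(means)
print(depth, alpha, normal, grads)
```

Because every step is a smooth jnp operation, jax.grad supplies the backward pass for free; this is the sense in which such renderers are differentiable, and what makes extensions like optical-flow losses straightforward to attach.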
Related papers
- 3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes [87.01284850604495]
We introduce 3D Convex Splatting (3DCS), which leverages 3D smooth convexes as primitives for modeling geometrically-meaningful radiance fields from multiview images.
3DCS achieves superior performance over 3DGS on benchmarks such as Mip-NeRF360, Tanks and Temples, and Deep Blending.
Our results highlight the potential of 3D Convex Splatting to become the new standard for high-quality scene reconstruction.
arXiv Detail & Related papers (2024-11-22T14:31:39Z) - LumiGauss: High-Fidelity Outdoor Relighting with 2D Gaussian Splatting [15.11759492990967]
We introduce LumiGauss, a technique that tackles 3D reconstruction of scenes and environmental lighting through 2D Gaussian Splatting.
Our approach yields high-quality scene reconstructions and enables realistic lighting synthesis under novel environment maps.
arXiv Detail & Related papers (2024-08-06T23:41:57Z) - 3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes [50.36933474990516]
This work considers ray tracing the particles, building a bounding volume hierarchy and casting a ray for each pixel using high-performance ray tracing hardware.
To efficiently handle large numbers of semi-transparent particles, we describe a specialized algorithm which encapsulates particles with bounding meshes.
Experiments demonstrate the speed and accuracy of our approach, as well as several applications in computer graphics and vision (a minimal sketch of the underlying compositing step appears after this list).
arXiv Detail & Related papers (2024-07-09T17:59:30Z) - LaRa: Efficient Large-Baseline Radiance Fields [32.86296116177701]
We propose a method that unifies local and global reasoning in transformer layers, resulting in improved quality and faster convergence.
Our model represents scenes as Gaussian Volumes and combines this with an image encoder and Group Attention Layers for efficient feed-forward reconstruction.
arXiv Detail & Related papers (2024-07-05T17:59:58Z) - Denoising Diffusion via Image-Based Rendering [54.20828696348574]
We introduce the first diffusion model able to perform fast, detailed reconstruction and generation of real-world 3D scenes.
First, we introduce a new neural scene representation, IB-planes, that can efficiently and accurately represent large 3D scenes.
Second, we propose a denoising-diffusion framework to learn a prior over this novel 3D scene representation, using only 2D images.
arXiv Detail & Related papers (2024-02-05T19:00:45Z) - EvaSurf: Efficient View-Aware Implicit Textured Surface Reconstruction on Mobile Devices [53.28220984270622]
We present an implicit textured surface reconstruction method on mobile devices.
Our method can reconstruct high-quality appearance and accurate mesh on both synthetic and real-world datasets.
Our method can be trained in just 1-2 hours using a single GPU and run on mobile devices at over 40 FPS (frames per second).
arXiv Detail & Related papers (2023-11-16T11:30:56Z) - FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z) - HQ3DAvatar: High Quality Controllable 3D Head Avatar [65.70885416855782]
This paper presents a novel approach to building highly photorealistic digital head avatars.
Our method learns a canonical space via an implicit function parameterized by a neural network.
At test time, our method is driven by a monocular RGB video.
arXiv Detail & Related papers (2023-03-25T13:56:33Z) - Approximate Differentiable Rendering with Algebraic Surfaces [24.7500811470085]
Fuzzy Metaballs is an approximate differentiable renderer for a compact, interpretable representation.
Our approximate renderer focuses on rendering shapes via depth maps and silhouettes.
Compared to mesh-based differentiable renderers, our method has forward passes that are 5x faster and backward passes that are 30x faster.
arXiv Detail & Related papers (2022-07-21T16:59:54Z)
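As a companion to the 3D Gaussian Ray Tracing entry above, here is a minimal sketch of the front-to-back compositing such a tracer performs over depth-sorted semi-transparent hits along one ray. The hit data are hypothetical; a real system gathers hits via a BVH on ray tracing hardware and terminates early once the remaining transmittance is negligible.

```python
# Toy front-to-back alpha compositing of per-ray particle hits (hypothetical
# data; illustrative only, not the paper's implementation).
import jax.numpy as jnp

def composite(ts, alphas, colors):
    # Sort hits by depth so the nearest particle is composited first.
    order = jnp.argsort(ts)
    a, c = alphas[order], colors[order]
    # Transmittance remaining before each hit: product of (1 - alpha) so far.
    T = jnp.concatenate([jnp.ones(1), jnp.cumprod(1.0 - a)[:-1]])
    # Each hit contributes its color weighted by transmittance * alpha.
    return jnp.sum((T * a)[:, None] * c, axis=0)

# Three hypothetical semi-transparent hits along one ray (unsorted).
ts     = jnp.array([2.0, 1.0, 3.0])
alphas = jnp.array([0.5, 0.3, 0.8])
colors = jnp.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
print(composite(ts, alphas, colors))   # blended RGB radiance for the ray
```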