GauFRe: Gaussian Deformation Fields for Real-time Dynamic Novel View Synthesis
- URL: http://arxiv.org/abs/2312.11458v3
- Date: Wed, 15 Jan 2025 22:17:24 GMT
- Title: GauFRe: Gaussian Deformation Fields for Real-time Dynamic Novel View Synthesis
- Authors: Yiqing Liang, Numair Khan, Zhengqin Li, Thu Nguyen-Phuoc, Douglas Lanman, James Tompkin, Lei Xiao
- Abstract summary: Implicit deformable representations commonly model motion with a canonical space and time-dependent deformation field. GauFRe uses a forward-warping deformation to explicitly model non-rigid transformations of scene geometry. Experiments show our method achieves competitive results and higher efficiency than previous state-of-the-art NeRF and Gaussian-based methods.
- Score: 16.733855781461802
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose a method that achieves state-of-the-art rendering quality and efficiency on monocular dynamic scene reconstruction using deformable 3D Gaussians. Implicit deformable representations commonly model motion with a canonical space and time-dependent backward-warping deformation field. Our method, GauFRe, uses a forward-warping deformation to explicitly model non-rigid transformations of scene geometry. Specifically, we propose a template set of 3D Gaussians residing in a canonical space, and a time-dependent forward-warping deformation field to model dynamic objects. Additionally, we tailor a 3D Gaussian-specific static component supported by an inductive bias-aware initialization approach which allows the deformation field to focus on moving scene regions, improving the rendering of complex real-world motion. The differentiable pipeline is optimized end-to-end with a self-supervised rendering loss. Experiments show our method achieves competitive results and higher efficiency than both previous state-of-the-art NeRF and Gaussian-based methods. For real-world scenes, GauFRe can train in ~20 mins and offer 96 FPS real-time rendering on an RTX 3090 GPU. Project website: https://lynl7130.github.io/gaufre/index.html
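To make the canonical-plus-deformation design concrete, here is a minimal PyTorch sketch of a time-conditioned forward-warping field over canonical Gaussian centers. It illustrates the general idea only, not the authors' implementation; the network width, the offset parameterization, and all names are assumptions.

```python
# Minimal sketch (not the authors' code) of a forward-warping deformation
# field over canonical 3D Gaussians. Names and sizes are illustrative.
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """MLP mapping a canonical Gaussian center plus a timestamp to offsets."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 4 + 3),  # position, rotation (quaternion), scale offsets
        )

    def forward(self, mu: torch.Tensor, t: torch.Tensor):
        # mu: (N, 3) canonical centers; t: scalar timestamp in [0, 1]
        t_col = t.reshape(1, 1).expand(mu.shape[0], 1)
        out = self.mlp(torch.cat([mu, t_col], dim=-1))
        d_mu, d_rot, d_scale = out.split([3, 4, 3], dim=-1)
        return d_mu, d_rot, d_scale

mu_canonical = torch.randn(1000, 3)          # template Gaussian centers
field = DeformationField()
d_mu, _, _ = field(mu_canonical, torch.tensor(0.5))
mu_t = mu_canonical + d_mu                   # forward-warped centers at time t
# A separate static set of Gaussians would bypass the field and render unchanged,
# letting the deformation network focus on moving regions.
```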
Related papers
- EVolSplat: Efficient Volume-based Gaussian Splatting for Urban View Synthesis [61.1662426227688]
Existing NeRF and 3DGS-based methods show promising results in achieving photorealistic renderings but require slow, per-scene optimization.
We introduce EVolSplat, an efficient 3D Gaussian Splatting model for urban scenes that works in a feed-forward manner.
arXiv Detail & Related papers (2025-03-26T02:47:27Z)
- GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views [67.34073368933814]
We propose a generalizable Gaussian Splatting approach for high-resolution image rendering under a sparse-view camera setting.
We train our Gaussian parameter regression module on human-only data or human-scene data, jointly with a depth estimation module to lift 2D parameter maps to 3D space.
Experiments on several datasets demonstrate that our method outperforms state-of-the-art methods while achieving faster rendering speeds.
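The "lifting" step can be pictured as a standard depth-based unprojection. The sketch below shows how per-pixel parameter maps gain 3D positions from a depth map and camera intrinsics; the variable names and map layout are assumptions for illustration, not the paper's code.

```python
# Hedged sketch: unproject per-pixel Gaussian parameters to 3D using depth.
import torch

def unproject(depth: torch.Tensor, K: torch.Tensor) -> torch.Tensor:
    """depth: (H, W) metric depth; K: (3, 3) intrinsics. Returns (H, W, 3) points."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    x = (u.float() - K[0, 2]) * depth / K[0, 0]
    y = (v.float() - K[1, 2]) * depth / K[1, 1]
    return torch.stack([x, y, depth], dim=-1)  # camera-space Gaussian centers

depth = torch.rand(256, 256) * 4.0 + 1.0
K = torch.tensor([[300.0, 0.0, 128.0], [0.0, 300.0, 128.0], [0.0, 0.0, 1.0]])
centers = unproject(depth, K)  # one 3D Gaussian center per pixel
```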
arXiv Detail & Related papers (2024-11-18T08:18:44Z)
- RayGauss: Volumetric Gaussian-Based Ray Casting for Photorealistic Novel View Synthesis [3.4341938551046227]
Differentiable rendering methods have made significant progress in novel view synthesis.
We provide a consistent formulation of the emitted radiance c and density σ for differentiable ray casting of irregularly distributed Gaussians.
We achieve superior quality rendering compared to the state-of-the-art while maintaining reasonable training times and achieving inference speeds of 25 FPS on the Blender dataset.
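As a rough illustration of what such a formulation involves, the sketch below sums the contributions of anisotropic Gaussians at ray-sample points to obtain a density and a density-weighted radiance. The exact weighting in the paper may differ; treat this as an assumption-laden outline.

```python
# Illustrative sketch (not the paper's exact formulation): evaluate density
# sigma and radiance c at ray samples from irregularly placed Gaussians.
import torch

def mixture_sigma_c(x, mu, cov_inv, sigma_i, c_i):
    """x: (S,3) samples; mu: (N,3); cov_inv: (N,3,3); sigma_i: (N,); c_i: (N,3)."""
    d = x[:, None, :] - mu[None, :, :]                     # (S, N, 3) offsets
    maha = torch.einsum("snj,njk,snk->sn", d, cov_inv, d)  # squared Mahalanobis distance
    w = sigma_i[None, :] * torch.exp(-0.5 * maha)          # (S, N) per-Gaussian density
    sigma = w.sum(dim=1)                                   # summed density per sample
    c = (w[..., None] * c_i[None]).sum(dim=1) / sigma[:, None].clamp_min(1e-8)
    return sigma, c

S, N = 64, 500
sigma, c = mixture_sigma_c(torch.randn(S, 3), torch.randn(N, 3),
                           torch.eye(3).expand(N, 3, 3),
                           torch.rand(N), torch.rand(N, 3))
```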
arXiv Detail & Related papers (2024-08-06T10:59:58Z)
- Gaussian Splatting LK [0.11249583407496218]
This paper investigates the potential of regularizing the native warp field within the dynamic Gaussian Splatting framework.
We show that we can exploit knowledge innate to the forward warp field network to derive an analytical velocity field.
This derived Lucas-Kanade style analytical regularization enables our method to achieve superior performance in reconstructing highly dynamic scenes.
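One way to realize such an analytical velocity field is to differentiate the warped position with respect to time via automatic differentiation. The PyTorch sketch below illustrates the idea under assumed network and variable names; it is not the paper's implementation.

```python
# Hedged sketch: derive a velocity field from a forward warp network by
# differentiating warped positions with respect to time.
import torch
import torch.nn as nn

warp = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 3))  # (x, t) -> dx

def velocity(x: torch.Tensor, t_scalar: float) -> torch.Tensor:
    """Analytical velocity v_i = d pos_i / dt at each point, via autograd."""
    t = torch.full((x.shape[0], 1), t_scalar, requires_grad=True)
    pos = x + warp(torch.cat([x, t], dim=-1))              # warped positions (N, 3)
    v = torch.stack([
        torch.autograd.grad(pos[:, k].sum(), t, retain_graph=True)[0].squeeze(-1)
        for k in range(3)                                  # one pass per coordinate
    ], dim=-1)
    return v                                               # (N, 3) velocities

x = torch.randn(100, 3)
v = velocity(x, 0.5)  # usable as a flow-style regularization target
```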
arXiv Detail & Related papers (2024-07-16T01:50:43Z)
- Dynamic Gaussian Marbles for Novel View Synthesis of Casual Monocular Videos [58.22272760132996]
We show that existing 4D Gaussian methods dramatically fail in this setup because the monocular setting is underconstrained.
We propose Dynamic Gaussian Marbles, which consist of three core modifications that target the difficulties of the monocular setting.
We evaluate on the Nvidia Dynamic Scenes dataset and the DyCheck iPhone dataset, and show that Gaussian Marbles significantly outperforms other Gaussian baselines in quality.
arXiv Detail & Related papers (2024-06-26T19:37:07Z)
- Dynamic 3D Gaussian Fields for Urban Areas [60.64840836584623]
We present an efficient neural 3D scene representation for novel-view synthesis (NVS) in large-scale, dynamic urban areas.
We propose 4DGF, a neural scene representation that scales to large-scale dynamic urban areas.
arXiv Detail & Related papers (2024-06-05T12:07:39Z)
- Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting [55.71424195454963]
Spec-Gaussian is an approach that utilizes an anisotropic spherical Gaussian appearance field instead of spherical harmonics.
Our experimental results demonstrate that our method surpasses existing approaches in terms of rendering quality.
This improvement extends the applicability of 3D GS to handle intricate scenarios with specular and anisotropic surfaces.
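For intuition, the sketch below evaluates a single anisotropic spherical Gaussian (ASG) lobe in the commonly used parameterization (an orthonormal lobe frame with two bandwidths). Whether this matches the paper's exact formulation is an assumption.

```python
# Sketch of one anisotropic spherical Gaussian lobe, the kind of
# view-dependent basis substituted for spherical harmonics.
import torch

def asg(v, x_axis, y_axis, z_axis, lam, mu, amplitude):
    """v: (..., 3) unit view directions; axes: (3,) orthonormal lobe frame."""
    support = torch.clamp((v * z_axis).sum(-1), min=0.0)     # lobe faces +z only
    falloff = torch.exp(-lam * (v * x_axis).sum(-1) ** 2
                        - mu * (v * y_axis).sum(-1) ** 2)    # anisotropic decay
    return amplitude * support * falloff

v = torch.nn.functional.normalize(torch.randn(8, 3), dim=-1)
val = asg(v, torch.tensor([1.0, 0.0, 0.0]), torch.tensor([0.0, 1.0, 0.0]),
          torch.tensor([0.0, 0.0, 1.0]), lam=5.0, mu=1.0, amplitude=1.0)
```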
arXiv Detail & Related papers (2024-02-24T17:22:15Z)
- Mesh-based Gaussian Splatting for Real-time Large-scale Deformation [58.18290393082119]
It is challenging for users to directly deform or manipulate implicit representations with large deformations in real time.
We develop a novel GS-based method that enables interactive deformation.
Our approach achieves high-quality reconstruction and effective deformation, while maintaining the promising rendering results at a high frame rate.
arXiv Detail & Related papers (2024-02-07T12:36:54Z)
- GaussianStyle: Gaussian Head Avatar via StyleGAN [64.85782838199427]
We propose a novel framework that integrates the volumetric strengths of 3DGS with the powerful implicit representation of StyleGAN.
We show that our method achieves state-of-the-art performance in reenactment, novel view synthesis, and animation.
arXiv Detail & Related papers (2024-02-01T18:14:42Z)
- SWinGS: Sliding Windows for Dynamic 3D Gaussian Splatting [7.553079256251747]
We extend 3D Gaussian Splatting to reconstruct dynamic scenes.
We produce high-quality renderings of general dynamic scenes with competitive quantitative performance.
Our method can be viewed in real-time in our dynamic interactive viewer.
arXiv Detail & Related papers (2023-12-20T03:54:03Z)
- Gaussian-Flow: 4D Reconstruction with Dynamic 3D Gaussian Particle [9.082693946898733]
We introduce a novel point-based approach for fast dynamic scene reconstruction and real-time rendering from both multi-view and monocular videos.
In contrast to the prevalent NeRF-based approaches hampered by slow training and rendering speeds, our approach harnesses recent advancements in point-based 3D Gaussian Splatting (3DGS)
Our proposed approach showcases a substantial efficiency improvement, achieving a 5× faster training speed compared to per-frame 3DGS modeling.
arXiv Detail & Related papers (2023-12-06T11:25:52Z)
- GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting [51.96353586773191]
We introduce GS-SLAM, the first method to utilize a 3D Gaussian representation in a Simultaneous Localization and Mapping system.
Our method utilizes a real-time differentiable splatting rendering pipeline that offers significant speedup to map optimization and RGB-D rendering.
Our method achieves competitive performance compared with existing state-of-the-art real-time methods on the Replica and TUM-RGBD datasets.
arXiv Detail & Related papers (2023-11-20T12:08:23Z)
- Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction [29.83056271799794]
Implicit neural representation has paved the way for new approaches to dynamic scene reconstruction and rendering.
We propose a deformable 3D Gaussian Splatting method that reconstructs scenes using 3D Gaussians and learns them in canonical space.
Through a differential Gaussian rasterizer, the deformable 3D Gaussians achieve not only higher rendering quality but also real-time rendering speed.
arXiv Detail & Related papers (2023-09-22T16:04:02Z)
- Neural Deformable Voxel Grid for Fast Optimization of Dynamic View Synthesis [63.25919018001152]
We propose a fast deformable radiance field method to handle dynamic scenes.
Our method achieves comparable performance to D-NeRF using only 20 minutes for training.
arXiv Detail & Related papers (2022-06-15T17:49:08Z)