SplatArmor: Articulated Gaussian splatting for animatable humans from
monocular RGB videos
- URL: http://arxiv.org/abs/2311.10812v1
- Date: Fri, 17 Nov 2023 18:47:07 GMT
- Title: SplatArmor: Articulated Gaussian splatting for animatable humans from
monocular RGB videos
- Authors: Rohit Jena, Ganesh Subramanian Iyer, Siddharth Choudhary, Brandon
Smith, Pratik Chaudhari, James Gee
- Abstract summary: We propose SplatArmor, a novel approach for recovering detailed and animatable human models by `armoring' a parameterized body model with 3D Gaussians.
Our approach represents the human as a set of 3D Gaussians within a canonical space, whose articulation is defined by extending the skinning of the underlying SMPL geometry.
We show compelling results on the ZJU MoCap and People Snapshot datasets, which underscore the effectiveness of our method for controllable human synthesis.
- Score: 15.74530749823217
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose SplatArmor, a novel approach for recovering detailed and
animatable human models by `armoring' a parameterized body model with 3D
Gaussians. Our approach represents the human as a set of 3D Gaussians within a
canonical space, whose articulation is defined by extending the skinning of the
underlying SMPL geometry to arbitrary locations in the canonical space. To
account for pose-dependent effects, we introduce an SE(3) field, which allows us
to capture both the location and anisotropy of the Gaussians. Furthermore, we
propose the use of a neural color field to provide color regularization and 3D
supervision for the precise positioning of these Gaussians. We show that
Gaussian splatting provides an interesting alternative to neural rendering-based
methods by leveraging a rasterization primitive without facing any of the
non-differentiability and optimization challenges typically encountered in such
approaches. The rasterization paradigm allows us to leverage forward skinning,
and does not suffer from the ambiguities associated with inverse skinning and
warping. We show compelling results on the ZJU MoCap and People Snapshot
datasets, which underscore the effectiveness of our method for controllable
human synthesis.
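The articulation described in the abstract (skinning weights extended from the underlying SMPL geometry to arbitrary canonical locations, then applied as forward skinning) maps onto a few tensor operations. The following is a minimal sketch under stated assumptions, not the authors' implementation: the names and array shapes are invented, the weight extension is reduced to a nearest-vertex copy, and the SE(3) pose-correction field and neural color field are omitted.

    # Minimal sketch (not the authors' code) of forward linear-blend skinning for
    # 3D Gaussians attached to an SMPL-style skeleton. Shapes, names, and the
    # nearest-vertex weight extension are illustrative assumptions.
    import numpy as np

    def forward_skin_gaussians(mu_can, cov_can, weights, bone_transforms):
        """Pose canonical 3D Gaussians with per-joint rigid transforms.

        mu_can:          (N, 3) canonical Gaussian centers
        cov_can:         (N, 3, 3) canonical covariance matrices
        weights:         (N, J) skinning weights, e.g. copied from the nearest
                         SMPL vertex so arbitrary canonical points are skinned
        bone_transforms: (J, 4, 4) per-joint SE(3) transforms for the target pose
        """
        # Blend the joint transforms into one 4x4 transform per Gaussian (LBS).
        T = np.einsum('nj,jab->nab', weights, bone_transforms)      # (N, 4, 4)

        # Transform the centers in homogeneous coordinates.
        ones = np.ones((mu_can.shape[0], 1))
        mu_posed = np.einsum('nab,nb->na',
                             T, np.concatenate([mu_can, ones], axis=1))[:, :3]

        # Rotate the covariances with the linear part of the blended transform;
        # blended LBS transforms are only approximately rigid.
        R = T[:, :3, :3]
        cov_posed = np.einsum('nab,nbc,ndc->nad', R, cov_can, R)    # R Sigma R^T
        return mu_posed, cov_posed

Because the skinning runs forward from canonical to posed space, every Gaussian has a well-defined destination, which is the point the abstract makes about avoiding the ambiguities of inverse skinning and warping.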
Related papers
- G2SDF: Surface Reconstruction from Explicit Gaussians with Implicit SDFs [84.07233691641193]
We introduce G2SDF, a novel approach that integrates a neural implicit Signed Distance Field into the Gaussian Splatting framework.
G2SDF achieves higher quality than prior works while maintaining the efficiency of 3DGS.
arXiv Detail & Related papers (2024-11-25T20:07:07Z)
- GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views [67.34073368933814]
We propose a generalizable Gaussian Splatting approach for high-resolution image rendering under a sparse-view camera setting.
We train our Gaussian parameter regression module on human-only data or human-scene data, jointly with a depth estimation module to lift 2D parameter maps to 3D space.
Experiments on several datasets demonstrate that our method outperforms state-of-the-art methods while achieving a faster rendering speed.
arXiv Detail & Related papers (2024-11-18T08:18:44Z)
- HFGaussian: Learning Generalizable Gaussian Human with Integrated Human Features [23.321087432786605]
We present a novel approach called HFGaussian that can estimate novel views and human features, such as the 3D skeleton, 3D key points, and dense pose, from sparse input images in real time at 25 FPS.
We thoroughly evaluate our HFGaussian method against the latest state-of-the-art techniques in human Gaussian splatting and pose estimation, demonstrating its real-time, state-of-the-art performance.
arXiv Detail & Related papers (2024-11-05T13:31:04Z)
- No Pose, No Problem: Surprisingly Simple 3D Gaussian Splats from Sparse Unposed Images [100.80376573969045]
NoPoSplat is a feed-forward model capable of reconstructing 3D scenes parameterized by 3D Gaussians from multi-view images.
Our model achieves real-time 3D Gaussian reconstruction during inference.
This work makes significant advances in pose-free generalizable 3D reconstruction and demonstrates its applicability to real-world scenarios.
arXiv Detail & Related papers (2024-10-31T17:58:22Z)
- EVA-Gaussian: 3D Gaussian-based Real-time Human Novel View Synthesis under Diverse Camera Settings [11.248908608011941]
EVA-Gaussian is a real-time pipeline for 3D human novel view synthesis across diverse camera settings.
We introduce an Efficient cross-View Attention (EVA) module to accurately estimate the position of each 3D Gaussian from the source images.
We incorporate a powerful anchor loss function for both 3D Gaussian attributes and human face landmarks.
arXiv Detail & Related papers (2024-10-02T11:23:08Z)
- Mipmap-GS: Let Gaussians Deform with Scale-specific Mipmap for Anti-aliasing Rendering [81.88246351984908]
We propose a unified optimization method to make Gaussians adaptive for arbitrary scales.
Inspired by the mipmap technique, we design pseudo ground-truth for the target scale and propose a scale-consistency guidance loss to inject scale information into 3D Gaussians.
Our method outperforms 3DGS in PSNR by an average of 9.25 dB for zoom-in and 10.40 dB for zoom-out.
arXiv Detail & Related papers (2024-08-12T16:49:22Z)
- Effective Rank Analysis and Regularization for Enhanced 3D Gaussian Splatting [33.01987451251659]
3D Gaussian Splatting (3DGS) has emerged as a promising technique capable of real-time rendering with high-quality 3D reconstruction.
Despite its potential, 3DGS encounters challenges, including needle-like artifacts, suboptimal geometries, and inaccurate normals.
We introduce effective rank as a regularization, which constrains the structure of the Gaussians (a definitional sketch follows after this list).
arXiv Detail & Related papers (2024-06-17T15:51:59Z)
- Gaussian Opacity Fields: Efficient Adaptive Surface Reconstruction in Unbounded Scenes [50.92217884840301]
Gaussian Opacity Fields (GOF) is a novel approach for efficient, high-quality, and adaptive surface reconstruction in unbounded scenes.
GOF is derived from ray-tracing-based volume rendering of 3D Gaussians.
GOF surpasses existing 3DGS-based methods in surface reconstruction and novel view synthesis.
arXiv Detail & Related papers (2024-04-16T17:57:19Z)
- GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis [70.24111297192057]
We present a new approach, termed GPS-Gaussian, for synthesizing novel views of a character in real time.
The proposed method enables 2K-resolution rendering under a sparse-view camera setting.
arXiv Detail & Related papers (2023-12-04T18:59:55Z)
- Animatable 3D Gaussians for High-fidelity Synthesis of Human Motions [37.50707388577952]
We present a novel animatable 3D Gaussian model for rendering high-fidelity free-view human motions in real time.
Compared to existing NeRF-based methods, the model captures high-frequency details better and does not suffer from the jittering problem across video frames.
arXiv Detail & Related papers (2023-11-22T14:00:23Z)
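The effective-rank regularization mentioned in the Effective Rank Analysis entry above is not defined in its one-line summary. As a reference point, here is a minimal sketch of the standard effective-rank definition (the exponential of the entropy of the normalized singular values), applied to per-Gaussian scale values; the function name, the use of scales rather than covariance singular values, and the epsilon are assumptions rather than details from that paper.

    # Illustrative sketch (assumption, not that paper's implementation): the
    # standard effective rank, computed from each Gaussian's three scale values;
    # values near 1 indicate needle-like Gaussians, values near 3 isotropic ones.
    import numpy as np

    def effective_rank(scales, eps=1e-8):
        """scales: (N, 3) per-Gaussian scale values (assumed positive)."""
        s = np.abs(scales) + eps
        p = s / s.sum(axis=1, keepdims=True)        # normalize like singular values
        entropy = -(p * np.log(p)).sum(axis=1)      # Shannon entropy per Gaussian
        return np.exp(entropy)                      # effective rank in [1, 3]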