Animating NeRFs from Texture Space: A Framework for Pose-Dependent
Rendering of Human Performances
- URL: http://arxiv.org/abs/2311.03140v1
- Date: Mon, 6 Nov 2023 14:34:36 GMT
- Title: Animating NeRFs from Texture Space: A Framework for Pose-Dependent
Rendering of Human Performances
- Authors: Paul Knoll, Wieland Morgenstern, Anna Hilsmann and Peter Eisert
- Abstract summary: We introduce a novel NeRF-based framework for pose-dependent rendering of human performances.
Our approach results in high-quality renderings for novel-view and novel-pose synthesis.
- Score: 11.604386285817302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating high-quality controllable 3D human models from multi-view RGB videos
poses a significant challenge. Neural radiance fields (NeRFs) have demonstrated
remarkable quality in reconstructing and free-viewpoint rendering of static as
well as dynamic scenes. The extension to a controllable synthesis of dynamic
human performances poses an exciting research question. In this paper, we
introduce a novel NeRF-based framework for pose-dependent rendering of human
performances. In our approach, the radiance field is warped around an SMPL body
mesh, thereby creating a new surface-aligned representation. Our representation
can be animated through skeletal joint parameters that are provided to the NeRF
in addition to the viewpoint for pose-dependent appearances. To achieve this,
our representation includes the corresponding 2D UV coordinates on the mesh
texture map and the distance between the query point and the mesh. To enable
efficient learning despite mapping ambiguities and random visual variations, we
introduce a novel remapping process that refines the mapped coordinates.
Experiments demonstrate that our approach results in high-quality renderings
for novel-view and novel-pose synthesis.
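The mapping the abstract describes can be made concrete with a small sketch. The following is a minimal, hypothetical illustration (not the authors' code): a 3D query point is projected onto its nearest mesh triangle, the per-corner UVs are interpolated barycentrically, and the point-to-surface distance is kept as a third coordinate. The toy single-triangle mesh, the brute-force nearest-face search, and all names are assumptions; the paper's learned remapping step that refines the mapped coordinates is omitted.
```python
# Minimal sketch (not the authors' code) of the surface-aligned input
# described in the abstract: a 3D query point is mapped to the UV
# coordinates of its closest point on the posed body mesh, plus its
# distance to that surface. Mesh, UVs, and names are hypothetical.
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Project p onto triangle (a, b, c); return point and barycentrics."""
    ab, ac, ap = b - a, c - a, p - a
    d00, d01, d11 = ab @ ab, ab @ ac, ac @ ac
    d20, d21 = ap @ ab, ap @ ac
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    # Coarse clamp into the triangle (exact edge-region handling omitted).
    v, w = np.clip(v, 0.0, 1.0), np.clip(w, 0.0, 1.0)
    if v + w > 1.0:
        v, w = v / (v + w), w / (v + w)
    u = 1.0 - v - w
    return u * a + v * b + w * c, np.array([u, v, w])

def surface_aligned_features(query, verts, faces, face_uvs):
    """Map a 3D query point to (u, v, distance) w.r.t. the mesh surface."""
    best = (np.inf, None, None)
    for f, uv in zip(faces, face_uvs):  # brute force; a spatial index would be used in practice
        point, bary = closest_point_on_triangle(query, *verts[f])
        dist = np.linalg.norm(query - point)
        if dist < best[0]:
            best = (dist, bary, uv)
    dist, bary, uv = best
    return np.concatenate([bary @ uv, [dist]])  # (u, v, d)

# Toy single-triangle "mesh" to exercise the mapping.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
face_uvs = np.array([[[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]])  # per-corner UVs
print(surface_aligned_features(np.array([0.2, 0.2, 0.3]), verts, faces, face_uvs))
```
The resulting (u, v, d) vector, together with the skeletal joint parameters and the viewing direction, would then condition the NeRF MLP to produce pose-dependent appearance.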
Related papers
- UV Gaussians: Joint Learning of Mesh Deformation and Gaussian Textures for Human Avatar Modeling [71.87807614875497]
We propose UV Gaussians, which models the 3D human body by jointly learning mesh deformations and 2D UV-space Gaussian textures.
We collect and process a new dataset of human motion, which includes multi-view images, scanned models, parametric model registration, and corresponding texture maps.
Experimental results demonstrate that our method achieves state-of-the-art novel-view and novel-pose synthesis.
arXiv Detail & Related papers (2024-03-18T09:03:56Z)
- PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]
We propose a novel visual localization framework, PNeRFLoc, based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
arXiv Detail & Related papers (2023-12-17T08:30:00Z)
- TriHuman: A Real-time and Controllable Tri-plane Representation for Detailed Human Geometry and Appearance Synthesis [76.73338151115253]
TriHuman is a novel human-tailored, deformable, and efficient tri-plane representation.
We non-rigidly warp global ray samples into our undeformed tri-plane texture space.
We show how such a tri-plane feature representation can be conditioned on the skeletal motion to account for dynamic appearance and geometry changes (a toy tri-plane feature lookup is sketched after this list).
arXiv Detail & Related papers (2023-12-08T16:40:38Z)
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose HybridNeRF, a method that leverages the strengths of both surface and volumetric representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) at virtual-reality resolutions (2K×2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
- Template-free Articulated Neural Point Clouds for Reposable View Synthesis [11.535440791891217]
We present a novel method to jointly learn a Dynamic NeRF and an associated skeletal model, even from sparse multi-view video.
Our forward-warping approach achieves state-of-the-art visual fidelity when synthesizing novel views and poses.
arXiv Detail & Related papers (2023-05-30T14:28:08Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- Human Performance Modeling and Rendering via Neural Animated Mesh [40.25449482006199]
We bridge traditional mesh-based workflows with a new class of neural rendering techniques.
In this paper, we present a novel approach for rendering views of human performances from video.
We demonstrate our approach on various platforms, inserting virtual human performances into AR headsets.
arXiv Detail & Related papers (2022-09-18T03:58:00Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, leveraging this occupancy grid for improved sampling of points along a ray when rendering in metric space (a toy version of the decoupled two-MLP design is sketched after this list).
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- 3D-aware Image Synthesis via Learning Structural and Textural Representations [39.681030539374994]
We propose VolumeGAN for high-fidelity 3D-aware image synthesis, which explicitly learns a structural representation and a textural representation.
Our approach achieves significantly higher image quality and better 3D control than previous methods.
arXiv Detail & Related papers (2021-12-20T18:59:40Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that starkly differ from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- STaR: Self-supervised Tracking and Reconstruction of Rigid Objects in Motion with Neural Rendering [9.600908665766465]
We present STaR, a novel method that performs Self-supervised Tracking and Reconstruction of dynamic scenes with rigid motion from multi-view RGB videos without any manual annotation.
We show that our method can render photorealistic novel views, where novelty is measured on both spatial and temporal axes.
arXiv Detail & Related papers (2020-12-22T23:45:28Z)
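As referenced in the TriHuman entry above, here is a minimal sketch of a tri-plane feature lookup under stated assumptions: a point in the undeformed texture space bilinearly samples three axis-aligned 2D feature grids, and the summed features would be decoded by an MLP conditioned on skeletal motion. Plane resolution, channel count, and the random initialization are placeholders, not TriHuman's actual design.
```python
# Toy tri-plane lookup (assumptions, not TriHuman's code): three 2D
# feature grids (XY, XZ, YZ) are bilinearly sampled at a 3D point and
# summed; a downstream MLP decoder is omitted here.
import numpy as np

rng = np.random.default_rng(1)
R, C = 64, 16                           # plane resolution, feature channels
planes = rng.normal(size=(3, R, R, C))  # XY, XZ, YZ feature grids

def bilinear(plane, x, y):
    """Bilinearly sample an (R, R, C) grid at continuous coords in [0, 1]."""
    gx, gy = x * (R - 1), y * (R - 1)
    x0, y0 = int(gx), int(gy)
    x1, y1 = min(x0 + 1, R - 1), min(y0 + 1, R - 1)
    fx, fy = gx - x0, gy - y0
    return ((1 - fx) * (1 - fy) * plane[y0, x0] + fx * (1 - fy) * plane[y0, x1]
            + (1 - fx) * fy * plane[y1, x0] + fx * fy * plane[y1, x1])

def triplane_features(p):
    """Sum features from the XY, XZ, and YZ planes at point p in [0, 1]^3."""
    x, y, z = p
    return (bilinear(planes[0], x, y) + bilinear(planes[1], x, z)
            + bilinear(planes[2], y, z))

print(triplane_features(np.array([0.3, 0.7, 0.5])).shape)  # (16,)
```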
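And here is a toy version of the decoupled design described in the CLONeR entry: one small MLP predicts occupancy (which the paper supervises with LiDAR), a second predicts color (supervised with camera pixels), and ray rendering composites the color network's outputs with weights derived from occupancy. Network sizes and the compositing details are simplifications, and the differentiable occupancy-grid-guided sampling is omitted.
```python
# Toy two-MLP split (assumptions, not CLONeR's code): occupancy and color
# are predicted by separate networks; a ray is rendered by alpha
# compositing with occupancy-derived weights. Training loops are omitted.
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-init weights for a toy fully connected network."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers
    return x

occupancy_net = mlp([3, 64, 1])  # xyz -> occupancy logit (LiDAR-supervised)
color_net = mlp([6, 64, 3])      # xyz + view dir -> RGB (camera-supervised)

def render_ray(origin, direction, n_samples=32, t_far=10.0):
    """Composite colors along a ray, weighted by the occupancy MLP."""
    ts = np.linspace(0.1, t_far, n_samples)
    pts = origin + ts[:, None] * direction
    occ = 1.0 / (1.0 + np.exp(-forward(occupancy_net, pts)))  # sigmoid
    rgb = forward(color_net, np.concatenate(
        [pts, np.tile(direction, (n_samples, 1))], axis=1))
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - occ[:-1, 0]]))  # transmittance
    weights = trans * occ[:, 0]
    return weights @ rgb  # (3,) pixel color

print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0])))
```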
This list is automatically generated from the titles and abstracts of the papers on this site.