HSPACE: Synthetic Parametric Humans Animated in Complex Environments
- URL: http://arxiv.org/abs/2112.12867v1
- Date: Thu, 23 Dec 2021 22:27:55 GMT
- Title: HSPACE: Synthetic Parametric Humans Animated in Complex Environments
- Authors: Eduard Gabriel Bazavan, Andrei Zanfir, Mihai Zanfir, William T.
Freeman, Rahul Sukthankar, Cristian Sminchisescu
- Abstract summary: We build a large-scale photo-realistic dataset, Human-SPACE, of animated humans placed in complex indoor and outdoor environments.
We combine a hundred diverse individuals of varying age, gender, body proportions, and ethnicity with hundreds of motions and scenes to generate an initial dataset of over 1 million frames.
Assets are generated automatically, at scale, and are compatible with existing real-time rendering and game engines.
- Score: 67.8628917474705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in the state of the art for 3D human sensing are currently limited
by the lack of visual datasets with 3D ground truth, including multiple people,
in motion, operating in real-world environments, with complex illumination or
occlusion, and potentially observed by a moving camera. Sophisticated scene
understanding would require estimating human pose and shape as well as
gestures, towards representations that ultimately combine useful metric and
behavioral signals with free-viewpoint photo-realistic visualisation
capabilities. To sustain progress, we build a large-scale photo-realistic
dataset, Human-SPACE (HSPACE), of animated humans placed in complex synthetic
indoor and outdoor environments. We combine a hundred diverse individuals of
varying age, gender, body proportions, and ethnicity with hundreds of motions and
scenes, as well as parametric variations in body shape (for a total of 1,600
different humans), in order to generate an initial dataset of over 1 million
frames. Human animations are obtained by fitting an expressive human body
model, GHUM, to single scans of people, followed by novel re-targeting and
positioning procedures that support the realistic animation of dressed humans,
statistical variation of body proportions, and jointly consistent scene
placement of multiple moving people. Assets are generated automatically, at
scale, and are compatible with existing real-time rendering and game engines.
The dataset, together with an evaluation server, will be made available for research. Our
large-scale analysis of the impact of synthetic data, in combination with real
data and weak supervision, underlines the considerable potential for continued
quality improvements and for narrowing the sim-to-real gap in this practical
setting as model capacity increases.
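The dataset's scale comes from combinatorial composition: roughly a hundred scanned identities, each with parametric body-shape variations (1,600 humans in total), crossed with hundreds of motions and scenes. The sketch below illustrates this assembly arithmetic in Python; aside from the 100 scans and 1,600 humans reported in the abstract, the inventory sizes, per-clip frame count, and sampling loop are illustrative assumptions, not the paper's actual pipeline.

```python
import random

# Hypothetical inventory sizes, chosen to mirror the scale reported in the
# abstract (100 scans, 1,600 parametric humans, hundreds of motions/scenes).
NUM_SCANS = 100          # scanned individuals fitted with GHUM
SHAPES_PER_SCAN = 16     # body-shape variations per scan -> 1,600 humans
NUM_MOTIONS = 300        # assumed size of the motion library
NUM_SCENES = 100         # assumed number of indoor/outdoor environments
FRAMES_PER_CLIP = 120    # assumed ~4 s clips at 30 fps

humans = [(scan, shape) for scan in range(NUM_SCANS)
          for shape in range(SHAPES_PER_SCAN)]
print(len(humans))  # 1600 distinct parametric humans

# Sampling 9,000 (human, motion, scene) combinations at 120 frames each
# already exceeds the one million frames of the initial dataset.
rng = random.Random(0)
clips = [(rng.choice(humans), rng.randrange(NUM_MOTIONS), rng.randrange(NUM_SCENES))
         for _ in range(9_000)]
print(len(clips) * FRAMES_PER_CLIP)  # 1,080,000 frames
```

The full cross product is orders of magnitude larger than what is sampled here, which is why fully automatic, engine-compatible asset generation matters for scaling the dataset further.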
Related papers
- Scaling Up Dynamic Human-Scene Interaction Modeling [58.032368564071895]
TRUMANS is the most comprehensive motion-captured human-scene interaction (HSI) dataset currently available.
It intricately captures whole-body human motions and part-level object dynamics.
We devise a diffusion-based autoregressive model that efficiently generates HSI sequences of any length.
arXiv Detail & Related papers (2024-03-13T15:45:04Z)
- TriHuman: A Real-time and Controllable Tri-plane Representation for Detailed Human Geometry and Appearance Synthesis [76.73338151115253]
TriHuman is a novel human-tailored, deformable, and efficient tri-plane representation.
We non-rigidly warp global ray samples into our undeformed tri-plane texture space.
We show how such a tri-plane feature representation can be conditioned on the skeletal motion to account for dynamic appearance and geometry changes.
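A tri-plane stores learned features on three axis-aligned 2D grids; a 3D point is projected onto each plane, features are bilinearly interpolated, and the three results are fused (summed here). Below is a minimal NumPy sketch of that lookup; the resolution, channel count, summation as the fusion rule, and the `sample_triplane` helper are assumptions for illustration, and TriHuman additionally warps samples non-rigidly and conditions the features on skeletal motion.

```python
import numpy as np

def bilinear(plane, u, v):
    """Bilinearly interpolate an (R, R, C) feature grid at (u, v) in [0, 1]."""
    r = plane.shape[0] - 1
    x, y = u * r, v * r
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, r), min(y0 + 1, r)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[x0, y0] + wx * (1 - wy) * plane[x1, y0]
            + (1 - wx) * wy * plane[x0, y1] + wx * wy * plane[x1, y1])

def sample_triplane(planes, p):
    """Fuse features from the XY, XZ, and YZ planes at point p in [0, 1]^3."""
    xy, xz, yz = planes
    return (bilinear(xy, p[0], p[1]) + bilinear(xz, p[0], p[2])
            + bilinear(yz, p[1], p[2]))

planes = [np.random.randn(64, 64, 32) for _ in range(3)]  # three 64x64, 32-channel grids
feature = sample_triplane(planes, np.array([0.3, 0.7, 0.5]))
print(feature.shape)  # (32,), typically decoded to color/density by a small MLP
```

The appeal of this layout is that three 2D grids grow quadratically with resolution rather than cubically, which is what makes real-time querying feasible.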
arXiv Detail & Related papers (2023-12-08T16:40:38Z)
- Object Motion Guided Human Motion Synthesis [22.08240141115053]
We study the problem of full-body human motion synthesis for the manipulation of large-sized objects.
We propose Object MOtion guided human MOtion synthesis (OMOMO), a conditional diffusion framework.
We develop a novel system that captures full-body human manipulation motions by simply attaching a smartphone to the object being manipulated.
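OMOMO is a conditional diffusion framework, so generation amounts to iteratively denoising a noise sequence while feeding the object motion to the denoiser at every step. The toy DDPM-style sampler below shows that control flow; the schedule, the placeholder `denoiser`, and the motion shapes are illustrative stand-ins, not the paper's trained model.

```python
import numpy as np

T = 50                                   # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)       # standard linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t, object_motion):
    """Placeholder epsilon-predictor; OMOMO's actual denoiser is a trained
    network that sees the noisy human motion, the timestep, and the object."""
    return 0.1 * x_t + 0.05 * object_motion

def sample_human_motion(object_motion, rng):
    x = rng.standard_normal(object_motion.shape)        # start from pure noise
    for t in reversed(range(T)):                        # ancestral DDPM sampling
        eps = denoiser(x, t, object_motion)
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

rng = np.random.default_rng(0)
object_traj = rng.standard_normal((120, 3))  # 120 frames of object translation
motion = sample_human_motion(object_traj, rng)
print(motion.shape)  # (120, 3): a (toy) human motion channel per frame
```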
arXiv Detail & Related papers (2023-09-28T08:22:00Z)
- CIRCLE: Capture In Rich Contextual Environments [69.97976304918149]
We propose a novel motion acquisition system in which the actor perceives and operates in a highly contextual virtual world.
We present CIRCLE, a dataset containing 10 hours of full-body reaching motion from five subjects across nine scenes.
We use this dataset to train a model that generates human motion conditioned on scene information.
arXiv Detail & Related papers (2023-03-31T09:18:12Z)
- Embodied Scene-aware Human Pose Estimation [25.094152307452]
We propose embodied scene-aware human pose estimation.
Our method is one-stage, causal, and recovers global 3D human poses in a simulated environment.
arXiv Detail & Related papers (2022-06-18T03:50:19Z)
- S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling [103.65625425020129]
We represent the pedestrian's shape, pose and skinning weights as neural implicit functions that are directly learned from data.
We demonstrate the effectiveness of our approach on various datasets and show that our reconstructions outperform existing state-of-the-art methods.
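A neural implicit function of this kind is a coordinate network: a 3D query point goes in, and a field value (shape occupancy, pose, or skinning weights) comes out. The sketch below shows an untrained toy version producing per-point skinning weights; the two-layer MLP, the 24-joint count, and the `skinning_field` name are illustrative assumptions, not S3's architecture.

```python
import numpy as np

# Untrained two-layer MLP as a toy implicit field: 3D point -> skinning
# weights over J joints (weights sum to 1 per point).
rng = np.random.default_rng(0)
J = 24                                            # SMPL-style joint count (assumed)
W1, b1 = 0.5 * rng.standard_normal((3, 128)), np.zeros(128)
W2, b2 = 0.5 * rng.standard_normal((128, J)), np.zeros(J)

def skinning_field(points):
    """points: (N, 3) -> (N, J) per-point skinning weights."""
    h = np.maximum(points @ W1 + b1, 0.0)         # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)       # softmax over joints

weights = skinning_field(rng.standard_normal((5, 3)))
print(weights.shape, weights.sum(axis=1))  # (5, 24), each row sums to 1
```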
arXiv Detail & Related papers (2021-01-17T02:16:56Z)
- Learning Compositional Radiance Fields of Dynamic Human Heads [13.272666180264485]
We propose a novel compositional 3D representation that combines the best of previous methods to produce both higher-resolution and faster results.
Differentiable volume rendering is employed to compute photo-realistic novel views of the human head and upper body.
Our approach achieves state-of-the-art results for synthesizing novel views of dynamic human heads and the upper body.
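Differentiable volume rendering composites color along each camera ray from predicted densities and radiances, C = Σ_i T_i (1 − exp(−σ_i δ_i)) c_i with transmittance T_i = exp(−Σ_{j<i} σ_j δ_j). The sketch below evaluates that quadrature for one ray with random stand-in predictions; the sample count and spacing are illustrative.

```python
import numpy as np

# One ray: composite color from per-sample density sigma_i and color c_i using
# alpha_i = 1 - exp(-sigma_i * delta_i) and T_i = prod_{j<i}(1 - alpha_j).
rng = np.random.default_rng(0)
n = 64                                    # samples along the ray (assumed)
deltas = np.full(n, 0.02)                 # spacing between consecutive samples
sigmas = rng.uniform(0.0, 5.0, n)         # stand-in predicted densities
colors = rng.uniform(0.0, 1.0, (n, 3))    # stand-in predicted RGB

alphas = 1.0 - np.exp(-sigmas * deltas)
trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]
weights = trans * alphas                  # contribution of each sample
pixel = (weights[:, None] * colors).sum(axis=0)
print(pixel)  # composited RGB for this ray
```

Because every operation in this quadrature is smooth, gradients of a photometric loss flow back to whatever network predicts the densities and colors.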
arXiv Detail & Related papers (2020-12-17T22:19:27Z)
- PLACE: Proximity Learning of Articulation and Contact in 3D Environments [70.50782687884839]
We propose a novel interaction generation method, named PLACE, which explicitly models the proximity between the human body and the 3D scene around it.
Our perceptual study shows that PLACE significantly improves the state-of-the-art method, approaching the realism of real human-scene interaction.
arXiv Detail & Related papers (2020-08-12T21:00:10Z)
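The proximity that PLACE models can be made concrete as a per-vertex distance from the body to the nearest scene geometry. The sketch below computes such a feature with a KD-tree; the point counts, random geometry, and the `proximity_features` helper are illustrative, not the paper's exact encoding.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
scene_points = rng.uniform(0.0, 5.0, (50_000, 3))   # stand-in scene geometry
body_vertices = rng.uniform(2.0, 3.0, (6_890, 3))   # SMPL-style vertex count (assumed)

def proximity_features(body, scene):
    """Distance from each body vertex to its nearest scene point."""
    tree = cKDTree(scene)
    dists, _ = tree.query(body)
    return dists

prox = proximity_features(body_vertices, scene_points)
print(prox.shape, prox.min())  # (6890,); near-zero minima flag likely contacts
```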
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.