4DHumanOutfit: a multi-subject 4D dataset of human motion sequences in
varying outfits exhibiting large displacements
- URL: http://arxiv.org/abs/2306.07399v1
- Date: Mon, 12 Jun 2023 19:59:27 GMT
- Title: 4DHumanOutfit: a multi-subject 4D dataset of human motion sequences in
varying outfits exhibiting large displacements
- Authors: Matthieu Armando, Laurence Boissieux, Edmond Boyer, Jean-Sebastien
Franco, Martin Humenberger, Christophe Legras, Vincent Leroy, Mathieu Marsot,
Julien Pansiot, Sergi Pujades, Rim Rekik, Gregory Rogez, Anilkumar Swamy,
Stefanie Wuhrer
- Abstract summary: 4DHumanOutfit presents a new dataset of densely sampled spatio-temporal 4D human motion data of different actors, outfits and motions.
The dataset can be seen as a cube of data containing 4D motion sequences along three axes: identity, outfit and motion.
This rich dataset has numerous potential applications for the processing and creation of digital humans.
- Score: 19.538122092286894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work presents 4DHumanOutfit, a new dataset of densely sampled
spatio-temporal 4D human motion data of different actors, outfits and motions.
The dataset is designed to contain different actors wearing different outfits
while performing different motions in each outfit. In this way, the dataset can
be seen as a cube of data containing 4D motion sequences along three axes:
identity, outfit and motion. This rich dataset has numerous potential
applications for the processing and creation of digital humans, e.g. augmented
reality, avatar creation and virtual try-on. 4DHumanOutfit is released for
research purposes at https://kinovis.inria.fr/4dhumanoutfit/. In addition to
image data and 4D reconstructions, the dataset includes reference solutions for
each axis. We present independent baselines along each axis that demonstrate
the value of these reference solutions for evaluation tasks.
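To make the cube structure concrete, below is a minimal Python sketch of indexing sequences along the identity, outfit and motion axes. The SequenceCube class, directory layout, actor names and .seq extension are hypothetical illustrations, not the dataset's actual API or file format.

```python
# Minimal sketch of a dataset organized as an identity x outfit x motion
# cube. All names (SequenceCube, the directory layout, ".seq") are
# hypothetical placeholders, not 4DHumanOutfit's official interface.
from pathlib import Path
from itertools import product

class SequenceCube:
    def __init__(self, root, actors, outfits, motions):
        self.root = Path(root)
        self.actors, self.outfits, self.motions = actors, outfits, motions

    def sequence_path(self, actor, outfit, motion):
        # One 4D sequence per (identity, outfit, motion) cell of the cube.
        return self.root / actor / outfit / f"{motion}.seq"

    def slice(self, actor=None, outfit=None, motion=None):
        # Fix any subset of axes and iterate over the remaining ones,
        # e.g. all motions of one actor in one outfit.
        for a, o, m in product(self.actors, self.outfits, self.motions):
            if actor in (None, a) and outfit in (None, o) and motion in (None, m):
                yield self.sequence_path(a, o, m)

cube = SequenceCube("4dhumanoutfit/",
                    actors=["actor1", "actor2"],
                    outfits=["tight", "jacket"],
                    motions=["walk", "dance"])
for path in cube.slice(actor="actor1", outfit="jacket"):
    print(path)  # every motion of actor1 in the jacket outfit
```

Fixing one or two axes in this way yields exactly the kind of per-axis slices the reference solutions and baselines above evaluate against.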
Related papers
- AvatarGO: Zero-shot 4D Human-Object Interaction Generation and Animation [60.5897687447003]
AvatarGO is a novel framework designed to generate realistic 4D HOI scenes from textual inputs.
Our framework not only generates coherent compositional motions, but also exhibits greater robustness in handling issues.
As the first attempt to synthesize 4D avatars with object interactions, we hope AvatarGO could open new doors for human-centric 4D content creation.
arXiv Detail & Related papers (2024-10-09T17:58:56Z)
- CORE4D: A 4D Human-Object-Human Interaction Dataset for Collaborative Object REarrangement [20.520938266527438]
We present CORE4D, a novel large-scale 4D human-object-human interaction dataset for collaborative object rearrangement.
With 1K human-object-human motion sequences captured in the real world, we enrich CORE4D by contributing an iterative collaboration strategy that augments these motions to a variety of novel objects.
Benefiting from the extensive motion patterns provided by CORE4D, we benchmark two tasks aimed at generating human-object interaction: human-object motion forecasting and interaction synthesis.
arXiv Detail & Related papers (2024-06-27T17:32:18Z)
- 4D-DRESS: A 4D Dataset of Real-world Human Clothing with Semantic Annotations [31.255745398128436]
We introduce 4D-DRESS, the first real-world 4D dataset advancing human clothing research.
4D-DRESS captures 64 outfits in 520 human motion sequences, amounting to 78k textured scans.
arXiv Detail & Related papers (2024-04-29T12:06:06Z)
- CIRCLE: Capture In Rich Contextual Environments [69.97976304918149]
We propose a novel motion acquisition system in which the actor perceives and operates in a highly contextual virtual world.
We present CIRCLE, a dataset containing 10 hours of full-body reaching motion from 5 subjects across nine scenes.
We use this dataset to train a model that generates human motion conditioned on scene information.
arXiv Detail & Related papers (2023-03-31T09:18:12Z)
- PERGAMO: Personalized 3D Garments from Monocular Video [6.8338761008826445]
PERGAMO is a data-driven approach to learn a deformable model for 3D garments from monocular images.
We first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos.
We show that our method is capable of producing garment animations that match real-world behaviour, and generalizes to unseen body motions extracted from motion capture datasets.
arXiv Detail & Related papers (2022-10-26T21:15:54Z)
- BEHAVE: Dataset and Method for Tracking Human Object Interactions [105.77368488612704]
We present the first full-body human-object interaction dataset with multi-view RGBD frames and corresponding 3D SMPL and object fits, along with the annotated contacts between them.
We use this data to learn a model that can jointly track humans and objects in natural environments with an easy-to-use portable multi-camera setup.
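Since this entry mentions per-frame SMPL fits, the sketch below shows how fitted SMPL parameters can generally be turned into a posed mesh with the open-source smplx library. The model path and the zero-initialized parameters are placeholders, and this is not BEHAVE's official loading code.

```python
# Generic sketch of posing an SMPL body from fitted parameters using the
# smplx library (https://github.com/vchoutas/smplx). Not BEHAVE's official
# loader; the model path and zeroed parameters are placeholders.
import torch
import smplx

# Expects SMPL model files under models/smpl/ (downloaded separately).
model = smplx.create("models/", model_type="smpl", gender="neutral")

betas = torch.zeros(1, 10)         # shape coefficients from a dataset fit
body_pose = torch.zeros(1, 69)     # axis-angle pose of the 23 body joints
global_orient = torch.zeros(1, 3)  # root orientation
transl = torch.zeros(1, 3)         # root translation

output = model(betas=betas, body_pose=body_pose,
               global_orient=global_orient, transl=transl)
vertices = output.vertices.detach()  # (1, 6890, 3) posed mesh vertices
print(vertices.shape)
```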
arXiv Detail & Related papers (2022-04-14T13:21:19Z)
- HSC4D: Human-centered 4D Scene Capture in Large-scale Indoor-outdoor Space Using Wearable IMUs and LiDAR [51.9200422793806]
Using only body-mounted IMUs and LiDAR, HSC4D is space-free, with no constraints from external devices, and map-free, with no need for pre-built maps.
Relationships between humans and environments are also explored to make their interaction more realistic.
arXiv Detail & Related papers (2022-03-17T10:05:55Z)
- gDNA: Towards Generative Detailed Neural Avatars [94.9804106939663]
We show that our model is able to generate natural human avatars wearing diverse and detailed clothing.
Our method can be used for the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
arXiv Detail & Related papers (2022-01-11T18:46:38Z)
- HUMAN4D: A Human-Centric Multimodal Dataset for Motions and Immersive Media [16.711606354731533]
We introduce HUMAN4D, a large and multimodal 4D dataset that contains a variety of human activities captured simultaneously.
We provide benchmarks on HUMAN4D with state-of-the-art human pose estimation and 3D pose estimation methods.
arXiv Detail & Related papers (2021-10-14T09:03:35Z)
- S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling [103.65625425020129]
We represent the pedestrian's shape, pose and skinning weights as neural implicit functions that are directly learned from data.
We demonstrate the effectiveness of our approach on various datasets and show that our reconstructions outperform existing state-of-the-art methods.
arXiv Detail & Related papers (2021-01-17T02:16:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.