4D-DRESS: A 4D Dataset of Real-world Human Clothing with Semantic Annotations
- URL: http://arxiv.org/abs/2404.18630v1
- Date: Mon, 29 Apr 2024 12:06:06 GMT
- Title: 4D-DRESS: A 4D Dataset of Real-world Human Clothing with Semantic Annotations
- Authors: Wenbo Wang, Hsuan-I Ho, Chen Guo, Boxiang Rong, Artur Grigorev, Jie Song, Juan Jose Zarate, Otmar Hilliges
- Abstract summary: We introduce 4D-DRESS, the first real-world 4D dataset advancing human clothing research.
4D-DRESS captures 64 outfits in 520 human motion sequences, amounting to 78k textured scans.
- Score: 31.255745398128436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The studies of human clothing for digital avatars have predominantly relied on synthetic datasets. While easy to collect, synthetic data often fall short in realism and fail to capture authentic clothing dynamics. Addressing this gap, we introduce 4D-DRESS, the first real-world 4D dataset advancing human clothing research with its high-quality 4D textured scans and garment meshes. 4D-DRESS captures 64 outfits in 520 human motion sequences, amounting to 78k textured scans. Creating a real-world clothing dataset is challenging, particularly in annotating and segmenting the extensive and complex 4D human scans. To address this, we develop a semi-automatic 4D human parsing pipeline. We efficiently combine a human-in-the-loop process with automation to accurately label 4D scans in diverse garments and body movements. Leveraging precise annotations and high-quality garment meshes, we establish several benchmarks for clothing simulation and reconstruction. 4D-DRESS offers realistic and challenging data that complements synthetic sources, paving the way for advancements in research of lifelike human clothing. Website: https://ait.ethz.ch/4d-dress.
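The abstract names a semi-automatic, human-in-the-loop parsing pipeline but does not spell out its mechanics. As a rough illustration of that general pattern only, not the authors' actual method, the Python sketch below propagates per-vertex labels frame to frame and queues low-confidence frames for manual correction; every name in it (`Scan`, `propagate_labels`, `label_confidence`, `manual_correction`) is hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for a textured scan frame and its per-vertex labels;
# the real 4D-DRESS pipeline is not detailed in this abstract.
@dataclass
class Scan:
    frame_id: int
    vertices: list                                # placeholder mesh geometry
    labels: list = field(default_factory=list)    # per-vertex garment labels

def propagate_labels(prev: Scan, curr: Scan) -> list:
    """Automatic step (assumed): carry labels over from the previous frame,
    e.g. via nearest-neighbor or tracked correspondences."""
    return list(prev.labels)                      # trivial stand-in tracker

def label_confidence(scan: Scan) -> float:
    """Assumed scoring of how trustworthy the propagated labels are."""
    return 1.0 if scan.labels else 0.0

def manual_correction(scan: Scan) -> list:
    """Human-in-the-loop step: an annotator fixes the flagged frame."""
    print(f"frame {scan.frame_id}: queued for manual annotation")
    return scan.labels

def parse_sequence(frames: list[Scan], threshold: float = 0.9) -> None:
    """Label a 4D sequence: frames[0] is assumed manually annotated; later
    frames are labeled automatically and flagged for a human pass when
    confidence drops -- the generic pattern the abstract describes."""
    for prev, curr in zip(frames, frames[1:]):
        curr.labels = propagate_labels(prev, curr)
        if label_confidence(curr) < threshold:
            curr.labels = manual_correction(curr)
```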
Related papers
- GenXD: Generating Any 3D and 4D Scenes [137.5455092319533]
We propose to jointly investigate general 3D and 4D generation by leveraging camera and object movements commonly observed in daily life.
Building on all the 3D and 4D data, we develop our framework, GenXD, which can produce any 3D or 4D scene.
arXiv Detail & Related papers (2024-11-04T17:45:44Z)
- CORE4D: A 4D Human-Object-Human Interaction Dataset for Collaborative Object REarrangement [20.520938266527438]
We present CORE4D, a novel large-scale 4D dataset of human-object-human collaborative object rearrangement.
With 1K human-object-human motion sequences captured in the real world, we enrich CORE4D by contributing an iterative collaboration strategy that augments the captured motions to a variety of novel objects.
Benefiting from extensive motion patterns provided by CORE4D, we benchmark two tasks aiming at generating human-object interaction: human-object motion forecasting and interaction synthesis.
arXiv Detail & Related papers (2024-06-27T17:32:18Z)
- 4D Panoptic Scene Graph Generation [102.22082008976228]
We introduce 4D Panoptic Scene Graph (PSG-4D), a new representation that bridges the raw visual data perceived in a dynamic 4D world and high-level visual understanding.
Specifically, PSG-4D abstracts rich 4D sensory data into nodes, which represent entities with precise location and status information, and edges, which capture temporal relations (a rough data-structure sketch follows this entry).
We propose PSG4DFormer, a Transformer-based model that can predict panoptic segmentation masks, track masks along the time axis, and generate the corresponding scene graphs.
arXiv Detail & Related papers (2024-05-16T17:56:55Z)
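The PSG-4D entry above describes a graph of nodes (entities with location and status) and edges (temporal relations). Below is a minimal, hypothetical sketch of such a structure in Python; the field names are illustrative, not the paper's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """Node: an entity with location and status information (illustrative)."""
    entity_id: int
    category: str                          # e.g. "person", "door"
    location: tuple[float, float, float]   # position in the 4D scene
    status: str                            # e.g. "walking", "open"

@dataclass
class Relation:
    """Edge: a relation between two entities over a span of frames."""
    subject_id: int
    object_id: int
    predicate: str                         # e.g. "walking_towards"
    start_frame: int
    end_frame: int

@dataclass
class SceneGraph4D:
    nodes: list[Entity] = field(default_factory=list)
    edges: list[Relation] = field(default_factory=list)

graph = SceneGraph4D(
    nodes=[Entity(0, "person", (1.0, 0.0, 2.0), "walking"),
           Entity(1, "door", (1.5, 0.0, 2.5), "open")],
    edges=[Relation(0, 1, "walking_towards", start_frame=10, end_frame=40)],
)
```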
- BEDLAM: A Synthetic Dataset of Bodies Exhibiting Detailed Lifelike Animated Motion [52.11972919802401]
We show that neural networks trained only on synthetic data achieve state-of-the-art accuracy on the problem of 3D human pose and shape estimation from real images.
Previous synthetic datasets have been small, unrealistic, or lacked realistic clothing.
arXiv Detail & Related papers (2023-06-29T13:35:16Z)
- 4DHumanOutfit: a multi-subject 4D dataset of human motion sequences in varying outfits exhibiting large displacements [19.538122092286894]
4DHumanOutfit is a new dataset of densely sampled spatio-temporal 4D human data covering different actors, outfits, and motions.
The dataset can be seen as a cube of data, with 4D motion sequences arranged along three axes: identity, outfit, and motion (a minimal indexing sketch follows this entry).
This rich dataset has numerous potential applications for the processing and creation of digital humans.
arXiv Detail & Related papers (2023-06-12T19:59:27Z)
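The 4DHumanOutfit entry above describes the dataset as a cube indexed by identity, outfit, and motion. Below is a minimal sketch of that indexing idea in Python; the axis values and path layout are invented for illustration and are not the dataset's actual API.

```python
from itertools import product

# Invented axis values; the real dataset's actors, outfits, and motions differ.
identities = ["actor01", "actor02"]
outfits = ["casual", "sportswear"]
motions = ["walk", "dance"]

# Each cell of the cube holds one 4D sequence (here, a made-up path).
cube = {
    (ident, outfit, motion): f"{ident}/{outfit}/{motion}/sequence"
    for ident, outfit, motion in product(identities, outfits, motions)
}

# Slicing along the axes, e.g. every motion of actor01 in casual wear:
casual_actor01 = {key: path for key, path in cube.items()
                  if key[0] == "actor01" and key[1] == "casual"}
print(sorted(casual_actor01))   # [('actor01', 'casual', 'dance'), ...]
```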
- Hi4D: 4D Instance Segmentation of Close Human Interaction [32.51930800738743]
Hi4D is a dataset of 4D textured scans of 20 subject pairs, 100 sequences, and a total of more than 11K frames.
This dataset contains rich interaction-centric annotations in 2D and 3D alongside accurately registered parametric body models.
arXiv Detail & Related papers (2023-03-27T16:53:09Z)
- HSC4D: Human-centered 4D Scene Capture in Large-scale Indoor-outdoor Space Using Wearable IMUs and LiDAR [51.9200422793806]
Using only body-mounted IMUs and LiDAR, HSC4D is space-free, with no constraints from external devices, and map-free, requiring no pre-built maps.
Relationships between humans and environments are also explored to make their interaction more realistic.
arXiv Detail & Related papers (2022-03-17T10:05:55Z)
- HSPACE: Synthetic Parametric Humans Animated in Complex Environments [67.8628917474705]
We build a large-scale photo-realistic dataset, Human-SPACE, of animated humans placed in complex indoor and outdoor environments.
We combine a hundred diverse individuals of varying ages, genders, proportions, and ethnicities with hundreds of motions and scenes to generate an initial dataset of over 1 million frames.
Assets are generated automatically, at scale, and are compatible with existing real time rendering and game engines.
arXiv Detail & Related papers (2021-12-23T22:27:55Z)
- HUMAN4D: A Human-Centric Multimodal Dataset for Motions and Immersive Media [16.711606354731533]
We introduce HUMAN4D, a large and multimodal 4D dataset that contains a variety of human activities captured simultaneously across multiple modalities.
We provide benchmarks on HUMAN4D with state-of-the-art human pose estimation and 3D pose estimation methods.
arXiv Detail & Related papers (2021-10-14T09:03:35Z)
- The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Currently, creating 3D human avatars with realistic clothing that moves naturally requires an artist.
We show that a 3D representation can capture varied topology at high resolution and can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits (a toy sketch follows this entry).
arXiv Detail & Related papers (2021-09-02T17:58:45Z)
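The Power of Points entry above describes a point-based outfit representation with learned local geometric features. The toy sketch below stores per-point features and applies an untrained linear map to produce pose-dependent point displacements; the sizes and the single linear layer are assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: 2048 outfit points, 16-dim local features, 72-dim body pose.
n_points, feat_dim, pose_dim = 2048, 16, 72
points = rng.normal(size=(n_points, 3))            # canonical outfit points
features = rng.normal(size=(n_points, feat_dim))   # local clothing features

# Stand-in "network": one untrained linear map [feature, pose] -> displacement.
W = rng.normal(scale=0.01, size=(feat_dim + pose_dim, 3))

def pose_outfit(pose: np.ndarray) -> np.ndarray:
    """Displace each point as a function of its local feature and the pose."""
    inp = np.concatenate([features, np.tile(pose, (n_points, 1))], axis=1)
    return points + inp @ W

posed = pose_outfit(rng.normal(size=pose_dim))
print(posed.shape)   # (2048, 3)
```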
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.