MoVi: A Large Multipurpose Motion and Video Dataset
- URL: http://arxiv.org/abs/2003.01888v1
- Date: Wed, 4 Mar 2020 04:43:03 GMT
- Title: MoVi: A Large Multipurpose Motion and Video Dataset
- Authors: Saeed Ghorbani, Kimia Mahdaviani, Anne Thaler, Konrad Kording, Douglas
James Cook, Gunnar Blohm, Nikolaus F. Troje
- Abstract summary: We introduce a new human Motion and Video dataset MoVi, which we make available publicly.
It contains 60 female and 30 male actors performing a collection of 20 predefined everyday actions and sports movements, and one self-chosen movement.
In total, our dataset contains 9 hours of motion capture data, 17 hours of video data from 4 different points of view, and 6.6 hours of IMU data.
- Score: 2.1473872586625298
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human movements are both an area of intense study and the basis of many
applications such as character animation. For many applications, it is crucial
to identify movements from videos or analyze datasets of movements. Here we
introduce a new human Motion and Video dataset MoVi, which we make available
publicly. It contains 60 female and 30 male actors performing a collection of
20 predefined everyday actions and sports movements, and one self-chosen
movement. In five capture rounds, the same actors and movements were recorded
using different hardware systems, including an optical motion capture system,
video cameras, and inertial measurement units (IMU). For some of the capture
rounds, the actors were recorded when wearing natural clothing, for the other
rounds they wore minimal clothing. In total, our dataset contains 9 hours of
motion capture data, 17 hours of video data from 4 different points of view
(including one hand-held camera), and 6.6 hours of IMU data. In this paper, we
describe how the dataset was collected and post-processed. We present
state-of-the-art estimates of skeletal motions and full-body shape deformations
associated with skeletal motion, and we discuss examples of potential studies
that this dataset could enable.
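To illustrate the bookkeeping a multi-modal dataset like this requires, here is a minimal, hypothetical sketch that indexes per-actor recordings across modalities and totals their durations. The class names, actor-ID scheme, and field layout are assumptions for illustration only, not the published MoVi file format:

```python
from dataclasses import dataclass, field

@dataclass
class Recording:
    actor_id: str       # hypothetical actor ID, e.g. "F001"
    modality: str       # "mocap", "video", or "imu"
    duration_s: float   # recording length in seconds

@dataclass
class DatasetIndex:
    recordings: list = field(default_factory=list)

    def add(self, actor_id: str, modality: str, duration_s: float) -> None:
        self.recordings.append(Recording(actor_id, modality, duration_s))

    def total_hours(self, modality: str) -> float:
        # Sum durations for one modality and convert seconds to hours.
        return sum(r.duration_s for r in self.recordings
                   if r.modality == modality) / 3600.0

# Toy usage with made-up durations (not actual MoVi numbers):
index = DatasetIndex()
index.add("F001", "mocap", 360.0)
index.add("F001", "video", 720.0)
index.add("F001", "imu", 240.0)
print(index.total_hours("video"))  # 0.2
```

In the real dataset, an index like this would be populated from the capture-round metadata, letting the per-modality totals (9 h mocap, 17 h video, 6.6 h IMU) be recomputed and verified.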
Related papers
- Motion Prompting: Controlling Video Generation with Motion Trajectories [57.049252242807874]
We train a video generation model conditioned on sparse or dense video trajectories.
We translate high-level user requests into detailed, semi-dense motion prompts.
We demonstrate our approach through various applications, including camera and object motion control, "interacting" with an image, motion transfer, and image editing.
arXiv Detail & Related papers (2024-12-03T18:59:56Z)
- Motion Modes: What Could Happen Next? [45.24111039863531]
Current video generation models often entangle object movement with camera motion and other scene changes.
We introduce Motion Modes, a training-free approach that explores a pre-trained image-to-video generator's latent distribution.
We achieve this by employing a flow generator guided by energy functions designed to disentangle object and camera motion.
arXiv Detail & Related papers (2024-11-29T01:51:08Z)
- ViMo: Generating Motions from Casual Videos [34.19904765033005]
We propose a novel Video-to-Motion-Generation framework (ViMo).
ViMo could leverage the immense trove of untapped video content to produce abundant and diverse 3D human motions.
Experimental results demonstrate that the proposed model can generate natural motions even for videos with rapid movements, varying perspectives, or frequent occlusions.
arXiv Detail & Related papers (2024-08-13T03:57:35Z)
- HumanVid: Demystifying Training Data for Camera-controllable Human Image Animation [64.37874983401221]
We present HumanVid, the first large-scale high-quality dataset tailored for human image animation.
For the real-world data, we compile a vast collection of real-world videos from the internet.
For the synthetic data, we collected 10K 3D avatar assets and leveraged existing assets of body shapes, skin textures, and clothing.
arXiv Detail & Related papers (2024-07-24T17:15:58Z)
- Nymeria: A Massive Collection of Multimodal Egocentric Daily Motion in the Wild [66.34146236875822]
The Nymeria dataset is a large-scale, diverse, richly annotated human motion dataset collected in the wild with multiple multimodal egocentric devices.
It contains 1200 recordings of 300 hours of daily activities from 264 participants across 50 locations, travelling a total of 399 km.
The motion-language descriptions provide 310.5K sentences in 8.64M words from a vocabulary size of 6545.
arXiv Detail & Related papers (2024-06-14T10:23:53Z)
- Pose-to-Motion: Cross-Domain Motion Retargeting with Pose Prior [48.104051952928465]
Current learning-based motion synthesis methods depend on extensive motion datasets.
In contrast, pose data is more accessible, since posed characters are easier to create and can even be extracted from images.
Our method generates plausible motions for characters that have only pose data by transferring motion from an existing motion capture dataset of another character.
arXiv Detail & Related papers (2023-10-31T08:13:00Z)
- Physics-based Motion Retargeting from Sparse Inputs [73.94570049637717]
Commercial AR/VR products consist only of a headset and controllers, providing very limited sensor data of the user's pose.
We introduce a method to retarget motions in real-time from sparse human sensor data to characters of various morphologies.
We show that the avatar poses often match the user surprisingly well, despite having no sensor information of the lower body available.
arXiv Detail & Related papers (2023-07-04T21:57:05Z)
- 4DHumanOutfit: a multi-subject 4D dataset of human motion sequences in varying outfits exhibiting large displacements [19.538122092286894]
4DHumanOutfit presents a new dataset of densely sampled spatio-temporal 4D human data of different actors, outfits, and motions.
The dataset can be seen as a cube of data containing 4D motion sequences along three axes: identity, outfit, and motion.
This rich dataset has numerous potential applications for the processing and creation of digital humans.
arXiv Detail & Related papers (2023-06-12T19:59:27Z)
- CIRCLE: Capture In Rich Contextual Environments [69.97976304918149]
We propose a novel motion acquisition system in which the actor perceives and operates in a highly contextual virtual world.
We present CIRCLE, a dataset containing 10 hours of full-body reaching motion from 5 subjects across nine scenes.
We use this dataset to train a model that generates human motion conditioned on scene information.
arXiv Detail & Related papers (2023-03-31T09:18:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.