Recognition and Synthesis of Object Transport Motion
- URL: http://arxiv.org/abs/2009.12967v1
- Date: Sun, 27 Sep 2020 22:13:26 GMT
- Title: Recognition and Synthesis of Object Transport Motion
- Authors: Connor Daly
- Abstract summary: This project illustrates how deep convolutional networks can be used, alongside specialized data augmentation techniques, on a small motion capture dataset to recognize object transport motions.
The project shows how these same augmentation techniques can be scaled up for use in the more complex task of motion synthesis.
By exploring recent developments in Generative Adversarial Networks (GANs), specifically the Wasserstein GAN, this project outlines a model that successfully generates lifelike object transport motions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning typically requires vast numbers of training examples in order
to be used successfully. In contrast, motion capture data is often expensive to
generate, requiring specialist equipment along with actors to perform the
prescribed motions, meaning that motion capture datasets tend to be relatively
small. Motion capture data does, however, provide a rich source of information
that is becoming increasingly useful in a wide variety of applications, from
gesture recognition in human-robot interaction to data-driven animation.
This project illustrates how deep convolutional networks can be used,
alongside specialized data augmentation techniques, on a small motion capture
dataset to learn detailed information from sequences of a specific type of
motion (object transport). The project shows how these same augmentation
techniques can be scaled up for use in the more complex task of motion
synthesis.
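The abstract does not spell out the augmentation recipe, so the following is only a minimal sketch of augmentations commonly applied to small motion capture datasets (lateral mirroring, random time warping, positional jitter); the function name, data layout, and parameter ranges are illustrative assumptions, not the paper's method.

```python
import numpy as np

def augment_mocap(seq, rng):
    """Apply simple augmentations to one mocap clip.

    seq: array of shape (frames, joints, 3) -- an illustrative layout,
    not necessarily the paper's representation.
    """
    # Lateral mirror: flip the x axis (assumes x is the lateral axis; a
    # full pipeline would also swap left/right joint indices).
    if rng.random() < 0.5:
        seq = seq.copy()
        seq[..., 0] *= -1.0

    # Time warp: resample the clip to a random playback speed.
    scale = rng.uniform(0.8, 1.2)
    n = seq.shape[0]
    t_old = np.arange(n)
    t_new = np.linspace(0.0, n - 1.0, max(2, int(n * scale)))
    flat = seq.reshape(n, -1)
    warped = np.stack(
        [np.interp(t_new, t_old, flat[:, d]) for d in range(flat.shape[1])],
        axis=1,
    )
    seq = warped.reshape(len(t_new), *seq.shape[1:])

    # Small positional jitter so near-duplicate clips still differ.
    return seq + rng.normal(0.0, 0.005, seq.shape)

rng = np.random.default_rng(0)
clip = rng.normal(size=(120, 21, 3))   # 120 frames, 21 joints
augmented = augment_mocap(clip, rng)
```

Because each transform preserves the underlying motion semantics, a small dataset can be expanded many-fold before training a convolutional recognizer.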
By exploring recent developments in Generative Adversarial Networks (GANs),
specifically the Wasserstein GAN, this project outlines a model that is able to
successfully generate lifelike object transport motions, with the generated
samples displaying varying styles and transport strategies.
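The abstract names the Wasserstein GAN as the generative backbone but gives no architectural detail. Below is a minimal PyTorch sketch of the original WGAN training step (critic loss plus weight clipping, per Arjovsky et al.); the network architectures and dimensions are placeholder assumptions, not the paper's.

```python
import torch
import torch.nn as nn

LATENT, MOTION_DIM = 64, 128   # placeholder sizes, not from the paper

# Placeholder networks; the paper's architectures are not specified here.
G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, MOTION_DIM))
D = nn.Sequential(nn.Linear(MOTION_DIM, 256), nn.ReLU(), nn.Linear(256, 1))

opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)

def critic_step(real):
    # The critic maximizes D(real) - D(fake), so we minimize the negation.
    opt_d.zero_grad()
    fake = G(torch.randn(real.size(0), LATENT)).detach()
    loss = D(fake).mean() - D(real).mean()
    loss.backward()
    opt_d.step()
    # Weight clipping crudely enforces the Lipschitz constraint.
    with torch.no_grad():
        for p in D.parameters():
            p.clamp_(-0.01, 0.01)

def generator_step(batch_size):
    opt_g.zero_grad()
    fake = G(torch.randn(batch_size, LATENT))
    loss = -D(fake).mean()
    loss.backward()
    opt_g.step()

real_batch = torch.randn(16, MOTION_DIM)  # stand-in for flattened motion windows
critic_step(real_batch)
generator_step(16)
```

In practice the critic is updated several times per generator step; the Wasserstein objective gives smoother gradients than the standard GAN loss, which is useful on small motion datasets.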
Related papers
- IMUDiffusion: A Diffusion Model for Multivariate Time Series Synthetisation for Inertial Motion Capturing Systems [0.0]
We propose IMUDiffusion, a probabilistic diffusion model specifically designed for time series generation.
Our approach enables the generation of high-quality time series sequences which accurately capture the dynamics of human activities.
In some cases, we are able to improve the macro F1-score by almost 30%.
arXiv Detail & Related papers (2024-11-05T09:53:52Z)
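IMUDiffusion is summarized above only as a probabilistic diffusion model for time series generation. As a hedged illustration of the general mechanism such models train against, here is the standard DDPM forward-noising step applied to multivariate IMU-like windows; the schedule, shapes, and channel count are assumptions, not details from the paper.

```python
import torch

T = 1000                                   # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # standard linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, 0)  # cumulative signal retention

def q_sample(x0, t):
    """Noise clean windows x0 of shape (batch, time, channels) to step t."""
    noise = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, 1, 1)
    xt = a.sqrt() * x0 + (1.0 - a).sqrt() * noise
    return xt, noise  # a denoiser is trained to predict `noise` from (xt, t)

x0 = torch.randn(8, 200, 6)  # e.g. 6-channel IMU windows (accel + gyro), assumed
t = torch.randint(0, T, (8,))
xt, eps = q_sample(x0, t)
```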
- Quo Vadis, Motion Generation? From Large Language Models to Large Motion Models [70.78051873517285]
We present MotionBase, the first million-level motion generation benchmark.
By leveraging this vast dataset, our large motion model demonstrates strong performance across a broad range of motions.
We introduce a novel 2D lookup-free approach for motion tokenization, which preserves motion information and expands codebook capacity.
arXiv Detail & Related papers (2024-10-04T10:48:54Z)
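The MotionBase summary above names a "2D lookup-free" tokenizer without further detail. For context only, this sketch shows the conventional lookup-based vector quantization that such an approach departs from: each frame feature is replaced by the index of its nearest codebook entry. Codebook size and feature dimension are illustrative.

```python
import torch

codebook = torch.randn(512, 64)  # 512 codes, 64-dim features (illustrative)

def tokenize(feats):
    """Map frame features (batch, frames, 64) to nearest-code indices."""
    dists = torch.cdist(feats, codebook.expand(feats.size(0), -1, -1))
    return dists.argmin(dim=-1)  # (batch, frames) integer motion tokens

feats = torch.randn(2, 100, 64)
tokens = tokenize(feats)      # discrete tokens an autoregressive model consumes
recon = codebook[tokens]      # lookup back to the quantized features
```

The codebook lookup caps capacity at the number of entries, which is the bottleneck a lookup-free tokenizer aims to remove.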
- Large Motion Model for Unified Multi-Modal Motion Generation [50.56268006354396]
Large Motion Model (LMM) is a motion-centric, multi-modal framework that unifies mainstream motion generation tasks into a generalist model.
LMM tackles these challenges from three principled aspects.
arXiv Detail & Related papers (2024-04-01T17:55:11Z)
- TrackDiffusion: Tracklet-Conditioned Video Generation via Diffusion Models [75.20168902300166]
We propose TrackDiffusion, a novel video generation framework affording fine-grained trajectory-conditioned motion control.
A pivotal component of TrackDiffusion is the instance enhancer, which explicitly ensures inter-frame consistency of multiple objects.
The video sequences generated by TrackDiffusion can be used as training data for visual perception models.
arXiv Detail & Related papers (2023-12-01T15:24:38Z)
- Multi-Scale Control Signal-Aware Transformer for Motion Synthesis without Phase [72.01862340497314]
We propose a task-agnostic deep learning method, namely the Multi-scale Control Signal-aware Transformer (MCS-T).
MCS-T is able to successfully generate motions comparable to those generated by methods that use auxiliary information.
arXiv Detail & Related papers (2023-03-03T02:56:44Z)
- MoDi: Unconditional Motion Synthesis from Diverse Data [51.676055380546494]
We present MoDi, an unconditional generative model that synthesizes diverse motions.
Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset.
We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered.
arXiv Detail & Related papers (2022-06-16T09:06:25Z)
- AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control [145.61135774698002]
We propose a fully automated approach to selecting motions for a character to track in a given scenario.
High-level task objectives that the character should perform can be specified by relatively simple reward functions.
Low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques.
arXiv Detail & Related papers (2021-04-05T22:43:14Z)
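The AMP summary above pairs a simple task reward with a low-level style specified by motion clips; in the AMP paper the style signal comes from an adversarial discriminator over state transitions. A minimal sketch of that reward combination follows, using the least-squares discriminator-to-reward mapping from the paper; the discriminator architecture, feature sizes, and weights here are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Placeholder discriminator over state transitions (s, s'); the real
# observation features are defined by the AMP implementation, not here.
disc = nn.Sequential(nn.Linear(2 * 32, 256), nn.ReLU(), nn.Linear(256, 1))

def style_reward(s, s_next):
    """AMP-style reward: r = max(0, 1 - 0.25 * (D(s, s') - 1)^2)."""
    d = disc(torch.cat([s, s_next], dim=-1))
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)

def total_reward(task_r, s, s_next, w_task=0.5, w_style=0.5):
    # Task objective (e.g. reach a target) plus imitation style term.
    return w_task * task_r + w_style * style_reward(s, s_next).squeeze(-1)

s, s_next = torch.randn(4, 32), torch.randn(4, 32)
r = total_reward(torch.ones(4), s, s_next)
```

Clamping the mapped discriminator output keeps the style reward bounded, which stabilizes the reinforcement learning that drives the character controller.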
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.