Affective Movement Generation using Laban Effort and Shape and Hidden
Markov Models
- URL: http://arxiv.org/abs/2006.06071v1
- Date: Wed, 10 Jun 2020 21:24:26 GMT
- Title: Affective Movement Generation using Laban Effort and Shape and Hidden
Markov Models
- Authors: Ali Samadani, Rob Gorbet, Dana Kulic
- Abstract summary: This paper presents an approach for automatic affective movement generation that makes use of two movement abstractions: 1) Laban movement analysis (LMA), and 2) hidden Markov modeling.
The LMA provides a systematic tool for an abstract representation of the kinematic and expressive characteristics of movements.
An HMM abstraction of the identified movements is obtained and used with the desired motion path to generate a novel movement that conveys the target emotion.
The efficacy of the proposed approach in generating movements with recognizable target emotions is assessed using a validated automatic recognition model and a user study.
- Score: 6.181642248900806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Body movements are an important communication medium through which affective
states can be discerned. Movements that convey affect can also give machines
life-like attributes and help to create a more engaging human-machine
interaction. This paper presents an approach for automatic affective movement
generation that makes use of two movement abstractions: 1) Laban movement
analysis (LMA), and 2) hidden Markov modeling. The LMA provides a systematic
tool for an abstract representation of the kinematic and expressive
characteristics of movements. Given a desired motion path on which a target
emotion is to be overlaid, the proposed approach searches a labeled dataset in
the LMA Effort and Shape space for movements that are similar to the desired
motion path and convey the target emotion. An HMM abstraction of the
identified movements is then obtained and used, together with the desired
motion path, to generate a novel movement: a modulated version of the desired
path that conveys the target emotion. The extent of modulation can be varied,
trading off between kinematic and affective constraints in the generated
movement. The proposed approach is
tested using a full-body movement dataset. The efficacy of the proposed
approach in generating movements with recognizable target emotions is assessed
using a validated automatic recognition model and a user study. The target
emotions were correctly recognized from the generated movements at a rate of
72% using the recognition model. Furthermore, participants in the user study
were able to correctly perceive the target emotions from a sample of generated
movements, although some cases of confusion were also observed.
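To make the retrieval-and-modulation pipeline concrete, below is a minimal Python sketch. The Effort/Shape descriptors, dataset layout, and blending rule are illustrative assumptions; the paper's actual method trains an HMM on the retrieved movements and generates from that abstraction rather than blending trajectories directly.

```python
import numpy as np

def effort_shape_features(motion):
    """Toy stand-in for LMA Effort/Shape descriptors of a (T, D) trajectory."""
    vel = np.diff(motion, axis=0)
    acc = np.diff(vel, axis=0)
    return np.array([
        np.linalg.norm(vel, axis=1).mean(),  # ~Effort (Time): sudden vs. sustained
        np.linalg.norm(acc, axis=1).mean(),  # ~Effort (Weight): strong vs. light
        motion.std(axis=0).mean(),           # ~Shape: spatial extent of the movement
    ])

def retrieve_similar(desired, dataset, target_emotion, k=3):
    """Nearest neighbours to `desired` in Effort/Shape space among movements
    labeled with `target_emotion`; `dataset` is a list of (motion, emotion)."""
    query = effort_shape_features(desired)
    scored = [(np.linalg.norm(effort_shape_features(m) - query), m)
              for m, emotion in dataset if emotion == target_emotion]
    scored.sort(key=lambda pair: pair[0])
    return [m for _, m in scored[:k]]

def modulate(desired, prototypes, alpha=0.5):
    """Blend the desired path toward the retrieved prototypes; alpha trades off
    kinematic fidelity (alpha -> 0) against affective expression (alpha -> 1)."""
    prototype = np.mean([p[: len(desired)] for p in prototypes], axis=0)
    return (1 - alpha) * desired + alpha * prototype
```

Varying `alpha` here mimics the paper's adjustable extent of modulation between kinematic and affective constraints.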
Related papers
- MotionGPT-2: A General-Purpose Motion-Language Model for Motion Generation and Understanding [76.30210465222218]
MotionGPT-2 is a unified Large Motion-Language Model (LMLM).
It supports multimodal control conditions through pre-trained Large Language Models (LLMs).
It is highly adaptable to the challenging 3D holistic motion generation task.
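As a rough illustration of how motion can enter a language model's vocabulary, the sketch below quantizes motion frames against a learned codebook (VQ-style) to produce discrete token ids; the codebook size, feature dimension, and quantizer are assumptions, not MotionGPT-2's actual tokenizer.

```python
import numpy as np

codebook = np.random.randn(512, 64)  # 512 motion "words", 64-d features (assumed)

def tokenize_motion(frames):
    """Map each 64-d motion frame to its nearest codebook entry (VQ-style)."""
    dists = ((frames[:, None, :] - codebook[None]) ** 2).sum(-1)
    return dists.argmin(axis=1)  # token ids usable alongside text token ids

tokens = tokenize_motion(np.random.randn(30, 64))  # (30,) motion token ids
```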
arXiv Detail & Related papers (2024-10-29T05:25:34Z)
- Semantics-aware Motion Retargeting with Vision-Language Models [19.53696208117539]
We present a novel Semantics-aware Motion reTargeting (SMT) method that leverages vision-language models to extract and maintain meaningful motion semantics.
We utilize a differentiable module to render 3D motions; high-level motion semantics are incorporated into the retargeting process by feeding the rendered results to the vision-language model and aligning the extracted semantic embeddings.
To ensure the preservation of fine-grained motion details and high-level semantics, we adopt a two-stage pipeline consisting of skeleton-aware pre-training and fine-tuning with semantics and geometry constraints.
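A hedged sketch of the semantics-alignment idea: embed renderings of the source and retargeted motions with a vision-language encoder and penalize their divergence. The placeholder encoder and loss below are assumptions, not SMT's actual rendering or alignment pipeline.

```python
import torch
import torch.nn.functional as F

def semantic_alignment_loss(src_render, tgt_render, vl_encoder):
    """Cosine distance between vision-language embeddings of the source and
    retargeted renderings; minimizing it preserves high-level semantics."""
    e_src = vl_encoder(src_render)
    e_tgt = vl_encoder(tgt_render)
    return 1.0 - F.cosine_similarity(e_src, e_tgt, dim=-1).mean()

# Stand-in for a real vision-language image encoder (hypothetical):
vl_encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 512))
loss = semantic_alignment_loss(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64), vl_encoder)
```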
arXiv Detail & Related papers (2023-12-04T15:23:49Z)
- MoEmo Vision Transformer: Integrating Cross-Attention and Movement Vectors in 3D Pose Estimation for HRI Emotion Detection [4.757210144179483]
We introduce MoEmo (Motion to Emotion), a cross-attention vision transformer (ViT) for human emotion detection within robotics systems.
We implement a cross-attention fusion model to combine movement vectors and environment contexts into a joint representation to derive emotion estimation.
We train the MoEmo system to jointly analyze motion and context, yielding emotion detection that outperforms the current state-of-the-art.
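A hedged sketch of cross-attention fusion between movement vectors and scene context, in the spirit of MoEmo; the dimensions, pooling, and number of emotion classes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=256, heads=4, n_emotions=7):
        super().__init__()
        # Movement tokens attend to context tokens (queries come from motion).
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, n_emotions)

    def forward(self, motion_tokens, context_tokens):
        fused, _ = self.attn(motion_tokens, context_tokens, context_tokens)
        return self.head(fused.mean(dim=1))  # pooled joint representation

model = CrossAttentionFusion()
motion = torch.randn(2, 30, 256)   # e.g., per-frame 3D-pose movement vectors
context = torch.randn(2, 10, 256)  # e.g., environment feature tokens
logits = model(motion, context)    # (2, n_emotions)
```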
arXiv Detail & Related papers (2023-10-15T06:52:15Z)
- Priority-Centric Human Motion Generation in Discrete Latent Space [59.401128190423535]
We introduce a Priority-Centric Motion Discrete Diffusion Model (M2DM) for text-to-motion generation.
M2DM incorporates a global self-attention mechanism and a regularization term to counteract code collapse.
We also present a motion discrete diffusion model that employs an innovative noise schedule, determined by the significance of each motion token.
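The sketch below illustrates one way a priority-aware noise schedule can work for discrete tokens: lower-importance tokens are corrupted earlier in the forward process. The importance scores and masking rule are stand-ins, not M2DM's actual design.

```python
import numpy as np

def forward_diffuse(tokens, importance, t, T, mask_id=-1):
    """Mask a t/T fraction of tokens, starting from the least important."""
    order = np.argsort(importance)          # least important tokens first
    n_mask = int(len(tokens) * t / T)
    noisy = tokens.copy()
    noisy[order[:n_mask]] = mask_id
    return noisy

tokens = np.array([12, 7, 42, 3, 28])
importance = np.array([0.9, 0.1, 0.7, 0.2, 0.5])
print(forward_diffuse(tokens, importance, t=2, T=5))  # masks the 2 least important
```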
arXiv Detail & Related papers (2023-08-28T10:40:16Z)
- MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as DanceTrack and SportsMOT.
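A minimal sketch of what a learnable motion predictor can look like: a small recurrent network that maps a history of bounding boxes to the next box. The architecture and box parameterization are assumptions, not MotionTrack's design.

```python
import torch
import torch.nn as nn

class MotionPredictor(nn.Module):
    def __init__(self, box_dim=4, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(box_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, box_dim)

    def forward(self, history):   # history: (B, T, 4) past boxes
        _, h = self.rnn(history)
        return self.out(h[-1])    # predicted next box, (B, 4)

pred = MotionPredictor()(torch.randn(2, 10, 4))  # (2, 4)
```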
arXiv Detail & Related papers (2023-06-05T04:24:11Z)
- Human MotionFormer: Transferring Human Motions with Vision Transformers [73.48118882676276]
Human motion transfer aims to transfer motions from a target dynamic person to a source static one for motion synthesis.
We propose Human MotionFormer, a hierarchical ViT framework that leverages global and local perceptions to capture large and subtle motion matching.
Experiments show that our Human MotionFormer sets the new state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2023-02-22T11:42:44Z)
- Motion Gait: Gait Recognition via Motion Excitation [5.559482051571756]
We propose the Motion Excitation Module (MEM) to guide spatio-temporal features to focus on human parts with large dynamic changes.
MEM learns the difference information between frames and intervals, so as to obtain a representation of temporal motion changes.
We also present the Fine Feature Extractor (FFE), which independently learns from the spatio-temporal representations of the horizontal body parts of individuals.
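To illustrate the motion-excitation idea, the sketch below uses frame differences to gate features toward strongly moving regions; the gating design and tensor shapes are assumptions, and the real MEM is more elaborate.

```python
import torch
import torch.nn as nn

class MotionExcitation(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, feats):                        # feats: (B, T, C, H, W)
        diff = feats[:, 1:] - feats[:, :-1]          # frame-to-frame changes
        diff = torch.cat([diff, diff[:, -1:]], 1)    # pad back to length T
        B, T, C, H, W = diff.shape
        weights = self.gate(diff.reshape(B * T, C, H, W)).reshape(B, T, C, H, W)
        return feats * weights                       # emphasize moving parts

mem = MotionExcitation(channels=32)
excited = mem(torch.randn(2, 8, 32, 16, 16))         # same shape as input
```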
arXiv Detail & Related papers (2022-06-22T13:47:14Z)
- Property-Aware Robot Object Manipulation: a Generative Approach [57.70237375696411]
In this work, we focus on how to generate robot motion adapted to the hidden properties of the manipulated objects.
We explore the possibility of leveraging Generative Adversarial Networks to synthesize new actions coherent with the properties of the object.
Our results show that Generative Adversarial Nets can be a powerful tool for the generation of novel and meaningful transportation actions.
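A minimal sketch of conditioning a generator on object properties to synthesize transport motions; the network sizes, property encoding (e.g., weight, fragility), and trajectory format are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class TrajectoryGenerator(nn.Module):
    def __init__(self, noise_dim=16, prop_dim=4, horizon=50, dof=7):
        super().__init__()
        self.horizon, self.dof = horizon, dof
        self.net = nn.Sequential(
            nn.Linear(noise_dim + prop_dim, 128), nn.ReLU(),
            nn.Linear(128, horizon * dof),
        )

    def forward(self, z, properties):  # properties: hidden object attributes
        out = self.net(torch.cat([z, properties], dim=-1))
        return out.view(-1, self.horizon, self.dof)

gen = TrajectoryGenerator()
traj = gen(torch.randn(1, 16), torch.tensor([[0.8, 0.1, 0.3, 0.5]]))  # (1, 50, 7)
```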
arXiv Detail & Related papers (2021-06-08T14:15:36Z)
- Segmentation and Classification of EMG Time-Series During Reach-to-Grasp Motion [10.388787606334745]
We propose a framework for classifying EMG signals generated from continuous grasp movements with variations in dynamic arm/hand postures.
The proposed framework was evaluated in real time, with the variation in accuracy over time presented.
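An illustrative sliding-window pipeline for classifying continuous EMG: segment the stream into overlapping windows, extract a classic amplitude feature, and fit a classifier. The window sizes, RMS feature, and random-forest choice are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def windows(signal, win=200, step=50):
    """Segment a (T, channels) EMG stream into overlapping windows."""
    return np.stack([signal[s:s + win]
                     for s in range(0, len(signal) - win + 1, step)])

def rms_features(w):                      # classic EMG amplitude feature
    return np.sqrt((w ** 2).mean(axis=1))

emg = np.random.randn(2000, 8)            # fake 8-channel recording
X = rms_features(windows(emg))            # (n_windows, 8)
y = np.random.randint(0, 3, len(X))       # fake grasp labels
clf = RandomForestClassifier().fit(X, y)
```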
arXiv Detail & Related papers (2021-04-19T20:41:06Z)
- AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control [145.61135774698002]
We propose a fully automated approach to selecting motion for a character to track in a given scenario.
High-level task objectives that the character should perform can be specified by relatively simple reward functions.
Low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques.
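The sketch below shows the adversarial-motion-prior reward shape commonly used in AMP-style training: a task reward combined with a style reward derived from a discriminator trained on the motion clips. The discriminator, input dimensions, and weights are assumptions.

```python
import torch
import torch.nn as nn

# Stand-in discriminator over state-transition features (dimension assumed).
disc = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

def style_reward(transition):
    """Higher when the discriminator scores the transition as reference-like;
    this least-squares shaping is common in AMP-style implementations."""
    d = disc(transition)
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)

def total_reward(task_r, transition, w_task=0.5, w_style=0.5):
    """Combine a simple task objective with the learned style reward."""
    return w_task * task_r + w_style * style_reward(transition)

r = total_reward(torch.tensor([[0.3]]), torch.randn(1, 64))
```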
arXiv Detail & Related papers (2021-04-05T22:43:14Z)
- Self-supervised Motion Learning from Static Images [36.85209332144106]
Motion from Static Images (MoSI) learns to encode motion information.
We demonstrate that MoSI can discover regions with large motion even without fine-tuning on the downstream datasets.
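A sketch of the MoSI-style trick of manufacturing motion supervision from a static image: slide a crop window in a known direction so the only motion in the resulting clip is the known translation, which then serves as the label. Crop sizes and the labeling scheme are illustrative assumptions.

```python
import numpy as np

def pseudo_motion_clip(image, n_frames=8, crop=64, dx=4, dy=0):
    """Generate a fake clip whose only motion is a known (dx, dy) translation;
    the (dx, dy) label then supervises a motion encoder."""
    frames = []
    for t in range(n_frames):
        y, x = t * dy, t * dx
        frames.append(image[y:y + crop, x:x + crop])
    return np.stack(frames), (dx, dy)

img = np.random.rand(128, 128, 3)
clip, label = pseudo_motion_clip(img)    # clip: (8, 64, 64, 3)
```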
arXiv Detail & Related papers (2021-04-01T03:55:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.