MAAIP: Multi-Agent Adversarial Interaction Priors for imitation from
fighting demonstrations for physics-based characters
- URL: http://arxiv.org/abs/2311.02502v1
- Date: Sat, 4 Nov 2023 20:40:39 GMT
- Title: MAAIP: Multi-Agent Adversarial Interaction Priors for imitation from
fighting demonstrations for physics-based characters
- Authors: Mohamed Younes, Ewa Kijak, Richard Kulpa, Simon Malinowski, Franck
Multon
- Abstract summary: We propose a novel Multi-Agent Generative Adversarial Imitation Learning based approach.
Our system trains control policies allowing each character to imitate the interactive skills associated with each actor.
This approach has been tested on two different fighting styles, boxing and full-body martial arts, to demonstrate the ability of the method to imitate different styles.
- Score: 5.303375034962503
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simulating realistic interactions and motions for physics-based
characters is of great interest for interactive applications and for automatic
secondary character animation in the movie and video game industries. Recent
works in reinforcement learning have demonstrated impressive results for
single-character simulation, especially those based on imitation learning.
However, imitating the motions of multiple characters also requires modeling
their interactions. In this paper, we propose a novel Multi-Agent Generative
Adversarial Imitation Learning based approach that generalizes the idea of
motion imitation for a single character to handle both the interactions and
the motions of multiple physics-based characters. Two unstructured
datasets are given as inputs: 1) a single-actor dataset containing motions of a
single actor performing a set of motions linked to a specific application, and
2) an interaction dataset containing a few examples of interactions between
multiple actors. Based on these datasets, our system trains control policies
allowing each character to imitate the interactive skills associated with each
actor, while preserving their intrinsic style. The approach has been tested on
two different fighting styles, boxing and full-body martial arts, demonstrating
its ability to imitate different styles.
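To make the adversarial-imitation idea in the abstract concrete, below is a minimal Python/PyTorch sketch of how per-character "style" discriminators (trained on the single-actor dataset) and a shared interaction discriminator (trained on the interaction dataset) could provide imitation rewards to the control policies. This is not the authors' implementation: the class and function names, network sizes, feature dimensions, and the 0.5 reward weights are all assumptions made for illustration.

```python
# Sketch (not the authors' code) of a multi-agent adversarial imitation prior:
# each character has a discriminator trained on single-actor motion clips, plus
# a shared discriminator trained on multi-actor interaction clips. Policies are
# rewarded by how demonstration-like their transitions look to these networks.

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Scores a (state, next_state) transition; higher = more demonstration-like."""
    def __init__(self, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))

def style_reward(disc: Discriminator, s, s_next):
    # GAIL-style reward: higher when the discriminator believes the
    # transition came from the demonstrations.
    return -torch.log(1.0 - torch.sigmoid(disc(s, s_next)) + 1e-6)

def discriminator_loss(disc, demo_s, demo_sn, policy_s, policy_sn):
    # Binary cross-entropy: demonstration transitions labelled 1, policy rollouts 0.
    bce = nn.BCEWithLogitsLoss()
    demo_logits = disc(demo_s, demo_sn)
    policy_logits = disc(policy_s, policy_sn)
    return bce(demo_logits, torch.ones_like(demo_logits)) + \
           bce(policy_logits, torch.zeros_like(policy_logits))

# --- toy usage with random tensors standing in for motion features (assumed sizes) ---
obs_dim, joint_dim, batch = 64, 128, 32
solo_disc = [Discriminator(obs_dim) for _ in range(2)]   # one per character
inter_disc = Discriminator(joint_dim)                     # joint interaction features

demo_solo = torch.randn(batch, obs_dim), torch.randn(batch, obs_dim)
roll_solo = torch.randn(batch, obs_dim), torch.randn(batch, obs_dim)
demo_joint = torch.randn(batch, joint_dim), torch.randn(batch, joint_dim)
roll_joint = torch.randn(batch, joint_dim), torch.randn(batch, joint_dim)

# Per-character reward mixes the solo (style) and interaction terms;
# the 0.5 weights are placeholders, not values from the paper.
r_agent0 = 0.5 * style_reward(solo_disc[0], *roll_solo) \
         + 0.5 * style_reward(inter_disc, *roll_joint)

d_loss = discriminator_loss(solo_disc[0], *demo_solo, *roll_solo) \
       + discriminator_loss(inter_disc, *demo_joint, *roll_joint)
d_loss.backward()   # in a full setup, optimizer steps on the discriminators would
                    # alternate with policy updates (e.g. PPO) maximizing r_agent0
print(r_agent0.mean().item(), d_loss.item())
```

In such a setup, discriminator updates and policy updates alternate each iteration, so the reward signal adapts as the simulated characters' motions improve.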
Related papers
- Versatile Motion Language Models for Multi-Turn Interactive Agents [28.736843383405603]
We introduce the Versatile Interactive Motion language model, which integrates both language and motion modalities.
We evaluate the versatility of our method across motion-related tasks: motion-to-text, text-to-motion, reaction generation, motion editing, and reasoning about motion sequences.
arXiv Detail & Related papers (2024-10-08T02:23:53Z)
- Generating Human Interaction Motions in Scenes with Text Control [66.74298145999909]
We present TeSMo, a method for text-controlled scene-aware motion generation based on denoising diffusion models.
Our approach begins with pre-training a scene-agnostic text-to-motion diffusion model.
To facilitate training, we embed annotated navigation and interaction motions within scenes.
arXiv Detail & Related papers (2024-04-16T16:04:38Z)
- ReMoS: 3D Motion-Conditioned Reaction Synthesis for Two-Person Interactions [66.87211993793807]
We present ReMoS, a denoising-diffusion-based model that synthesizes the full-body motion of a person in a two-person interaction scenario.
We demonstrate ReMoS across challenging two-person scenarios such as pair dancing, Ninjutsu, kickboxing, and acrobatics.
We also contribute the ReMoCap dataset for two-person interactions, containing full-body and finger motions.
arXiv Detail & Related papers (2023-11-28T18:59:52Z)
- Synthesizing Physical Character-Scene Interactions [64.26035523518846]
It is necessary to synthesize interactions between virtual characters and their surroundings.
We present a system that uses adversarial imitation learning and reinforcement learning to train physically-simulated characters.
Our approach takes physics-based character motion generation a step closer to broad applicability.
arXiv Detail & Related papers (2023-02-02T05:21:32Z)
- Interaction Mix and Match: Synthesizing Close Interaction using Conditional Hierarchical GAN with Multi-Hot Class Embedding [4.864897201841002]
We propose a novel way to create realistic human reactive motions by mixing and matching different types of close interactions.
Experiments are conducted on both noisy (depth-based) and high-quality (versa-based) interaction datasets.
arXiv Detail & Related papers (2022-07-23T16:13:10Z)
- Dual-AI: Dual-path Actor Interaction Learning for Group Activity Recognition [103.62363658053557]
We propose a Dual-path Actor Interaction (Dual-AI) framework, which flexibly arranges spatial and temporal transformers.
We also introduce a novel Multi-scale Actor Contrastive Loss (MAC-Loss) between two interactive paths of Dual-AI.
Our Dual-AI can boost group activity recognition by fusing distinct discriminative features of different actors.
arXiv Detail & Related papers (2022-04-05T12:17:40Z)
- Triangular Character Animation Sampling with Motion, Emotion, and Relation [78.80083186208712]
We present a novel framework to sample and synthesize animations by associating the characters' body motions, facial expressions, and social relations.
Our method can provide animators with an automatic way to generate 3D character animations, help synthesize interactions between Non-Player Characters (NPCs), and enhance machine emotion intelligence in virtual reality (VR).
arXiv Detail & Related papers (2022-03-09T18:19:03Z)
- UniCon: Universal Neural Controller For Physics-based Character Motion [70.45421551688332]
We propose a physics-based universal neural controller (UniCon) that learns to master thousands of motions with different styles by learning on large-scale motion datasets.
UniCon can support keyboard-driven control, compose motion sequences drawn from a large pool of locomotion and acrobatics skills, and teleport a person captured on video to a physics-based virtual avatar.
arXiv Detail & Related papers (2020-11-30T18:51:16Z)