Data-Driven Reinforcement Learning for Virtual Character Animation Control
- URL: http://arxiv.org/abs/2104.06358v1
- Date: Tue, 13 Apr 2021 17:05:27 GMT
- Title: Data-Driven Reinforcement Learning for Virtual Character Animation Control
- Authors: Vihanga Gamage, Cathy Ennis, Robert Ross
- Abstract summary: Social behaviours are challenging to design reward functions for because they involve little physical interaction with the world.
We propose RLAnimate, a novel data-driven deep RL approach to address this challenge.
We formalise a mathematical structure for training agents by refining the conceptual roles of elements such as agents, environments, states and actions.
An agent trained using our approach learns versatile animation dynamics to portray multiple behaviours, using an iterative RL training process.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Virtual character animation control is a problem for which Reinforcement
Learning (RL) is a viable approach. While current work has applied RL
effectively to portray physics-based skills, social behaviours are challenging
to design reward functions for, due to their lack of physical interaction with
the world. On the other hand, data-driven implementations for these skills have
been limited to supervised learning methods which require extensive training
data and carry constraints on generalisability. In this paper, we propose
RLAnimate, a novel data-driven deep RL approach to address this challenge,
where we combine the strengths of RL with the ability to learn from a
motion dataset when creating agents. We formalise a mathematical structure for
training agents by refining the conceptual roles of elements such as agents,
environments, states and actions, in a way that leverages attributes of the
character animation domain and model-based RL. An agent trained using our
approach learns versatile animation dynamics to portray multiple behaviours,
using an iterative RL training process, which becomes aware of valid behaviours
via representations learnt from motion capture clips. We demonstrate, by
training agents that portray realistic pointing and waving behaviours, that our
approach requires a significantly lower training time, and substantially fewer
sample episodes to be generated during training relative to state-of-the-art
physics-based RL methods. Also, compared to existing supervised learning-based
animation agents, RLAnimate needs only a limited dataset of motion clips to generate
representations of valid behaviours during training.
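The abstract describes an iterative, model-based training loop in which representations learnt from motion capture clips tell the agent which behaviours are valid. The paper's code is not reproduced here, so the sketch below only illustrates that general pattern; every class name, dimension, and loss is an assumption, not the authors' implementation.

```python
# Illustrative sketch only: a behaviour encoder over mocap clips plus a
# learned dynamics model, the two ingredients the abstract names. All
# names, shapes, and losses are assumptions, not RLAnimate itself.
import torch
import torch.nn as nn

class MotionEncoder(nn.Module):
    """Encodes a motion-capture clip (B, T, pose_dim) into a behaviour vector."""
    def __init__(self, pose_dim=32, hidden=64, z_dim=16):
        super().__init__()
        self.rnn = nn.GRU(pose_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, z_dim)

    def forward(self, clip):                       # clip: (B, T, pose_dim)
        _, h = self.rnn(clip)                      # h: (1, B, hidden)
        return self.head(h.squeeze(0))             # (B, z_dim)

class DynamicsModel(nn.Module):
    """Predicts the next pose from the current pose and an action."""
    def __init__(self, pose_dim=32, act_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + act_dim, 128), nn.ReLU(),
            nn.Linear(128, pose_dim))

    def forward(self, pose, action):
        return self.net(torch.cat([pose, action], dim=-1))

encoder, dynamics = MotionEncoder(), DynamicsModel()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(dynamics.parameters()), lr=1e-3)

clip = torch.randn(4, 50, 32)                      # stand-in mocap batch
acts = torch.randn(4, 49, 8)                       # stand-in per-step actions
behaviour = encoder(clip)                          # behaviour representation; in a
                                                   # full loop it would condition the
                                                   # RL policy (omitted here)
pred = dynamics(clip[:, :-1], acts)                # one-step pose predictions
loss = ((pred - clip[:, 1:]) ** 2).mean()          # fit dynamics to clip transitions
opt.zero_grad(); loss.backward(); opt.step()
```

In a full system of this shape, the dynamics model would additionally generate imagined rollouts for the iterative RL policy update the abstract describes.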
Related papers
- SuperPADL: Scaling Language-Directed Physics-Based Control with Progressive Supervised Distillation [55.47473138423572]
We introduce SuperPADL, a scalable framework for physics-based text-to-motion.
SuperPADL trains controllers on thousands of diverse motion clips using RL and supervised learning.
Our controller is trained on a dataset containing over 5000 skills and runs in real time on a consumer GPU.
arXiv Detail & Related papers (2024-07-15T07:07:11Z)
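The SuperPADL entry above combines RL-trained skill controllers with progressive supervised distillation. As a rough illustration of the distillation half only, the hypothetical sketch below clones several teacher policies into one student; all shapes and names are assumptions, not SuperPADL's architecture.

```python
# Sketch of supervised policy distillation, the pattern the SuperPADL
# summary names. Teachers/student below are illustrative stand-ins.
import torch
import torch.nn as nn

def make_policy(obs_dim=64, act_dim=16):
    return nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                         nn.Linear(128, act_dim))

teachers = [make_policy() for _ in range(3)]      # e.g. one RL-trained expert per skill
student = make_policy()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    skill = step % len(teachers)                  # cycle through skills
    obs = torch.randn(32, 64)                     # stand-in observations for that skill
    with torch.no_grad():
        target = teachers[skill](obs)             # teacher action as supervision
    loss = ((student(obs) - target) ** 2).mean()  # behaviour-cloning regression
    opt.zero_grad(); loss.backward(); opt.step()
```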
- ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP).
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z)
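The ALP entry above pairs an RL policy objective with an inverse dynamics prediction objective. The sketch below shows a generic inverse dynamics auxiliary loss (predicting the action that connects two consecutive observations through a shared encoder); the module names and dimensions are illustrative assumptions, not ALP's released code.

```python
# Generic inverse-dynamics auxiliary objective of the kind the ALP
# summary describes; encoder and head below are stand-ins.
import torch
import torch.nn as nn

obs_dim, act_dim, feat_dim = 48, 6, 32
encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))
inv_head = nn.Linear(2 * feat_dim, act_dim)       # predicts a_t from (phi_t, phi_{t+1})
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(inv_head.parameters()), lr=3e-4)

obs_t = torch.randn(64, obs_dim)                  # stand-in transition batch
obs_tp1 = torch.randn(64, obs_dim)
act_t = torch.randn(64, act_dim)

phi_t, phi_tp1 = encoder(obs_t), encoder(obs_tp1)
pred_act = inv_head(torch.cat([phi_t, phi_tp1], dim=-1))
inv_loss = ((pred_act - act_t) ** 2).mean()       # inverse dynamics regression
# In ALP-style training this term is added to the RL policy loss so the
# shared encoder learns action-aware features.
opt.zero_grad(); inv_loss.backward(); opt.step()
```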
- Model-Based Reinforcement Learning with Multi-Task Offline Pretraining [59.82457030180094]
We present a model-based RL method that learns to transfer potentially useful dynamics and action demonstrations from offline data to a novel task.
The main idea is to use the world models not only as simulators for behavior learning but also as tools to measure the task relevance.
We demonstrate the advantages of our approach compared with the state-of-the-art methods in Meta-World and DeepMind Control Suite.
arXiv Detail & Related papers (2023-06-06T02:24:41Z)
- Adaptive action supervision in reinforcement learning from real-world multi-agent demonstrations [10.174009792409928]
We propose a method for adaptive action supervision in RL from real-world demonstrations in multi-agent scenarios.
In the experiments, using chase-and-escape and football tasks with different dynamics between the unknown source and target environments, we show that our approach achieved a balance between reproduction of the demonstrations and generalization ability compared with the baselines.
arXiv Detail & Related papers (2023-05-22T13:33:37Z)
- ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters [123.88692739360457]
General-purpose motor skills enable humans to perform complex tasks.
These skills also provide powerful priors for guiding their behaviors when learning new tasks.
We present a framework for learning versatile and reusable skill embeddings for physically simulated characters.
arXiv Detail & Related papers (2022-05-04T06:13:28Z)
- A Survey on Reinforcement Learning Methods in Character Animation [22.3342752080749]
Reinforcement Learning is an area of Machine Learning focused on how agents can be trained to make sequential decisions.
This paper surveys the modern Deep Reinforcement Learning methods and discusses their possible applications in Character Animation.
arXiv Detail & Related papers (2022-03-07T23:39:00Z)
- UniCon: Universal Neural Controller For Physics-based Character Motion [70.45421551688332]
We propose a physics-based universal neural controller (UniCon) that learns to master thousands of motions with different styles by learning on large-scale motion datasets.
UniCon can support keyboard-driven control, compose motion sequences drawn from a large pool of locomotion and acrobatics skills and teleport a person captured on video to a physics-based virtual avatar.
arXiv Detail & Related papers (2020-11-30T18:51:16Z)
- AWAC: Accelerating Online Reinforcement Learning with Offline Datasets [84.94748183816547]
We show that our method, advantage weighted actor critic (AWAC), enables rapid learning of skills with a combination of prior demonstration data and online experience.
Our results show that incorporating prior data can reduce the time required to learn a range of robotic skills to practical time-scales.
arXiv Detail & Related papers (2020-06-16T17:54:41Z)
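AWAC's core actor update is well documented: a log-likelihood objective on offline state-action pairs, weighted by the exponentiated advantage. The sketch below shows that update in isolation, with a faked advantage standing in for a real critic; batch shapes, the fixed policy scale, and the temperature value are assumptions.

```python
# Sketch of the AWAC-style actor update: advantage-weighted regression
# on offline (s, a) pairs. Advantage values here are faked stand-ins
# for critic estimates Q(s, a) - V(s).
import torch
import torch.nn as nn

obs_dim, act_dim, lam = 17, 6, 1.0                # lam: temperature (assumed value)
actor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, act_dim))
opt = torch.optim.Adam(actor.parameters(), lr=3e-4)

obs = torch.randn(256, obs_dim)                   # offline batch
act = torch.randn(256, act_dim)
advantage = torch.randn(256)                      # would come from the critic

dist = torch.distributions.Normal(actor(obs), 1.0)
log_prob = dist.log_prob(act).sum(-1)             # log pi(a|s) per sample
weight = torch.exp(advantage / lam).clamp(max=20.0)  # advantage weighting, clipped
loss = -(weight.detach() * log_prob).mean()       # weighted max-likelihood
opt.zero_grad(); loss.backward(); opt.step()
```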
- Towards Learning to Imitate from a Single Video Demonstration [11.15358253586118]
We develop a reinforcement learning agent that can learn to imitate motion from a given video observation.
We use a Siamese recurrent neural network architecture to learn rewards in space and time between motion clips.
We demonstrate our approach on simulated humanoid, dog, and raptor agents in 2D and a quadruped and a humanoid in 3D.
arXiv Detail & Related papers (2019-01-22T06:46:19Z)
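The last entry above learns rewards with a Siamese recurrent network over motion clips. Below is a minimal sketch of that idea, assuming a shared GRU encoder and negative embedding distance as the reward; the actual method's input features and distance measure are not specified here.

```python
# Sketch of a Siamese recurrent reward: embed the agent's motion and a
# reference clip with one shared GRU and use negative embedding distance
# as reward. Dimensions and the distance choice are assumptions.
import torch
import torch.nn as nn

class SiameseEncoder(nn.Module):
    def __init__(self, feat_dim=24, hidden=64, z_dim=16):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, z_dim)

    def forward(self, seq):                       # seq: (B, T, feat_dim)
        _, h = self.rnn(seq)
        return self.head(h.squeeze(0))            # (B, z_dim)

enc = SiameseEncoder()
agent_motion = torch.randn(1, 30, 24)             # stand-in agent rollout features
reference = torch.randn(1, 30, 24)                # stand-in demonstration features
with torch.no_grad():
    z_a, z_r = enc(agent_motion), enc(reference)
    reward = -torch.norm(z_a - z_r, dim=-1)       # closer embeddings => higher reward
print(reward.item())
```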