A Survey on Reinforcement Learning Methods in Character Animation
- URL: http://arxiv.org/abs/2203.04735v1
- Date: Mon, 7 Mar 2022 23:39:00 GMT
- Title: A Survey on Reinforcement Learning Methods in Character Animation
- Authors: Ariel Kwiatkowski, Eduardo Alvarado, Vicky Kalogeiton, C. Karen Liu,
Julien Pettré, Michiel van de Panne, Marie-Paule Cani
- Abstract summary: Reinforcement Learning is an area of Machine Learning focused on how agents can be trained to make sequential decisions.
This paper surveys modern Deep Reinforcement Learning methods and discusses their possible applications in Character Animation.
- Score: 22.3342752080749
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement Learning is an area of Machine Learning focused on how agents
can be trained to make sequential decisions and achieve a particular goal
within an arbitrary environment. While learning, they repeatedly take actions
based on their observations of the environment and receive rewards that
define the objective. This experience is then used to progressively
improve the policy controlling the agent's behavior, typically represented by a
neural network. This trained module can then be reused for similar problems,
which makes this approach promising for the animation of autonomous, yet
reactive characters in simulators, video games or virtual reality environments.
This paper surveys modern Deep Reinforcement Learning methods and discusses
their possible applications in Character Animation, from skeletal control of a
single, physically-based character to navigation controllers for individual
agents and virtual crowds. It also describes the practical side of training DRL
systems, comparing the different frameworks available to build such agents.
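As a concrete illustration of the loop the abstract describes (observe, act, receive a reward, improve the policy), below is a minimal sketch in Python. The toy target-reaching task, the tabular softmax policy, and all hyperparameters are illustrative assumptions rather than material from the survey; real character-animation policies replace the table with a neural network and the toy environment with a physics simulator.

```python
# Minimal REINFORCE sketch of the RL loop: act, observe rewards, improve
# the policy. The environment is a 1D corridor where the agent must walk
# right to reach a goal state; everything here is a toy assumption.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 2               # states 0..4; actions: left, right
GOAL = 4                                 # reaching this state yields reward 1
theta = np.zeros((N_STATES, N_ACTIONS))  # policy parameters (a logits table)

def policy(state):
    """Softmax distribution over actions for the given state."""
    logits = theta[state]
    p = np.exp(logits - logits.max())
    return p / p.sum()

for episode in range(2000):
    state, trajectory = 0, []
    for _ in range(20):                  # roll out one episode
        probs = policy(state)
        action = rng.choice(N_ACTIONS, p=probs)
        next_state = max(0, min(N_STATES - 1, state + (1 if action else -1)))
        reward = 1.0 if next_state == GOAL else 0.0
        trajectory.append((state, action, reward))
        state = next_state
        if reward > 0:
            break
    # REINFORCE update: raise the log-probability of each taken action in
    # proportion to the discounted return that followed it.
    G = 0.0
    for s, a, r in reversed(trajectory):
        G = r + 0.99 * G
        grad = -policy(s)                # d log softmax / d logits ...
        grad[a] += 1.0                   # ... equals one_hot(a) - probs
        theta[s] += 0.1 * G * grad
```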
Related papers
- Learning of Generalizable and Interpretable Knowledge in Grid-Based Reinforcement Learning Environments [5.217870815854702]
We propose using program synthesis to imitate reinforcement learning policies.
We adapt the state-of-the-art program synthesis system DreamCoder for learning concepts in grid-based environments.
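As a loose illustration of imitating an RL policy with a synthesized program, the sketch below fits a tiny symbolic rule to (state, action) pairs by brute-force enumeration. The grid states, the rule template, and the demonstrations are invented for this example; DreamCoder itself learns a growing library of reusable concepts rather than enumerating one fixed template.

```python
# Hand-rolled stand-in for policy imitation by program synthesis: search a
# tiny space of symbolic rules for the one that best reproduces a policy's
# (state, action) pairs. States are hypothetical (agent_x, goal_x) tuples.
from itertools import product

demos = [((0, 3), 1), ((2, 3), 1), ((4, 1), 0), ((3, 0), 0), ((1, 1), 1)]

def make_rule(offset, action_if_true):
    """Candidate program: 'if agent_x + offset < goal_x, take this action'."""
    return lambda s: action_if_true if s[0] + offset < s[1] else 1 - action_if_true

best, best_score = None, -1
for offset, act in product(range(-2, 3), [0, 1]):
    rule = make_rule(offset, act)
    score = sum(rule(s) == a for s, a in demos)
    if score > best_score:
        best, best_score = (offset, act), score

print(f"best rule: agent_x + {best[0]} < goal_x -> action {best[1]} "
      f"({best_score}/{len(demos)} demos matched)")
```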
arXiv Detail & Related papers (2023-09-07T11:46:57Z)
- Adaptive Tracking of a Single-Rigid-Body Character in Various Environments [2.048226951354646]
We propose a deep reinforcement learning method based on the simulation of a single-rigid-body character.
Using the centroidal dynamics model (CDM) to express the full-body character as a single rigid body (SRB) and training a policy to track a reference motion, we can obtain a policy capable of adapting to various unobserved environmental changes.
We demonstrate that our policy, efficiently trained within 30 minutes on an ultraportable laptop, has the ability to cope with environments that have not been experienced during learning.
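To make the tracking setup more concrete, here is a sketch of the kind of reward such a policy can optimize: the simulated single rigid body's state is compared to the reference motion with exponentiated errors on position, orientation, and velocity. The weights and scales below are common motion-tracking choices (in the style of DeepMimic-like rewards), not values taken from this paper.

```python
# Sketch of a reference-tracking reward for a single-rigid-body character.
# All weights/scales are illustrative assumptions.
import numpy as np

def tracking_reward(sim, ref, w_pos=0.3, w_rot=0.4, w_vel=0.3):
    """sim/ref: dicts with 'pos' (3,), 'rot' (unit quaternion, 4,), 'vel' (3,)."""
    pos_err = np.sum((sim["pos"] - ref["pos"]) ** 2)
    # Quaternion geodesic distance: angle = 2 * arccos(|<q1, q2>|).
    dot = np.clip(abs(np.dot(sim["rot"], ref["rot"])), -1.0, 1.0)
    rot_err = (2.0 * np.arccos(dot)) ** 2
    vel_err = np.sum((sim["vel"] - ref["vel"]) ** 2)
    return (w_pos * np.exp(-2.0 * pos_err)
            + w_rot * np.exp(-1.0 * rot_err)
            + w_vel * np.exp(-0.1 * vel_err))

sim = {"pos": np.zeros(3), "rot": np.array([1.0, 0, 0, 0]), "vel": np.zeros(3)}
ref = {"pos": np.array([0.1, 0, 0]), "rot": np.array([1.0, 0, 0, 0]), "vel": np.zeros(3)}
print(tracking_reward(sim, ref))   # close to 1.0 when sim matches ref
```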
arXiv Detail & Related papers (2023-08-14T22:58:54Z)
- ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters [123.88692739360457]
General-purpose motor skills enable humans to perform complex tasks.
These skills also provide powerful priors for guiding their behaviors when learning new tasks.
We present a framework for learning versatile and reusable skill embeddings for physically simulated characters.
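A minimal sketch of the skill-embedding interface: the low-level policy is conditioned on a latent skill vector z, so behaviors can be selected by choosing z (randomly during pretraining, or by a high-level task policy afterwards). The network sizes and dimensions below are illustrative assumptions; ASE's adversarial training against a motion dataset is omitted entirely.

```python
# Skill-conditioned low-level policy: action = pi(obs, z). ASE constrains z
# to a hypersphere; sizes here are made up for illustration.
import torch
import torch.nn as nn

class SkillConditionedPolicy(nn.Module):
    def __init__(self, obs_dim=64, skill_dim=16, act_dim=28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + skill_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs, z):
        z = z / z.norm(dim=-1, keepdim=True)     # project z onto the unit sphere
        return self.net(torch.cat([obs, z], dim=-1))

policy = SkillConditionedPolicy()
obs = torch.randn(1, 64)                         # character/task observation
z = torch.randn(1, 16)                           # sampled skill during pretraining
action = policy(obs, z)                          # e.g. PD targets for the character
```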
arXiv Detail & Related papers (2022-05-04T06:13:28Z)
- Stochastic Coherence Over Attention Trajectory For Continuous Learning In Video Streams [64.82800502603138]
This paper proposes a novel neural-network-based approach to progressively and autonomously develop pixel-wise representations in a video stream.
The proposed method is based on a human-like attention mechanism that allows the agent to learn by observing what is moving in the attended locations.
Our experiments leverage 3D virtual environments and they show that the proposed agents can learn to distinguish objects just by observing the video stream.
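A very loose sketch of the motion-driven attention idea: temporal change between consecutive frames determines where the agent "looks", producing an attention trajectory along which representations can be learned. The frame-difference saliency and single attended pixel per step are simplifying assumptions; the paper's attention model and learning rule are considerably richer.

```python
# Toy motion-saliency attention: attend to the pixel that changed most
# between consecutive frames of a stream. Everything here is illustrative.
import numpy as np

def attended_location(prev_frame, frame):
    """Return (row, col) of the largest temporal change (motion saliency)."""
    saliency = np.abs(frame.astype(float) - prev_frame.astype(float)).sum(-1)
    return np.unravel_index(np.argmax(saliency), saliency.shape)

# Synthetic stream: a bright patch moving right across a static background.
H, W = 32, 32
frames = []
for t in range(5):
    img = np.zeros((H, W, 3), dtype=np.uint8)
    img[14:18, 4 + 4 * t : 8 + 4 * t] = 255
    frames.append(img)

trajectory = [attended_location(a, b) for a, b in zip(frames, frames[1:])]
print(trajectory)   # the attention trajectory follows the moving patch
```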
arXiv Detail & Related papers (2022-04-26T09:52:31Z)
- Action-Conditioned Contrastive Policy Pretraining [39.13710045468429]
Deep visuomotor policy learning achieves promising results in control tasks such as robotic manipulation and autonomous driving.
However, it requires a huge number of online interactions with the training environment, which limits its real-world application.
In this work, we aim to pretrain policy representations for driving tasks using hours-long uncurated YouTube videos.
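The title names the core mechanism, so here is a hedged sketch of an action-conditioned contrastive objective: frame embeddings that share a (pseudo-)action label are pulled together, all others pushed apart. The tiny setup, temperature, and discrete labels are assumptions for illustration; the paper derives pseudo-actions from unlabeled driving videos rather than using ground-truth labels.

```python
# InfoNCE-style loss where positives are frame pairs with matching
# (pseudo-)action labels. Shapes and labels below are illustrative.
import torch
import torch.nn.functional as F

def action_conditioned_contrastive_loss(features, actions, temperature=0.1):
    """features: (N, D) frame embeddings; actions: (N,) integer labels."""
    z = F.normalize(features, dim=-1)
    sim = z @ z.t() / temperature                     # pairwise similarities
    eye = torch.eye(len(actions), dtype=torch.bool)
    sim = sim.masked_fill(eye, float("-inf"))         # exclude self-pairs
    pos = (actions.unsqueeze(0) == actions.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    per_anchor = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return per_anchor[pos.any(1)].mean()              # anchors with >=1 positive

features = torch.randn(8, 32, requires_grad=True)     # stand-in encoder outputs
actions = torch.tensor([0, 1, 0, 2, 1, 0, 2, 1])      # pseudo-action labels
loss = action_conditioned_contrastive_loss(features, actions)
loss.backward()
```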
arXiv Detail & Related papers (2022-04-05T17:58:22Z)
- Evaluating Continual Learning Algorithms by Generating 3D Virtual Environments [66.83839051693695]
Continual learning refers to the ability of humans and animals to incrementally learn over time in a given environment.
We propose to leverage recent advances in 3D virtual environments in order to approach the automatic generation of potentially life-long dynamic scenes with photo-realistic appearance.
A novel element of this paper is that scenes are described in a parametric way, thus allowing the user to fully control the visual complexity of the input stream the agent perceives.
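As a toy of what a parametric scene description can look like, the sketch below bundles the knobs that would determine a generated stream's visual complexity and samples a simple curriculum over them. The specific fields and sampling logic are invented for illustration; the paper generates full photo-realistic 3D environments rather than this flat record.

```python
# Hypothetical parametric scene spec: a handful of knobs fully determine
# the difficulty of the generated visual stream.
import random
from dataclasses import dataclass

@dataclass
class SceneParams:
    n_objects: int = 5          # objects populating the scene
    n_classes: int = 3          # distinct object categories
    lighting: float = 1.0       # global illumination multiplier
    object_speed: float = 0.5   # motion speed (drives scene dynamics)
    texture_detail: int = 2     # 0 = flat colors ... 3 = fine textures

def sample_curriculum(n_scenes, seed=0):
    """Generate progressively harder scene configurations."""
    rng = random.Random(seed)
    scenes = []
    for i in range(n_scenes):
        difficulty = i / max(1, n_scenes - 1)
        scenes.append(SceneParams(
            n_objects=2 + int(8 * difficulty),
            n_classes=2 + int(4 * difficulty),
            lighting=rng.uniform(1.0 - 0.5 * difficulty, 1.0),
            object_speed=0.2 + difficulty,
            texture_detail=int(3 * difficulty),
        ))
    return scenes

for params in sample_curriculum(3):
    print(params)
```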
arXiv Detail & Related papers (2021-09-16T10:37:21Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
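In the spirit of predictive coding, on which neural generative coding builds, here is a compact sketch of learning without a global backward pass: the generative weights are updated from a purely local prediction error via an outer product. The single linear layer, inference schedule, and learning rates are illustrative assumptions, not the paper's architecture.

```python
# Backprop-free learning sketch: settle a latent state by descending the
# local prediction error, then update weights Hebbian-style from that same
# error. All sizes and rates are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))     # generative weights: latent -> data

def local_update(x, n_steps=20, lr_z=0.1, lr_w=0.01):
    global W
    z = np.zeros(4)                        # latent state for this input
    for _ in range(n_steps):
        e = x - W @ z                      # local prediction error
        z += lr_z * (W.T @ e)              # inference: settle the latent
    W += lr_w * np.outer(e, z)             # local (outer-product) weight update
    return float(e @ e)

data = rng.normal(size=(200, 8))
for _ in range(10):
    err = np.mean([local_update(x) for x in data])
print(f"mean squared prediction error after training: {err:.3f}")
```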
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Data-Driven Reinforcement Learning for Virtual Character Animation Control [0.0]
Designing reward functions for social behaviours is challenging because such behaviours involve little physical interaction with the world.
We propose RLAnimate, a novel data-driven deep RL approach to address this challenge.
We formalise a mathematical structure for training agents by refining the conceptual roles of elements such as agents, environments, states and actions.
An agent trained using our approach learns versatile animation dynamics to portray multiple behaviours, using an iterative RL training process.
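The abstract stays high-level, but one generic way to let a single agent portray multiple behaviours is to condition its policy on a behaviour label, as sketched below. This conditioning scheme is an assumption for illustration only, not RLAnimate's actual formulation.

```python
# Hypothetical behaviour-conditioned animation policy: the behaviour label
# is one-hot encoded and concatenated to the observation.
import torch
import torch.nn as nn

N_BEHAVIOURS, OBS_DIM, ACT_DIM = 4, 32, 12   # e.g. wave, nod, point, idle

policy = nn.Sequential(
    nn.Linear(OBS_DIM + N_BEHAVIOURS, 128), nn.ReLU(),
    nn.Linear(128, ACT_DIM),                 # pose/animation targets for the rig
)

obs = torch.randn(1, OBS_DIM)
behaviour = nn.functional.one_hot(torch.tensor([2]), N_BEHAVIOURS).float()
action = policy(torch.cat([obs, behaviour], dim=-1))
```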
arXiv Detail & Related papers (2021-02-24T21:12:09Z)
- PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning [102.36450942613091]
We propose an inverse reinforcement learning algorithm called inverse temporal difference learning (ITD).
We show how to seamlessly integrate ITD with learning from online environment interactions, arriving at a novel algorithm for reinforcement learning with demonstrations, called ΨΦ-learning.
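Successor features, which this line of work builds on, factor the value function as Q(s, a) = ψ(s, a) · w, where ψ summarizes expected discounted future state features and w encodes the reward; a new task (new w) can then be evaluated without relearning ψ. The tabular shapes below are illustrative assumptions.

```python
# Successor-feature value decomposition: Q(s, a) = psi(s, a) . w.
import numpy as np

n_states, n_actions, feat_dim = 6, 2, 3
psi = np.random.default_rng(0).normal(size=(n_states, n_actions, feat_dim))
w = np.array([1.0, -0.5, 0.2])               # reward weights: r = phi(s) . w

def q_values(state):
    """Q(s, a) for every action via the successor-feature decomposition."""
    return psi[state] @ w                    # shape (n_actions,)

print(q_values(0))
w_new = np.array([0.0, 1.0, 0.0])            # a new task: reuse psi, swap w
print(psi[0] @ w_new)
```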
arXiv Detail & Related papers (2021-02-24T21:12:09Z)
- Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
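NDPs parameterize trajectories with a second-order dynamical system (a dynamic movement primitive) whose parameters the network predicts, rather than emitting raw actions. Below is a sketch that integrates such a system given a goal and forcing weights; the gains, basis functions, and step size are illustrative assumptions, and a real NDP would predict these parameters from observations.

```python
# Roll out a 1D dynamic movement primitive: y'' = a*(b*(g - y) - y') + f(t).
# The parameters (g, forcing_weights) stand in for a policy network's output.
import numpy as np

def rollout_dmp(y0, g, forcing_weights, alpha=25.0, beta=6.25, dt=0.01, T=300):
    y, yd = y0, 0.0
    centers = np.linspace(0, 1, len(forcing_weights))
    traj = []
    for step in range(T):
        t = step / T
        basis = np.exp(-50.0 * (t - centers) ** 2)   # radial basis functions
        f = basis @ forcing_weights * (g - y0)        # shaped forcing term
        ydd = alpha * (beta * (g - y) - yd) + f
        yd += ydd * dt
        y += yd * dt
        traj.append(y)
    return np.array(traj)

traj = rollout_dmp(y0=0.0, g=1.0, forcing_weights=np.array([0.5, -0.3, 0.8, 0.0]))
print(traj[-1])   # the trajectory settles near the goal g
```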
arXiv Detail & Related papers (2020-12-04T18:59:32Z)
- Deep Reinforcement Learning for High Level Character Control [0.9645196221785691]
We propose the use of traditional animations, behavior and reinforcement learning in the creation of intelligent characters for computational media.
The use case presented is a dog character with a high-level controller in a 3D environment which is built around the desired behaviors to be learned, such as fetching an item.
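As a sketch of this high-level control pattern, RL can choose among authored behaviours (clips or controllers) rather than low-level joint commands, so motion quality comes from the animation layer while the policy only sequences behaviours. The task stages, behaviours, and tabular Q-learning below are illustrative assumptions for a fetch-style task.

```python
# Toy high-level controller: Q-learning over authored behaviours for a
# fetch task. Stages: 0 item visible, 1 at item, 2 holding, 3 delivered.
import numpy as np

BEHAVIOURS = ["walk_to_item", "pick_up", "return_to_owner", "idle"]
N_STATES = 4
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(BEHAVIOURS)))

def step(state, action):
    """Toy dynamics: the right behaviour advances the task one stage."""
    if action == state and state < N_STATES - 1:
        return state + 1, 1.0
    return state, -0.1

for episode in range(500):
    s = 0
    for _ in range(20):
        a = rng.integers(len(BEHAVIOURS)) if rng.random() < 0.1 else int(Q[s].argmax())
        s2, r = step(s, a)
        Q[s, a] += 0.1 * (r + 0.9 * Q[s2].max() - Q[s, a])
        s = s2

print([BEHAVIOURS[int(a)] for a in Q.argmax(1)])   # behaviour chosen per stage
```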
arXiv Detail & Related papers (2020-05-20T23:32:19Z)