Deep Reinforcement Learning for High Level Character Control
- URL: http://arxiv.org/abs/2005.10391v1
- Date: Wed, 20 May 2020 23:32:19 GMT
- Title: Deep Reinforcement Learning for High Level Character Control
- Authors: Caio Souza and Luiz Velho
- Abstract summary: We propose the use of traditional animations, heuristic behavior and reinforcement learning in the creation of intelligent characters for computational media.
The use case presented is a dog character with a high-level controller in a 3D environment built around the desired behaviors to be learned, such as fetching an item.
- Score: 0.9645196221785691
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose the use of traditional animations, heuristic behavior and reinforcement learning in the creation of intelligent characters for computational media. The traditional animation and heuristics give artistic control over the behavior, while the reinforcement learning adds generalization. The use case presented is a dog character with a high-level controller in a 3D environment built around the desired behaviors to be learned, such as fetching an item. As the development of the environment is key to learning, we further analyze how to build such learning environments, the effects of environment and agent modeling choices, training procedures, and the generalization of the learned behavior. This analysis builds insight into the aforementioned factors and may serve as a guide for the development of environments in general.
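The abstract emphasizes that the environment is built around the high-level behaviors to be learned (such as fetching an item) and that environment and agent modeling choices shape what the policy can learn. As a rough, hypothetical illustration of that idea, not the authors' actual setup (which uses a dog character in a full 3D scene), the sketch below shows a toy fetch task with a Gym-style reset/step interface and a discrete high-level action space; the class name, action labels, observation layout, and reward values are all assumptions made for this example.

```python
# Minimal, hypothetical sketch of a "fetch" task exposed through high-level actions.
# Not the paper's environment: positions are 1-D and rewards are placeholder values.
import random

ACTIONS = ["go_to_item", "pick_up", "go_to_owner", "drop"]

class FetchEnv:
    """Toy world on a line segment: an agent, an item to fetch, and an owner at 0."""

    def __init__(self, length=10.0, max_steps=200):
        self.length = length
        self.max_steps = max_steps
        self.reset()

    def reset(self):
        self.agent = random.uniform(0.0, self.length)
        self.item = random.uniform(0.0, self.length)
        self.owner = 0.0
        self.carrying = False
        self.steps = 0
        return self._obs()

    def _obs(self):
        # What the agent observes (relative positions plus a carrying flag) is one
        # of the "agent modeling choices" the paper analyzes.
        return (self.item - self.agent, self.owner - self.agent, float(self.carrying))

    def step(self, action):
        name = ACTIONS[action]
        reward, done = -0.01, False                    # small per-step time penalty
        if name == "go_to_item":
            self.agent += 0.5 if self.item > self.agent else -0.5
        elif name == "go_to_owner":
            self.agent += 0.5 if self.owner > self.agent else -0.5
        elif name == "pick_up" and abs(self.item - self.agent) < 0.5:
            self.carrying = True
        elif name == "drop" and self.carrying and abs(self.owner - self.agent) < 0.5:
            reward, done = 1.0, True                   # item delivered to the owner
        if self.carrying:
            self.item = self.agent                     # carried item follows the agent
        self.steps += 1
        return self._obs(), reward, done or self.steps >= self.max_steps, {}
```

In this framing, hand-authored animation and heuristics would implement the low-level motion behind each high-level action, while the reinforcement learning policy only chooses among the four actions, mirroring the split between artistic control and learned generalization described in the abstract.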
Related papers
- What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857]
  We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
  We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
  arXiv Detail & Related papers (2023-11-03T18:09:08Z)
- Human-oriented Representation Learning for Robotic Manipulation [64.59499047836637]
  Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with the environments in manipulation tasks.
  We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
  Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy learning.
  arXiv Detail & Related papers (2023-10-04T17:59:38Z)
- Learning of Generalizable and Interpretable Knowledge in Grid-Based Reinforcement Learning Environments [5.217870815854702]
  We propose using program synthesis to imitate reinforcement learning policies.
  We adapt the state-of-the-art program synthesis system DreamCoder for learning concepts in grid-based environments.
  arXiv Detail & Related papers (2023-09-07T11:46:57Z)
- Intrinsically Motivated Learning of Causal World Models [0.0]
  A promising direction is to build world models capturing the true physical mechanisms hidden behind the sensorimotor interaction with the environment.
  Inferring the causal structure of the environment could benefit from well-chosen actions as a means to collect relevant interventional data.
  arXiv Detail & Related papers (2022-08-09T16:48:28Z)
- Developing hierarchical anticipations via neural network-based event segmentation [14.059479351946386]
  We model the development of hierarchical predictions via autonomously learned latent event codes.
  We present a hierarchical recurrent neural network architecture whose inductive learning biases foster the development of sparsely changing latent states.
  A higher-level network learns to predict the situations in which the latent states tend to change.
  arXiv Detail & Related papers (2022-06-04T18:54:31Z)
- A Survey on Reinforcement Learning Methods in Character Animation [22.3342752080749]
  Reinforcement Learning is an area of Machine Learning focused on how agents can be trained to make sequential decisions.
  This paper surveys modern Deep Reinforcement Learning methods and discusses their possible applications in Character Animation.
  arXiv Detail & Related papers (2022-03-07T23:39:00Z)
- Evaluating Continual Learning Algorithms by Generating 3D Virtual Environments [66.83839051693695]
  Continual learning refers to the ability of humans and animals to incrementally learn over time in a given environment.
  We propose to leverage recent advances in 3D virtual environments to approach the automatic generation of potentially life-long dynamic scenes with photo-realistic appearance.
  A novel element of this paper is that scenes are described in a parametric way, allowing the user to fully control the visual complexity of the input stream the agent perceives.
  arXiv Detail & Related papers (2021-09-16T10:37:21Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
  We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
  We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
  The robust performance of our agent offers promising evidence that a backprop-free approach to neural inference and learning can drive goal-directed behavior.
  arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- 3D Neural Scene Representations for Visuomotor Control [78.79583457239836]
  We learn models for dynamic 3D scenes purely from 2D visual observations.
  A dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks.
  arXiv Detail & Related papers (2021-07-08T17:49:37Z)
- Learning to Simulate Dynamic Environments with GameGAN [109.25308647431952]
  In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
  We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training.
  arXiv Detail & Related papers (2020-05-25T14:10:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.