Go-Blend behavior and affect
- URL: http://arxiv.org/abs/2109.13388v1
- Date: Fri, 24 Sep 2021 17:04:30 GMT
- Title: Go-Blend behavior and affect
- Authors: Matthew Barthet, Antonios Liapis and Georgios N. Yannakakis
- Abstract summary: This paper proposes a paradigm shift for affective computing by viewing the affect modeling task as a reinforcement learning process.
In this initial study, we test our framework in an arcade game by training Go-Explore agents to both play optimally and attempt to mimic human demonstrations of arousal.
- Score: 2.323282558557423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a paradigm shift for affective computing by viewing the affect modeling task as a reinforcement learning process. According to our proposed framework, the context (environment) and the actions of an agent define the common representation that interweaves behavior and affect. To realise this framework we build on recent advances in reinforcement learning and use a modified version of the Go-Explore algorithm, which has showcased supreme performance in hard exploration tasks. In this initial study, we test our framework in an arcade game by training Go-Explore agents to both play optimally and attempt to mimic human demonstrations of arousal. We vary the degree of importance between optimal play and arousal imitation and create agents that can effectively display a palette of affect and behavioral patterns. Our Go-Explore implementation not only introduces a new paradigm for affect modeling; it also empowers believable AI-based game testing by providing agents that can blend and express a multitude of behavioral and affective patterns.
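The mechanism the abstract describes, trading off optimal play against arousal imitation, amounts to a weighted blend of two reward terms. Below is a minimal sketch under that reading; the blend weight `lam` and the per-step arousal-similarity term are illustrative names of ours, not the paper's exact formulation.

```python
import numpy as np

def blended_reward(score_delta, agent_arousal, human_arousal, lam=0.5):
    """Blend behavior and affect objectives: lam=1 rewards only game
    score, lam=0 rewards only matching the human arousal trace."""
    arousal_similarity = 1.0 - abs(agent_arousal - human_arousal)
    return lam * score_delta + (1.0 - lam) * arousal_similarity

# Sweeping lam yields the "palette" of behavior/affect trade-offs
# the abstract describes.
palette = [blended_reward(1.0, 0.3, 0.7, lam) for lam in np.linspace(0.0, 1.0, 5)]
```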
Related papers
- External Model Motivated Agents: Reinforcement Learning for Enhanced Environment Sampling [3.536024441537599]
Unlike reinforcement learning (RL) agents, humans remain capable multitaskers in changing environments.
We propose an agent influence framework for RL agents to improve the adaptation efficiency of external models in changing environments.
Our results show that our method outperforms the baselines in terms of external model adaptation on metrics that measure both efficiency and performance.
arXiv Detail & Related papers (2024-06-28T23:31:22Z)
- Learning Action-Effect Dynamics for Hypothetical Vision-Language Reasoning Task [50.72283841720014]
We propose a novel learning strategy that can improve reasoning about the effects of actions.
We demonstrate the effectiveness of our proposed approach and discuss its advantages over previous baselines in terms of performance, data efficiency, and generalization capability.
arXiv Detail & Related papers (2022-12-07T05:41:58Z)
- Task Formulation Matters When Learning Continually: A Case Study in Visual Question Answering [58.82325933356066]
Continual learning aims to train a model incrementally on a sequence of tasks without forgetting previous knowledge.
We present a detailed study of how different settings affect performance for Visual Question Answering.
arXiv Detail & Related papers (2022-09-30T19:12:58Z)
- Play with Emotion: Affect-Driven Reinforcement Learning [3.611888922173257]
This paper introduces a paradigm shift by viewing the task of affect modeling as a reinforcement learning process.
We test our hypotheses in a racing game by training Go-Blend agents to model human demonstrations of arousal and behavior.
arXiv Detail & Related papers (2022-08-26T12:28:24Z)
- Homomorphism Autoencoder -- Learning Group Structured Representations from Observed Transitions [51.71245032890532]
We propose methods enabling an agent acting upon the world to learn internal representations of sensory information consistent with actions that modify it.
In contrast to existing work, our approach does not require prior knowledge of the group and does not restrict the set of actions the agent can perform (a toy sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-07-25T11:22:48Z)
- Modelling Behaviour Change using Cognitive Agent Simulations [0.0]
This paper presents work-in-progress research to apply selected behaviour change theories to simulated agents.
The research focuses on complex agent architectures required for self-determined goal achievement in adverse circumstances.
arXiv Detail & Related papers (2021-10-16T19:19:08Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference [71.11416263370823]
We propose a generative inverse reinforcement learning approach for user behavioral preference modelling.
Our model automatically learns rewards from users' actions based on a discriminative actor-critic network and a Wasserstein GAN (a minimal critic-as-reward sketch appears after this list).
arXiv Detail & Related papers (2021-05-03T13:14:25Z)
- Learning to Represent Action Values as a Hypergraph on the Action Vertices [17.811355496708728]
Action-value estimation is a critical component of reinforcement learning (RL) methods.
We conjecture that leveraging the structure of multi-dimensional action spaces is a key ingredient for learning good representations of action.
We show the effectiveness of our approach on a myriad of domains: illustrative prediction problems under minimal confounding effects, Atari 2600 games, and discretised physical control benchmarks.
arXiv Detail & Related papers (2020-10-28T00:19:13Z)
- Behavior Priors for Efficient Reinforcement Learning [97.81587970962232]
We consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors.
We discuss how such latent variable formulations connect to related work on hierarchical reinforcement learning (HRL) and mutual information and curiosity based objectives.
We demonstrate the effectiveness of our framework by applying it to a range of simulated continuous control domains (a KL-regularized objective sketch appears after this list).
arXiv Detail & Related papers (2020-10-27T13:17:18Z)
- Learning intuitive physics and one-shot imitation using state-action-prediction self-organizing maps [0.0]
Humans learn by exploration and imitation, build causal models of the world, and use both to flexibly solve new tasks.
We suggest a simple but effective unsupervised model which develops such characteristics.
We demonstrate its performance on a set of several related, but different one-shot imitation tasks, which the agent flexibly solves in an active inference style.
arXiv Detail & Related papers (2020-07-03T12:29:11Z)
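Illustrative code sketches
For the Homomorphism Autoencoder entry, a toy sketch of the core idea: an encoder whose latent code is transformed by a learned operator per action, trained so that the encoding of the next observation stays close to the operator applied to the current encoding. The discrete-action simplification and all names here are our assumptions; the paper itself does not restrict the action set.

```python
import torch
import torch.nn as nn

class HomomorphismAE(nn.Module):
    """Toy variant: each discrete action a gets a learned latent operator
    rho[a], trained so that encode(next_obs) ~ rho[a] @ encode(obs)."""
    def __init__(self, obs_dim: int, latent_dim: int, n_actions: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.rho = nn.Parameter(0.1 * torch.randn(n_actions, latent_dim, latent_dim))

    def transition_loss(self, obs, actions, next_obs):
        z, z_next = self.encoder(obs), self.encoder(next_obs)
        # Apply each sample's action operator to its latent code.
        z_pred = torch.einsum('bij,bj->bi', self.rho[actions], z)
        return ((z_pred - z_next) ** 2).mean()
```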
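For the Generative Adversarial Reward Learning entry, a minimal sketch of a Wasserstein critic whose score doubles as a learned reward over (state, action) pairs. The architecture and names are assumptions rather than the paper's implementation, and the gradient penalty a full Wasserstein GAN would use is omitted for brevity.

```python
import torch
import torch.nn as nn

class RewardCritic(nn.Module):
    """Scores (state, action) pairs; higher = closer to demonstrated behavior."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)

def critic_loss(critic, demo_s, demo_a, policy_s, policy_a):
    # Wasserstein objective: raise scores on demonstrations, lower them
    # on the current policy's rollouts.
    return critic(policy_s, policy_a).mean() - critic(demo_s, demo_a).mean()

def learned_reward(critic, state, action):
    # The trained critic's score is then used as the actor's reward signal.
    with torch.no_grad():
        return critic(state, action)
```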
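For the Behavior Priors entry, prior-based formulations of this kind typically enter the RL objective as a KL penalty that keeps the policy close to a reusable prior policy. A sketch under that assumption; `alpha` and the Gaussian distributions are illustrative choices, not the paper's setup.

```python
import torch
from torch.distributions import Normal, kl_divergence

def kl_shaped_reward(env_reward, policy_dist, prior_dist, alpha=0.1):
    """Task reward minus a KL penalty toward the behavior prior pi_0;
    alpha trades task return against staying close to the prior."""
    kl = kl_divergence(policy_dist, prior_dist).sum(-1)  # total over action dims
    return env_reward - alpha * kl

# Illustrative one-step use with diagonal Gaussian action distributions.
policy = Normal(torch.zeros(2), torch.ones(2))
prior = Normal(torch.zeros(2), 2.0 * torch.ones(2))
shaped = kl_shaped_reward(torch.tensor(1.0), policy, prior)
```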
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.