Learning Proxemic Behavior Using Reinforcement Learning with Cognitive Agents
- URL: http://arxiv.org/abs/2108.03730v1
- Date: Sun, 8 Aug 2021 20:45:34 GMT
- Title: Learning Proxemic Behavior Using Reinforcement Learning with Cognitive Agents
- Authors: Cristian Millán-Arias, Bruno Fernandes, Francisco Cruz
- Abstract summary: Proxemics is a branch of non-verbal communication concerned with studying the spatial behavior of people and animals.
We study how agents behave in environments based on proxemic behavior.
- Score: 1.0635883951034306
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Proxemics is a branch of non-verbal communication concerned with studying the spatial behavior of people and animals. This behavior is an essential part of the communication process because it delimits the acceptable distance for interacting with another being. With increasing research on human-agent interaction, new alternatives are needed that allow optimal communication without making agents feel uncomfortable. Several works consider proxemic behavior with cognitive agents, implementing human-robot interaction techniques and machine learning. However, these environments assume a fixed personal space that the agent knows in advance. In this work, we aim to study how agents behave in environments based on proxemic behavior, and we propose a modified gridworld for that purpose. This environment considers an issuer with proxemic behavior that provides a disagreement signal to the agent. Our results show that the learning agent can identify the proxemic space when the issuer gives feedback about the agent's performance.
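The environment described in the abstract is compact enough to sketch. Below is a minimal, hypothetical Python sketch of such a setup: a tabular Q-learning agent in a gridworld where entering the issuer's hidden proxemic space triggers a disagreement signal, folded here directly into the reward. The grid size, issuer position, radius, reward values, and hyperparameters are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical proxemic gridworld: the issuer's personal-space radius is
# unknown to the agent and is only revealed through disagreement signals.
GRID = 7                                       # assumed grid size
ISSUER = (3, 3)                                # assumed issuer cell
PROXEMIC_RADIUS = 1.5                          # hidden personal-space radius
GOAL = (6, 6)                                  # assumed task goal
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(state, action):
    """Move the agent one cell; return (next_state, reward, done)."""
    r, c = state
    dr, dc = ACTIONS[action]
    nr = min(max(r + dr, 0), GRID - 1)
    nc = min(max(c + dc, 0), GRID - 1)
    if (nr, nc) == GOAL:
        return (nr, nc), 1.0, True             # task completed
    if np.hypot(nr - ISSUER[0], nc - ISSUER[1]) <= PROXEMIC_RADIUS:
        return (nr, nc), -1.0, False           # issuer's disagreement signal
    return (nr, nc), -0.01, False              # small step cost

# Tabular Q-learning with an epsilon-greedy policy.
Q = np.zeros((GRID, GRID, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(5000):
    state = (0, 0)
    for _ in range(200):                       # cap episode length
        a = (np.random.randint(len(ACTIONS)) if np.random.rand() < eps
             else int(np.argmax(Q[state])))
        nxt, reward, done = step(state, a)
        Q[state][a] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[state][a])
        state = nxt
        if done:
            break
```

In this sketch the disagreement signal is delivered through the environment reward; the paper describes it as feedback given by the issuer, which a faithful implementation might carry on a separate channel rather than merging it with the task reward.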
Related papers
- Sim-to-Real Causal Transfer: A Metric Learning Approach to Causally-Aware Interaction Representations [62.48505112245388]
We take an in-depth look at the causal awareness of modern representations of agent interactions.
We show that recent representations are already partially resilient to perturbations of non-causal agents.
We propose a metric learning approach that regularizes latent representations with causal annotations.
arXiv Detail & Related papers (2023-12-07T18:57:03Z)
- AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems [112.76941157194544]
We propose AgentCF for simulating user-item interactions in recommender systems through agent-based collaborative filtering.
We creatively consider not only users but also items as agents, and develop a collaborative learning approach that optimizes both kinds of agents together.
Overall, the optimized agents exhibit diverse interaction behaviors within our framework, including user-item, user-user, item-item, and collective interactions.
arXiv Detail & Related papers (2023-10-13T16:37:14Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easy to integrate into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- The Frost Hollow Experiments: Pavlovian Signalling as a Path to Coordination and Communication Between Agents [7.980685978549764]
This paper contributes a multi-faceted study into what we term Pavlovian signalling.
We establish Pavlovian signalling as a natural bridge between fixed signalling paradigms and fully adaptive communication learning.
Our results point to an actionable, constructivist path towards continual communication learning between reinforcement learning agents.
arXiv Detail & Related papers (2022-03-17T17:49:45Z)
- Pavlovian Signalling with General Value Functions in Agent-Agent Temporal Decision Making [6.704848594973921]
We study Pavlovian signalling -- a process by which learned, temporally extended predictions made by one agent inform decision-making by another agent.
As a main contribution, we establish Pavlovian signalling as a natural bridge between fixed signalling paradigms and fully adaptive communication learning between two agents.
arXiv Detail & Related papers (2022-01-11T00:14:04Z)
- Assessing Human Interaction in Virtual Reality With Continually Learning Prediction Agents Based on Reinforcement Learning Algorithms: A Pilot Study [6.076137037890219]
We investigate how the interaction between a human and a continually learning prediction agent develops as the agent develops competency.
We develop a virtual reality environment and a time-based prediction task wherein learned predictions from a reinforcement learning (RL) algorithm augment human predictions.
Our findings suggest that human trust of the system may be influenced by early interactions with the agent, and that trust in turn affects strategic behaviour.
arXiv Detail & Related papers (2021-12-14T22:46:44Z)
- Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z)
- Imitating Interactive Intelligence [24.95842455898523]
We study how to design artificial agents that can interact naturally with humans in the simplified setting of a virtual environment.
To build agents that can robustly interact with humans, we would ideally train them while they interact with humans.
We use ideas from inverse reinforcement learning to reduce the disparities between human-human and agent-agent interactive behaviour.
arXiv Detail & Related papers (2020-12-10T13:55:47Z)
- Investigating Human Response, Behaviour, and Preference in Joint-Task Interaction [3.774610219328564]
We designed an experiment to examine human behaviour and response as humans interact with Explainable Planning (XAIP) agents.
We also present the results from an empirical analysis where we examined the behaviour of the two agents for simulated users.
arXiv Detail & Related papers (2020-11-27T22:16:59Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- SPA: Verbal Interactions between Agents and Avatars in Shared Virtual Environments using Propositional Planning [61.335252950832256]
Sense-Plan-Ask, or SPA, generates plausible verbal interactions between virtual human-like agents and user avatars in shared virtual environments.
We find that our algorithm incurs only a small runtime cost and enables agents to complete their goals more effectively than agents without the ability to leverage natural-language communication.
arXiv Detail & Related papers (2020-02-08T23:15:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.