Domain-independent generation and classification of behavior traces
- URL: http://arxiv.org/abs/2011.02918v1
- Date: Tue, 3 Nov 2020 16:58:54 GMT
- Title: Domain-independent generation and classification of behavior traces
- Authors: Daniel Borrajo, Manuela Veloso
- Abstract summary: CABBOT is a learning technique that allows the agent to perform on-line classification of the type of planning agent whose behavior it is observing.
We present experiments in several (both financial and non-financial) domains with promising results.
- Score: 18.086782548507855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Financial institutions mostly deal with people. Therefore,
characterizing different kinds of human behavior can greatly help institutions
improve their relations with customers and with regulatory offices. In many
such interactions, humans have some internal goals and execute actions within
the financial system that lead them to achieve those goals. In this paper, we
tackle this problem as a behavior-trace classification task. An observer agent
tries to learn to characterize other agents by observing their behavior when
taking actions in a given environment. The other agents can be of several
types, and the goal of the observer is to identify the type of the observed
agent given a trace of observations. We present CABBOT, a learning technique
that allows the agent to perform on-line classification of the type of
planning agent whose behavior it is observing. In this work, the observer
agent has partial and noisy observability of the environment (state and
actions of the other agents). In order to evaluate the performance of the
learning technique, we have built a domain-independent goal-based simulator of
agents. We present experiments in several (both financial and non-financial)
domains with promising results.
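To make the classification setting concrete, here is a minimal Python sketch of on-line type inference over a noisy behavior trace. It is a generic Bayesian filter, not the paper's CABBOT technique, and all names (classify_online, noise_eps, the toy policies) are hypothetical.

```python
# Minimal sketch of on-line agent-type classification from a behavior trace.
# This is NOT the paper's CABBOT technique; it is a generic Bayesian filter
# over candidate agent types, shown only to make the setting concrete.
# All names (classify_online, noise_eps, the toy policies) are hypothetical.

def classify_online(trace, type_policies, noise_eps=0.1):
    """Return the posterior over agent types after each observation.

    trace         -- iterable of (state, observed_action) pairs
    type_policies -- dict: type -> callable(state, action) -> P(action | state, type)
    noise_eps     -- flat noise floor for robustness to noisy observations
    """
    types = list(type_policies)
    posterior = {t: 1.0 / len(types) for t in types}  # uniform prior
    history = []
    for state, action in trace:
        for t in types:
            # Mix the type's action model with a flat noise floor so a single
            # corrupted observation cannot zero out the true type.
            posterior[t] *= (1 - noise_eps) * type_policies[t](state, action) + noise_eps
        z = sum(posterior.values())
        posterior = {t: p / z for t, p in posterior.items()}
        history.append(dict(posterior))
    return history

# Two toy types: a "cautious" agent that mostly waits, a "greedy" one that mostly buys.
policies = {
    "cautious": lambda s, a: 0.8 if a == "wait" else 0.2,
    "greedy":   lambda s, a: 0.8 if a == "buy" else 0.2,
}
trace = [(None, "buy"), (None, "buy"), (None, "wait"), (None, "buy")]
for step, post in enumerate(classify_online(trace, policies), 1):
    print(step, {t: round(p, 3) for t, p in post.items()})
```

After a few observations the posterior concentrates on the type whose action model best explains the trace, which is the core idea behind classifying an agent from its behavior on-line.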
Related papers
- Inverse Attention Agent for Multi-Agent System [6.196239958087161]
A major challenge for Multi-Agent Systems is enabling agents to adapt dynamically to diverse environments in which opponents and teammates may continually change.
We introduce Inverse Attention Agents that adopt concepts from the Theory of Mind, implemented algorithmically using an attention mechanism and trained in an end-to-end manner.
We demonstrate that the inverse attention network successfully infers the attention of other agents, and that this information improves agent performance.
arXiv Detail & Related papers (2024-10-29T06:59:11Z)
- DCIR: Dynamic Consistency Intrinsic Reward for Multi-Agent Reinforcement Learning [84.22561239481901]
We propose a new approach that enables agents to learn whether their behaviors should be consistent with those of other agents.
We evaluate DCIR in multiple environments including Multi-agent Particle, Google Research Football and StarCraft II Micromanagement.
arXiv Detail & Related papers (2023-12-10T06:03:57Z)
- Behavioral Analysis of Vision-and-Language Navigation Agents [21.31684388423088]
Vision-and-Language Navigation (VLN) agents must be able to ground instructions to actions based on surroundings.
We develop a methodology to study agent behavior on a skill-specific basis.
arXiv Detail & Related papers (2023-07-20T11:42:24Z)
- Emergent Behaviors in Multi-Agent Target Acquisition [0.0]
We simulate a Multi-Agent System (MAS) using Reinforcement Learning (RL) in a pursuit-evasion game.
We create different adversarial scenarios by replacing RL-trained pursuers' policies with two distinct (non-RL) analytical strategies.
The novelty of our approach lies in the creation of an influential feature set that reveals the underlying data regularities.
arXiv Detail & Related papers (2022-12-15T15:20:58Z)
- Diversifying Agent's Behaviors in Interactive Decision Models [11.125175635860169]
Modelling other agents' behaviors plays an important role in decision models for interactions among multiple agents.
In this article, we investigate diversifying the behaviors of other agents in the subject agent's decision model prior to their interactions.
arXiv Detail & Related papers (2022-03-06T23:05:00Z)
- Explaining Reinforcement Learning Policies through Counterfactual Trajectories [147.7246109100945]
A human developer must validate that an RL agent will perform well at test-time.
Our method conveys how the agent performs under distribution shifts by showing the agent's behavior across a wider trajectory distribution.
In a user study, we demonstrate that our method enables users to score better than baseline methods on one of two agent validation tasks.
arXiv Detail & Related papers (2022-01-29T00:52:37Z)
- Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
arXiv Detail & Related papers (2021-07-10T03:49:41Z)
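As a companion to the copula paper above, the following hedged sketch shows the decomposition it describes: per-agent marginals for local behavior plus a copula that carries all cross-agent coordination. It uses a Gaussian copula with fixed toy parameters; the paper learns these from demonstrations, which is not reproduced here.

```python
# Hedged sketch of the marginals-plus-copula decomposition described above.
# Fixed Gaussian-copula parameters for illustration only; the paper learns
# both the marginals and the copula from demonstrations.

import numpy as np
from scipy.stats import norm

def sample_joint_actions(marginal_ppfs, corr, n_samples, seed=0):
    """Sample coordinated continuous actions for several agents.

    marginal_ppfs -- list of inverse CDFs, one per agent (local behavior)
    corr          -- correlation matrix of the Gaussian copula (coordination)
    """
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n_samples, len(marginal_ppfs))) @ L.T
    u = norm.cdf(z)                      # correlated uniforms: the copula sample
    cols = [ppf(u[:, i]) for i, ppf in enumerate(marginal_ppfs)]
    return np.column_stack(cols)         # each column: one agent's actions

# Two agents with different marginal action distributions but strongly
# coordinated behavior (copula correlation 0.9).
marginals = [lambda u: norm.ppf(u, loc=0.0, scale=1.0),   # agent 1
             lambda u: norm.ppf(u, loc=5.0, scale=2.0)]   # agent 2
corr = np.array([[1.0, 0.9], [0.9, 1.0]])
actions = sample_joint_actions(marginals, corr, n_samples=1000)
print("empirical correlation:", np.corrcoef(actions.T)[0, 1].round(2))
```

The point of the decomposition is that each agent's marginal can be changed independently while the copula, and hence the coordination pattern, is preserved.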
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
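A minimal sketch of the reward-giving flow described in the entry above: each agent's effective reward is its environment reward plus incentives received from others, minus the cost of incentives it gives. In the paper the incentive function is learned; here it is a fixed stub, and all names (give_incentive, cost_coeff) are hypothetical.

```python
# Hedged sketch of agents giving rewards to other agents. The learned
# incentive function from the paper is replaced by a fixed stub to show
# only how incentives reshape each agent's effective reward.

def give_incentive(giver_obs, recipient_action):
    # Hypothetical stand-in for a learned incentive function: reward
    # recipients who cooperate, from the giver's point of view.
    return 0.5 if recipient_action == "cooperate" else 0.0

def effective_rewards(env_rewards, actions, cost_coeff=1.0):
    """env_rewards, actions: dicts keyed by agent id."""
    agents = list(env_rewards)
    rewards = dict(env_rewards)
    for giver in agents:
        for recipient in agents:
            if giver == recipient:
                continue
            inc = give_incentive(None, actions[recipient])
            rewards[recipient] += inc               # recipient gains the incentive
            rewards[giver] -= cost_coeff * inc      # giver pays for it
    return rewards

print(effective_rewards({"a": 1.0, "b": 0.0},
                        {"a": "defect", "b": "cooperate"}))
```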
- Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking the question: "What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?"
arXiv Detail & Related papers (2020-06-07T18:28:41Z)
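The entry above asks for an agent's expected utility under randomly selected sub-groups of its observed entities; the sketch below estimates that quantity by Monte Carlo over random masks, with a toy utility function standing in for the learned one.

```python
# Hedged sketch of the randomized entity-wise question above: estimate an
# agent's expected utility while masking out a random sub-group of the
# entities it observes. The utility function here is a toy stand-in.

import random

def utility(agent, visible_entities):
    # Hypothetical utility: value grows with the entities currently in view.
    return sum(e["value"] for e in visible_entities)

def randomized_factored_utility(agent, entities, n_samples=1000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # Keep each observed entity independently with probability 1/2,
        # always keeping at least one so the sub-group is non-empty.
        subset = [e for e in entities if rng.random() < 0.5] or [rng.choice(entities)]
        total += utility(agent, subset)
    return total / n_samples

entities = [{"value": 1.0}, {"value": 2.0}, {"value": 4.0}]
print(randomized_factored_utility("agent_0", entities))
```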
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.