Responsible Emergent Multi-Agent Behavior
- URL: http://arxiv.org/abs/2311.01609v1
- Date: Thu, 2 Nov 2023 21:37:32 GMT
- Title: Responsible Emergent Multi-Agent Behavior
- Authors: Niko A. Grupen
- Abstract summary: The state of the art in Responsible AI has ignored one crucial point: human problems are multi-agent problems.
From driving in traffic to negotiating economic policy, human problem-solving involves interaction and the interplay of the actions and motives of multiple individuals.
This dissertation develops the study of responsible emergent multi-agent behavior.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Responsible AI has risen to the forefront of the AI research community. As
neural network-based learning algorithms continue to permeate real-world
applications, the field of Responsible AI has played a large role in ensuring
that such systems maintain a high level of human compatibility. Despite this
progress, the state of the art in Responsible AI has ignored one crucial point:
human problems are multi-agent problems. Predominant approaches largely
consider the performance of a single AI system in isolation, but human problems
are, by their very nature, multi-agent. From driving in traffic to negotiating
economic policy, human problem-solving involves interaction and the interplay
of the actions and motives of multiple individuals.
This dissertation develops the study of responsible emergent multi-agent
behavior, illustrating how researchers and practitioners can better understand
and shape multi-agent learning with respect to three pillars of Responsible AI:
interpretability, fairness, and robustness. First, I investigate multi-agent
interpretability, presenting novel techniques for understanding emergent
multi-agent behavior at multiple levels of granularity. With respect to
low-level interpretability, I examine the extent to which implicit
communication emerges as an aid to coordination in multi-agent populations. I
introduce a novel curriculum-driven method for learning high-performing
policies in difficult, sparse reward environments and show through a measure of
position-based social influence that multi-agent teams that learn sophisticated
coordination strategies exchange significantly more information through
implicit signals than less-coordinated agents. Then, at a high level, I study
concept-based interpretability in the context of multi-agent learning. I
propose a novel method for learning intrinsically interpretable, concept-based
policies and show that it enables...
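The abstract references a measure of position-based social influence without giving its form. As a rough illustration, influence carried by implicit signals can be estimated as the mutual information between one agent's position and a teammate's subsequent action. The sketch below is an assumption for illustration (histogram estimator, synthetic rollout data), not the dissertation's actual metric:

```python
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 8) -> float:
    """Estimate I(X; Y) in nats from paired samples via a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over x-bins, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal over y-bins, shape (1, bins)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Hypothetical rollout data: agent i's position at time t and agent j's
# (binary) action at time t+1, where j reacts to i's position.
rng = np.random.default_rng(0)
pos_i = rng.normal(size=10_000)
act_j = (pos_i + 0.5 * rng.normal(size=10_000) > 0).astype(float)

print(f"estimated positional influence of i on j: {mutual_information(pos_i, act_j):.3f} nats")
```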
Related papers
- Multi-agent cooperation through learning-aware policy gradients [53.63948041506278]
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning.
We present the first unbiased, higher-derivative-free policy gradient algorithm for learning-aware reinforcement learning.
We derive from the iterated prisoner's dilemma a novel explanation for how and when cooperation arises among self-interested learning-aware agents.
arXiv Detail & Related papers (2024-10-24T10:48:42Z)
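To make the failure mode the paper above targets concrete: naive, learning-unaware policy gradients drift toward mutual defection in the iterated prisoner's dilemma. A minimal sketch with standard textbook payoffs and a baseline independent-learner update, not the paper's algorithm:

```python
import numpy as np

# One-shot prisoner's dilemma payoffs for the row player:
# rows/cols are (cooperate, defect); T > R > P > S.
PAYOFF = np.array([[3.0, 0.0],
                   [5.0, 1.0]])

def expected_payoff(p: float, q: float) -> float:
    """Row player's expected payoff when it cooperates w.p. p and the opponent w.p. q."""
    probs = np.outer([p, 1 - p], [q, 1 - q])
    return float((probs * PAYOFF).sum())

def grad_p(p: float, q: float, eps: float = 1e-5) -> float:
    """Naive (learning-unaware) gradient: treats the opponent as fixed."""
    return (expected_payoff(p + eps, q) - expected_payoff(p - eps, q)) / (2 * eps)

p = q = 0.9  # both agents start mostly cooperative
lr = 0.05
for _ in range(200):
    p, q = np.clip(p + lr * grad_p(p, q), 0, 1), np.clip(q + lr * grad_p(q, p), 0, 1)

print(f"cooperation probabilities after naive learning: p={p:.2f}, q={q:.2f}")
# Both collapse toward defection (p, q -> 0): the failure mode that
# learning-aware gradients, which account for the opponent's own
# learning step, are designed to escape.
```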
- SocialGFs: Learning Social Gradient Fields for Multi-Agent Reinforcement Learning [58.84311336011451]
We propose a novel gradient-based state representation for multi-agent reinforcement learning.
We employ denoising score matching to learn the social gradient fields (SocialGFs) from offline samples.
In practice, we integrate SocialGFs into widely used multi-agent reinforcement learning algorithms, e.g., MAPPO.
arXiv Detail & Related papers (2024-05-03T04:12:19Z)
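The SocialGFs entry above names denoising score matching, which trains a network to predict the score of noise-perturbed data. A minimal PyTorch sketch on assumed 2-D toy data; the architecture, noise scale, and the MAPPO hookup are illustrative assumptions, not the paper's implementation:

```python
import torch
from torch import nn

# Illustrative stand-in for the score network; architecture is an assumption.
score_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)
sigma = 0.1

# Hypothetical offline samples of "good" social configurations (e.g. agent positions).
samples = torch.randn(4096, 2)

for _ in range(100):
    x = samples[torch.randint(len(samples), (256,))]
    noise = torch.randn_like(x)
    x_tilde = x + sigma * noise
    # Denoising score matching: the score of the Gaussian perturbation
    # kernel q(x_tilde | x) is -(x_tilde - x) / sigma^2 = -noise / sigma.
    target = -noise / sigma
    loss = ((score_net(x_tilde) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The learned field score_net(x) points toward higher-density configurations
# and could be fed to an MARL policy (e.g. MAPPO) as a state feature.
```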
- Mathematics of multi-agent learning systems at the interface of game theory and artificial intelligence [0.8049333067399385]
Evolutionary Game Theory and Artificial Intelligence are two fields that, at first glance, might seem distinct, but they have notable connections and intersections.
The former focuses on the evolution of behaviors (or strategies) in a population, where individuals interact with others and update their strategies based on imitation (or social learning).
The latter, meanwhile, is centered on machine learning algorithms and (deep) neural networks.
arXiv Detail & Related papers (2024-03-09T17:36:54Z)
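The imitation dynamics the survey above refers to are canonically modeled by the replicator equation, x_i' = x_i((Ax)_i - x^T A x). A minimal sketch; the anti-coordination (Hawk-Dove style) payoff matrix is chosen purely for illustration:

```python
import numpy as np

def replicator_step(x: np.ndarray, A: np.ndarray, dt: float = 0.01) -> np.ndarray:
    """One Euler step of the replicator dynamics x_i' = x_i * ((Ax)_i - x^T A x)."""
    fitness = A @ x
    avg = x @ fitness
    return x + dt * x * (fitness - avg)

# An anti-coordination payoff matrix (Hawk-Dove flavor), chosen for illustration:
# each strategy does better against the other one than against itself.
A = np.array([[0.0, 3.0],
              [1.0, 2.0]])
x = np.array([0.9, 0.1])  # initial strategy frequencies
for _ in range(2000):
    x = replicator_step(x, A)
print(f"equilibrium frequencies: {x.round(3)}")  # converges to the mixed equilibrium (0.5, 0.5)
```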
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose the Agent Foundation Model, a novel large action model for achieving embodied intelligent behavior.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Neural Amortized Inference for Nested Multi-agent Reasoning [54.39127942041582]
We propose a novel approach to bridge the gap between human-like inference capabilities and computational limitations.
We evaluate our method in two challenging multi-agent interaction domains.
arXiv Detail & Related papers (2023-08-21T22:40:36Z)
- Assessing Human Interaction in Virtual Reality With Continually Learning Prediction Agents Based on Reinforcement Learning Algorithms: A Pilot Study [6.076137037890219]
We investigate how the interaction between a human and a continually learning prediction agent evolves as the agent gains competency.
We develop a virtual reality environment and a time-based prediction task wherein learned predictions from a reinforcement learning (RL) algorithm augment human predictions.
Our findings suggest that human trust in the system may be influenced by early interactions with the agent, and that trust in turn affects strategic behaviour.
arXiv Detail & Related papers (2021-12-14T22:46:44Z)
- Adversarial Attacks in Cooperative AI [0.0]
Single-agent reinforcement learning algorithms in a multi-agent environment are inadequate for fostering cooperation.
Recent work in adversarial machine learning shows that models can be easily deceived into making incorrect decisions.
Cooperative AI might introduce new weaknesses not investigated in previous machine learning research.
arXiv Detail & Related papers (2021-11-29T07:34:12Z)
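The entry above notes that models are easily deceived; the classic single-step illustration is the fast gradient sign method (FGSM). A sketch against a toy stand-in for a cooperative agent's decision model; the network, observations, and epsilon are hypothetical:

```python
import torch
from torch import nn

def fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.05) -> torch.Tensor:
    """Fast gradient sign method: a one-step perturbation that increases the loss."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Toy victim model standing in for a cooperative agent's decision network.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
obs = torch.randn(8, 4)            # hypothetical observations
actions = model(obs).argmax(dim=1)  # the actions the agent would normally take
adv_obs = fgsm(model, obs, actions)
flipped = (model(adv_obs).argmax(dim=1) != actions).float().mean()
print(f"fraction of decisions flipped by a small perturbation: {flipped:.2f}")
```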
- Multiagent Deep Reinforcement Learning: Challenges and Directions Towards Human-Like Approaches [0.0]
We present the most common multiagent problem representations and their main challenges.
We identify five research areas that address one or more of these challenges.
We suggest that, for multiagent reinforcement learning to be successful, future research should address these challenges with an interdisciplinary approach.
arXiv Detail & Related papers (2021-06-29T19:53:15Z)