Multi-Agent, Human-Agent and Beyond: A Survey on Cooperation in Social Dilemmas
- URL: http://arxiv.org/abs/2402.17270v2
- Date: Tue, 30 Jul 2024 12:21:17 GMT
- Title: Multi-Agent, Human-Agent and Beyond: A Survey on Cooperation in Social Dilemmas
- Authors: Chunjiang Mu, Hao Guo, Yang Chen, Chen Shen, Shuyue Hu, Zhen Wang
- Abstract summary: The study of cooperation within social dilemmas has long been a fundamental topic across various disciplines.
Recent advancements in Artificial Intelligence have significantly reshaped this field.
This survey examines three key areas at the intersection of AI and cooperation in social dilemmas.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The study of cooperation within social dilemmas has long been a fundamental topic across various disciplines, including computer science and social science. Recent advancements in Artificial Intelligence (AI) have significantly reshaped this field, offering fresh insights into understanding and enhancing cooperation. This survey examines three key areas at the intersection of AI and cooperation in social dilemmas. First, focusing on multi-agent cooperation, we review the intrinsic and external motivations that support cooperation among rational agents, and the methods employed to develop effective strategies against diverse opponents. Second, looking into human-agent cooperation, we discuss the current AI algorithms for cooperating with humans and the human biases towards AI agents. Third, we review the emergent field of leveraging AI agents to enhance cooperation among humans. We conclude by discussing future research avenues, such as using large language models, establishing unified theoretical frameworks, revisiting existing theories of human cooperation, and exploring multiple real-world applications.
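As a concrete illustration of the social dilemmas the survey studies, the sketch below checks that mutual defection is the unique Nash equilibrium of the one-shot Prisoner's Dilemma even though both players would fare better under mutual cooperation. The payoff values follow the textbook convention (T=5 > R=3 > P=1 > S=0) and are not taken from the paper itself:

```python
# One-shot Prisoner's Dilemma: each player chooses C (cooperate) or D (defect).
# Payoffs follow the textbook convention T > R > P > S (here 5 > 3 > 1 > 0).
PAYOFF = {
    ("C", "C"): (3, 3),  # reward for mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # punishment for mutual defection
}

def best_response(opponent_action):
    """Return the action maximizing the row player's payoff against a fixed opponent."""
    return max("CD", key=lambda a: PAYOFF[(a, opponent_action)][0])

# Defection is the best response to either opponent action, so (D, D) is the
# unique Nash equilibrium -- yet (C, C) yields a higher joint payoff: the dilemma.
assert best_response("C") == "D"
assert best_response("D") == "D"
assert sum(PAYOFF[("C", "C")]) > sum(PAYOFF[("D", "D")])
```

The same payoff structure, iterated over repeated rounds, underlies the multi-agent cooperation settings reviewed in the survey.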
Related papers
- Multi-agent cooperation through learning-aware policy gradients [53.63948041506278]
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning.
We present the first unbiased, higher-derivative-free policy gradient algorithm for learning-aware reinforcement learning.
We derive from the iterated prisoner's dilemma a novel explanation for how and when cooperation arises among self-interested learning-aware agents.
arXiv Detail & Related papers (2024-10-24T10:48:42Z)
- Mutual Theory of Mind in Human-AI Collaboration: An Empirical Study with LLM-driven AI Agents in a Real-time Shared Workspace Task [56.92961847155029]
Theory of Mind (ToM), the capability to understand others, significantly impacts human collaboration and communication.
Mutual Theory of Mind (MToM) arises when AI agents with ToM capability collaborate with humans.
We find that the agent's ToM capability does not significantly impact team performance but enhances human understanding of the agent.
arXiv Detail & Related papers (2024-09-13T13:19:48Z)
- Problem Solving Through Human-AI Preference-Based Cooperation [74.39233146428492]
We propose HAI-Co2, a novel human-AI co-construction framework.
We formalize HAI-Co2 and discuss the difficult open research problems that it faces.
We present a case study of HAI-Co2 and demonstrate its efficacy compared to monolithic generative AI models.
arXiv Detail & Related papers (2024-08-14T11:06:57Z)
- CREW: Facilitating Human-AI Teaming Research [3.7324091969140776]
We introduce CREW, a platform to facilitate Human-AI teaming research and engage collaborations from multiple scientific disciplines.
It includes pre-built tasks for cognitive studies and Human-AI teaming with expandable potentials from our modular design.
CREW benchmarks real-time human-guided reinforcement learning agents using state-of-the-art algorithms and well-tuned baselines.
arXiv Detail & Related papers (2024-07-31T21:43:55Z)
- Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions [67.60397632819202]
Building socially-intelligent AI agents (Social-AI) is a multidisciplinary, multimodal research goal.
We identify a set of underlying technical challenges and open questions for researchers across computing communities to advance Social-AI.
arXiv Detail & Related papers (2024-04-17T02:57:42Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- A Review of Cooperation in Multi-agent Learning [5.334450724000142]
Cooperation in multi-agent learning (MAL) is a topic at the intersection of numerous disciplines.
This paper provides an overview of the fundamental concepts, problem settings and algorithms of multi-agent learning.
arXiv Detail & Related papers (2023-12-08T16:42:15Z)
- Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review [6.013543974938446]
Leveraging Artificial Intelligence in decision support systems has disproportionately focused on technological advancements.
A human-centered perspective attempts to alleviate this concern by designing AI solutions for seamless integration with existing processes.
arXiv Detail & Related papers (2023-10-30T17:46:38Z)
- PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination [52.991211077362586]
We propose a policy ensemble method to increase the diversity of partners in the population.
We then develop a context-aware method enabling the ego agent to analyze and identify the partner's potential policy primitives.
In this way, the ego agent is able to learn more universal cooperative behaviors for collaborating with diverse partners.
arXiv Detail & Related papers (2023-01-16T12:14:58Z)
- Adversarial Attacks in Cooperative AI [0.0]
Single-agent reinforcement learning algorithms in a multi-agent environment are inadequate for fostering cooperation.
Recent work in adversarial machine learning shows that models can be easily deceived into making incorrect decisions.
Cooperative AI might introduce new weaknesses not investigated in previous machine learning research.
arXiv Detail & Related papers (2021-11-29T07:34:12Z)
- Open Problems in Cooperative AI [21.303564222227727]
This research agenda aims to study the many aspects of the problem of cooperation and to innovate in AI in order to help solve it.
This research integrates ongoing work on multi-agent systems, game theory and social choice, human-machine interaction and alignment, natural-language processing, and the construction of social tools and platforms.
arXiv Detail & Related papers (2020-12-15T21:39:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.