Deception in Social Learning: A Multi-Agent Reinforcement Learning
Perspective
- URL: http://arxiv.org/abs/2106.05402v1
- Date: Wed, 9 Jun 2021 21:34:11 GMT
- Title: Deception in Social Learning: A Multi-Agent Reinforcement Learning
Perspective
- Authors: Paul Chelarescu
- Abstract summary: Within the framework of Multi-Agent Reinforcement Learning, Social Learning is a new class of algorithms that enables agents to reshape the reward functions of other agents, with the goal of promoting cooperation and achieving higher global rewards in mixed-motive games. This research review introduces the problem statement, defines key concepts, critically evaluates existing evidence, and identifies open problems to be addressed in future research.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Within the framework of Multi-Agent Reinforcement Learning, Social Learning
is a new class of algorithms that enables agents to reshape the reward function
of other agents with the goal of promoting cooperation and achieving higher
global rewards in mixed-motive games. However, this new modification allows
agents unprecedented access to each other's learning process, which can
drastically increase the risk of manipulation when an agent does not realize it
is being deceived into adopting policies which are not actually in its own best
interest. This research review introduces the problem statement, defines key
concepts, critically evaluates existing evidence, and identifies open problems
to be addressed in future research.
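The manipulation risk the abstract describes can be made concrete with a minimal sketch. The function names and numbers below are purely illustrative (not from the paper): a learner that trains on a reshaped reward, i.e. its environment reward plus a bonus chosen by another agent, can have its preferred action flipped by a sufficiently large incentive, even when the underlying environment reward no longer favors that action.

```python
# Hypothetical sketch of reward reshaping in social learning: the learner
# evaluates actions by environment reward plus an incentive that another
# agent controls. A manipulative incentivizer can steer the learner toward
# actions that are not in the learner's own best interest.

def preferred_action(env_rewards, incentives):
    """Return the index of the action with the highest reshaped reward."""
    totals = [r + i for r, i in zip(env_rewards, incentives)]
    return max(range(len(totals)), key=totals.__getitem__)

env_rewards = [1.0, 0.2]      # action 0 truly serves the learner
no_incentive = [0.0, 0.0]
manipulative = [0.0, 1.5]     # bonus on action 1 overrides the true payoff

assert preferred_action(env_rewards, no_incentive) == 0
assert preferred_action(env_rewards, manipulative) == 1
```

The learner cannot distinguish a benevolent incentive (aligning it with higher global reward) from a deceptive one; the reshaped reward is all it sees, which is exactly the vulnerability the review examines.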
Related papers
- From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process.
We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
arXiv Detail & Related papers (2024-11-06T10:35:11Z)
- Multi-agent cooperation through learning-aware policy gradients [53.63948041506278]
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning.
We present the first unbiased, higher-derivative-free policy gradient algorithm for learning-aware reinforcement learning.
We derive from the iterated prisoner's dilemma a novel explanation for how and when cooperation arises among self-interested learning-aware agents.
arXiv Detail & Related papers (2024-10-24T10:48:42Z)
- Reciprocal Reward Influence Encourages Cooperation From Self-Interested Agents [2.1301560294088318]
Cooperation between self-interested individuals is a widespread phenomenon in the natural world, but remains elusive in interactions between artificially intelligent agents.
We introduce Reciprocators, reinforcement learning agents which are intrinsically motivated to reciprocate the influence of opponents' actions on their returns.
We show that Reciprocators can be used to promote cooperation in temporally extended social dilemmas during simultaneous learning.
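A hedged toy version of the reciprocation idea (assumed for illustration, not the paper's exact formulation): give the agent an intrinsic bonus that is positive when it returns influence of the same sign it received from an opponent, and negative when it responds with influence of the opposite sign.

```python
# Illustrative intrinsic-reward term for a "Reciprocator"-style agent.
# opp_influence_on_me: estimated effect of the opponent's action on this
# agent's return (positive if the opponent helped).
# my_influence_on_opp: estimated effect of this agent's action on the
# opponent's return. Matching signs -> reciprocation is rewarded.

def reciprocation_bonus(opp_influence_on_me, my_influence_on_opp, strength=1.0):
    return strength * opp_influence_on_me * my_influence_on_opp

helped_back = reciprocation_bonus(0.5, 0.5)     # cooperation returned: bonus > 0
defected_on = reciprocation_bonus(0.5, -0.5)    # cooperation punished: bonus < 0
```

Under this sign convention, tit-for-tat-like behavior maximizes the intrinsic term, which is one way such agents can sustain cooperation during simultaneous learning.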
arXiv Detail & Related papers (2024-06-03T06:07:27Z)
- SocialGFs: Learning Social Gradient Fields for Multi-Agent Reinforcement Learning [58.84311336011451]
We propose a novel gradient-based state representation for multi-agent reinforcement learning.
We employ denoising score matching to learn the social gradient fields (SocialGFs) from offline samples.
In practice, we integrate SocialGFs into the widely used multi-agent reinforcement learning algorithms, e.g., MAPPO.
arXiv Detail & Related papers (2024-05-03T04:12:19Z)
- Peer Learning: Learning Complex Policies in Groups from Scratch via Action Recommendations [16.073203911932872]
Peer learning is a novel high-level reinforcement learning framework for agents learning in groups.
We show that peer learning is able to outperform single-agent learning and baseline approaches in several challenging OpenAI Gym domains.
arXiv Detail & Related papers (2023-12-15T17:01:35Z)
- Adversarial Attacks in Cooperative AI [0.0]
Single-agent reinforcement learning algorithms in a multi-agent environment are inadequate for fostering cooperation.
Recent work in adversarial machine learning shows that models can be easily deceived into making incorrect decisions.
Cooperative AI might introduce new weaknesses not investigated in previous machine learning research.
arXiv Detail & Related papers (2021-11-29T07:34:12Z)
- Multiagent Deep Reinforcement Learning: Challenges and Directions Towards Human-Like Approaches [0.0]
We present the most common multiagent problem representations and their main challenges.
We identify five research areas that address one or more of these challenges.
We suggest that, for multiagent reinforcement learning to be successful, future research addresses these challenges with an interdisciplinary approach.
arXiv Detail & Related papers (2021-06-29T19:53:15Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- Emergent Social Learning via Multi-agent Reinforcement Learning [91.57176641192771]
Social learning is a key component of human and animal intelligence.
This paper investigates whether independent reinforcement learning agents can learn to use social learning to improve their performance.
arXiv Detail & Related papers (2020-10-01T17:54:14Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
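The learned incentive function can be sketched in a few lines. This is a deliberately simplified stand-in, not the paper's actual algorithm (which trains the incentive function by differentiating through the recipients' learning updates): here each agent carries a table of per-action bonuses it pays out to others, and a crude update raises the bonus on actions it wants to encourage.

```python
# Toy stand-in for a learned incentive function: a table mapping the
# recipient's action to a bonus reward the incentivizing agent pays out.
# All names and the update rule are illustrative assumptions.

class Incentivizer:
    def __init__(self, n_actions: int):
        # one adjustable bonus per recipient action, initialized to zero
        self.weights = [0.0] * n_actions

    def incentive(self, recipient_action: int) -> float:
        """Bonus reward granted for the recipient's chosen action."""
        return self.weights[recipient_action]

    def encourage(self, action: int, amount: float) -> None:
        """Crude update: raise the bonus paid for a desired action."""
        self.weights[action] += amount

inc = Incentivizer(n_actions=2)
inc.encourage(action=1, amount=0.5)   # pay extra when the recipient plays 1
```

The same channel that lets an incentivizer align others with the common good is the one a deceptive agent could exploit, which connects this line of work back to the review's central concern.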
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
- Student/Teacher Advising through Reward Augmentation [0.0]
Transfer learning aims to help an agent learn about a problem by using knowledge that it has gained solving another problem.
I propose a method which allows the teacher/student framework to be applied in a way that fits directly and naturally into the more general reinforcement learning framework.
arXiv Detail & Related papers (2020-02-07T18:15:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.