When to (or not to) trust intelligent machines: Insights from an
evolutionary game theory analysis of trust in repeated games
- URL: http://arxiv.org/abs/2007.11338v1
- Date: Wed, 22 Jul 2020 10:53:49 GMT
- Title: When to (or not to) trust intelligent machines: Insights from an
evolutionary game theory analysis of trust in repeated games
- Authors: The Anh Han, Cedric Perret and Simon T. Powers
- Abstract summary: We study the viability of trust-based strategies in repeated games.
These are reciprocal strategies that cooperate as long as the other player is observed to be cooperating.
By doing so, they reduce the opportunity cost of verifying whether the action of their co-player was actually cooperative.
- Score: 0.8701566919381222
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The actions of intelligent agents, such as chatbots, recommender systems, and
virtual assistants, are typically not fully transparent to the user.
Consequently, using such an agent involves the user exposing themselves to the
risk that the agent may act in a way opposed to the user's goals. It is often
argued that people use trust as a cognitive shortcut to reduce the complexity
of such interactions. Here we formalise this by using the methods of
evolutionary game theory to study the viability of trust-based strategies in
repeated games. These are reciprocal strategies that cooperate as long as the
other player is observed to be cooperating. Unlike classic reciprocal
strategies, once mutual cooperation has been observed for a threshold number of
rounds they stop checking their co-player's behaviour every round, and instead
only check with some probability. By doing so, they reduce the opportunity cost
of verifying whether the action of their co-player was actually cooperative. We
demonstrate that these trust-based strategies can outcompete strategies that
are always conditional, such as Tit-for-Tat, when the opportunity cost is
non-negligible. We argue that this cost is likely to be greater when the
interaction is between people and intelligent agents, because of the reduced
transparency of the agent. Consequently, we expect people to use trust-based
strategies more frequently in interactions with intelligent agents. Our results
provide new, important insights into the design of mechanisms for facilitating
interactions between humans and intelligent agents, where trust is an essential
factor.
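
To make the mechanism concrete, below is a minimal, self-contained sketch of a trust-based strategy compared against an always-checking, Tit-for-Tat-style strategy in a repeated donation game. This is an illustration under stated assumptions, not the paper's model: the payoff values, the trust threshold TAU, the spot-check probability P_CHECK, and the verification cost C_V are all assumed parameter names and numbers.

```python
import random

# Assumed donation-game payoffs: cooperating costs the donor C and gives the
# recipient B; each act of verifying the co-player's behaviour costs C_V in
# missed opportunities. All values below are illustrative, not from the paper.
B, C = 4.0, 1.0        # assumed benefit and cost of cooperation
C_V = 0.5              # assumed per-check opportunity cost of verification
TAU = 5                # assumed rounds of mutual cooperation before trusting
P_CHECK = 0.1          # assumed probability of spot-checking once trust holds
ROUNDS = 200


def play(strategy, opponent_moves):
    """Average per-round payoff of `strategy` ("TFT" or "TRUST") against a
    fixed sequence of opponent moves (True = cooperate, False = defect)."""
    payoff = 0.0
    coop_streak = 0       # consecutive rounds of observed mutual cooperation
    believed_coop = True  # assume cooperation until a check says otherwise
    for opp_coop in opponent_moves:
        # Reciprocal rule: cooperate iff the co-player is believed to cooperate.
        my_coop = believed_coop
        if my_coop:
            payoff -= C
        if opp_coop:
            payoff += B
        # Decide whether to verify what the co-player actually did this round.
        if strategy == "TFT":
            check = True  # always-conditional strategies check every round
        else:  # "TRUST": after TAU trusted rounds, only spot-check sometimes
            check = coop_streak < TAU or random.random() < P_CHECK
        if check:
            payoff -= C_V
            believed_coop = opp_coop
        # Track the run of (believed) mutual cooperation.
        coop_streak = coop_streak + 1 if (my_coop and believed_coop) else 0
    return payoff / len(opponent_moves)


random.seed(0)
always_cooperate = [True] * ROUNDS
late_defector = [True] * 20 + [False] * (ROUNDS - 20)  # exploits built-up trust

for opp_name, opp in [("all-C", always_cooperate), ("late defector", late_defector)]:
    for s in ("TFT", "TRUST"):
        print(f"{s:5s} vs {opp_name:13s}: {play(s, opp):+.3f} per round")
```

With these assumed numbers, the trust-based player outperforms the always-checking player against a consistently cooperative co-player because it pays the verification cost only occasionally, but it loses more against a co-player that defects after trust has been established. This is the trade-off between saved verification costs and exploitation risk that the evolutionary analysis weighs.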
Related papers
- Reciprocal Reward Influence Encourages Cooperation From Self-Interested Agents [2.1301560294088318]
Cooperation between self-interested individuals is a widespread phenomenon in the natural world, but remains elusive in interactions between artificially intelligent agents.
We introduce Reciprocators, reinforcement learning agents which are intrinsically motivated to reciprocate the influence of opponents' actions on their returns.
We show that Reciprocators can be used to promote cooperation in temporally extended social dilemmas during simultaneous learning.
arXiv Detail & Related papers (2024-06-03T06:07:27Z)
- Fast Peer Adaptation with Context-aware Exploration [63.08444527039578]
We propose a peer identification reward for learning agents in multi-agent games.
This reward motivates the agent to learn a context-aware policy for effective exploration and fast adaptation.
We evaluate our method on diverse testbeds that involve competitive (Kuhn Poker), cooperative (PO-Overcooked), or mixed (Predator-Prey-W) games with peer agents.
arXiv Detail & Related papers (2024-02-04T13:02:27Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Game Theoretic Rating in N-player general-sum games with Equilibria [26.166859475522106]
We propose novel algorithms suitable for N-player, general-sum rating of strategies in normal-form games according to the payoff rating system.
This enables well-established solution concepts, such as equilibria, to be leveraged to efficiently rate strategies in games with complex strategic interactions.
arXiv Detail & Related papers (2022-10-05T12:33:03Z)
- Incorporating Rivalry in Reinforcement Learning for a Competitive Game [65.2200847818153]
This work proposes a novel reinforcement learning mechanism based on the social impact of rivalry behavior.
Our proposed model aggregates objective and social perception mechanisms to derive a rivalry score that is used to modulate the learning of artificial agents.
arXiv Detail & Related papers (2022-08-22T14:06:06Z)
- Safe adaptation in multiagent competition [48.02377041620857]
In multiagent competitive scenarios, ego-agents may have to adapt to new opponents with previously unseen behaviors.
As the ego-agent updates its behavior to exploit the opponent, that behavior can itself become more exploitable.
We develop a safe adaptation approach in which the ego-agent is trained against a regularized opponent model.
arXiv Detail & Related papers (2022-03-14T23:53:59Z)
- Cooperative Artificial Intelligence [0.0]
We argue that there is a need for research on the intersection between game theory and artificial intelligence.
We discuss the problem of how an external agent can promote cooperation between artificial learners.
We show that the resulting cooperative outcome is stable in certain games even if the planning agent is turned off.
arXiv Detail & Related papers (2022-02-20T16:50:37Z)
- Incorporating Rivalry in Reinforcement Learning for a Competitive Game [65.2200847818153]
This study provides a novel learning mechanism based on the social impact of rivalry.
Building on the concept of competitive rivalry, the analysis investigates whether the assessment of these agents can be changed from a human perspective.
arXiv Detail & Related papers (2020-11-02T21:54:18Z)
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
- Learning to Resolve Alliance Dilemmas in Many-Player Zero-Sum Games [22.38765498549914]
We argue that a systematic study of many-player zero-sum games is a crucial element of artificial intelligence research.
Using symmetric zero-sum matrix games, we demonstrate formally that alliance formation may be seen as a social dilemma.
We show how reinforcement learning may be augmented with a peer-to-peer contract mechanism to discover and enforce alliances.
arXiv Detail & Related papers (2020-02-27T10:32:31Z)
- Multi-Issue Bargaining With Deep Reinforcement Learning [0.0]
This paper evaluates the use of deep reinforcement learning in bargaining games.
Two actor-critic networks were trained for the bidding and acceptance strategy.
Neural agents learn to exploit time-based agents, achieving clear transitions in decision preference values.
They also demonstrate adaptive behavior against different combinations of concession, discount factors, and behavior-based strategies.
arXiv Detail & Related papers (2020-02-18T18:33:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.