Theoretical Modeling of Communication Dynamics
- URL: http://arxiv.org/abs/2106.05414v1
- Date: Wed, 9 Jun 2021 22:02:19 GMT
- Title: Theoretical Modeling of Communication Dynamics
- Authors: Torsten Enßlin, Viktoria Kainz, Céline Bœhm
- Abstract summary: The reputation game focuses on the trustworthiness of the participating agents: their honesty as perceived by others.
Various sender and receiver strategies are studied, such as sycophancy, egocentricity, pathological lying, and aggressiveness for senders.
Minimalist malicious strategies are identified, such as being manipulative, dominant, or destructive, which significantly increase reputation at others' cost.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Communication is a cornerstone of social interactions, be it with human or
artificial intelligence (AI). Yet it can be harmful, depending on the honesty
of the exchanged information. To study this, an agent-based sociological
simulation framework, the reputation game, is presented. It illustrates the
impact of different communication strategies on the agents' reputation. The
game focuses on the trustworthiness of the participating agents: their honesty
as perceived by others. In the game, each agent exchanges statements with the
others about their own and each other's honesty, which lets their judgments
evolve. Various sender and receiver strategies are studied, such as sycophancy,
egocentricity, pathological lying, and aggressiveness for senders, as well as
awareness and the lack thereof for receivers. Minimalist malicious strategies are
identified, such as being manipulative, dominant, or destructive, which
significantly increase reputation at others' cost. Phenomena such as echo
chambers, self-deception, deception symbiosis, clique formation, and the freezing of
group opinions emerge from the dynamics. This indicates that the reputation
game can be used to study complex group phenomena, to test behavioral hypotheses,
and to analyze AI-influenced social media. With refined rules it may help to
understand social interactions and to safeguard the design of non-abusive AI
systems.
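The core loop described in the abstract — agents exchanging statements about each other's honesty and updating their judgments — can be illustrated with a toy simulation. This is a minimal sketch under simplifying assumptions (a convex update weighted by trust); the paper itself uses information-theoretic belief updates, and the function and parameter names here are hypothetical.

```python
import random

def play_reputation_game(honesty, rounds=1000, seed=0, lr=0.05):
    """Toy reputation game: each round a random sender tells a random
    receiver how honest a third agent is.  The statement is truthful
    with probability equal to the sender's own honesty.  The receiver
    nudges its estimate of the discussed agent toward the claim,
    weighted by how much it currently trusts the sender.
    (Illustrative only -- not the paper's actual update rules.)"""
    rng = random.Random(seed)
    n = len(honesty)
    # rep[i][j]: agent i's current estimate of agent j's honesty
    rep = [[0.5] * n for _ in range(n)]
    for _ in range(rounds):
        sender, receiver = rng.sample(range(n), 2)
        topic = rng.randrange(n)  # whom the statement is about
        truthful = rng.random() < honesty[sender]
        claim = honesty[topic] if truthful else 1.0 - honesty[topic]
        trust = rep[receiver][sender]  # receiver's trust in the sender
        # Convex move toward the claim keeps estimates in [0, 1].
        rep[receiver][topic] += lr * trust * (claim - rep[receiver][topic])
    return rep
```

Because `topic` may coincide with the receiver, agents also update their self-image from what others say, which is one ingredient behind the self-deception effects the abstract mentions.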
Related papers
- Should agentic conversational AI change how we think about ethics? Characterising an interactional ethics centred on respect [0.12041807591122715]
We propose an interactional approach to ethics that is centred on relational and situational factors.
Our work anticipates a set of largely unexplored risks at the level of situated social interaction.
arXiv Detail & Related papers (2024-01-17T09:44:03Z) - SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents [107.4138224020773]
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and humans.
In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals.
We find that GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills.
arXiv Detail & Related papers (2023-10-18T02:27:01Z) - Capturing Humans' Mental Models of AI: An Item Response Theory Approach [12.129622383429597]
We show that people expect AI agents' performance to be significantly better on average than the performance of other humans.
arXiv Detail & Related papers (2023-05-15T23:17:26Z) - Learning to Influence Human Behavior with Offline Reinforcement Learning [70.7884839812069]
We focus on influence in settings where there is a need to capture human suboptimality.
Experimenting online with humans is potentially unsafe, and creating a high-fidelity simulator of the environment is often impractical.
We show that offline reinforcement learning can learn to effectively influence suboptimal humans by extending and combining elements of observed human-human behavior.
arXiv Detail & Related papers (2023-03-03T23:41:55Z) - Flexible social inference facilitates targeted social learning when rewards are not observable [58.762004496858836]
Groups coordinate more effectively when individuals are able to learn from others' successes.
We suggest that social inference capacities may help bridge this gap, allowing individuals to update their beliefs about others' underlying knowledge and success from observable trajectories of behavior.
arXiv Detail & Related papers (2022-12-01T21:04:03Z) - Incorporating Rivalry in Reinforcement Learning for a Competitive Game [65.2200847818153]
This work proposes a novel reinforcement learning mechanism based on the social impact of rivalry behavior.
Our proposed model aggregates objective and social perception mechanisms to derive a rivalry score that is used to modulate the learning of artificial agents.
arXiv Detail & Related papers (2022-08-22T14:06:06Z) - Learning Triadic Belief Dynamics in Nonverbal Communication from Videos [81.42305032083716]
Nonverbal communication can convey rich social information among agents.
In this paper, we incorporate different nonverbal communication cues to represent, model, learn, and infer agents' mental states.
arXiv Detail & Related papers (2021-04-07T00:52:04Z) - Artificial intelligence in communication impacts language and social relationships [11.212791488179757]
We study the social consequences of one of the most pervasive AI applications: algorithmic response suggestions ("smart replies").
We find that using algorithmic responses increases communication efficiency, use of positive emotional language, and positive evaluations by communication partners.
However, consistent with common assumptions about the negative implications of AI, people are evaluated more negatively if they are suspected to be using algorithmic responses.
arXiv Detail & Related papers (2021-02-10T22:05:11Z) - Incorporating Rivalry in Reinforcement Learning for a Competitive Game [65.2200847818153]
This study proposes a novel learning mechanism based on the social impact of rivalry.
Based on the concept of competitive rivalry, our analysis investigates whether we can change the assessment of these agents from a human perspective.
arXiv Detail & Related papers (2020-11-02T21:54:18Z) - When to (or not to) trust intelligent machines: Insights from an evolutionary game theory analysis of trust in repeated games [0.8701566919381222]
We study the viability of trust-based strategies in repeated games.
These are reciprocal strategies that cooperate as long as the other player is observed to be cooperating.
By doing so, they reduce the opportunity cost of verifying whether the action of their co-player was actually cooperative.
arXiv Detail & Related papers (2020-07-22T10:53:49Z)
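The trust-based strategy described in the entry above — cooperate as long as the co-player is observed to cooperate, while paying the verification cost only occasionally — can be sketched as follows. The class interface and parameter names are hypothetical, not taken from the paper's model.

```python
import random

class TrustBasedAgent:
    """Sketch of a trust-based strategy for a repeated game with costly
    verification: cooperate ("C") while the co-player is believed to
    cooperate, but only inspect their actual move on a fraction of
    rounds, reducing the opportunity cost of verification.  An observed
    defection ("D") breaks trust permanently."""

    def __init__(self, check_prob=0.2, seed=0):
        self.check_prob = check_prob   # fraction of rounds that are verified
        self.rng = random.Random(seed)
        self.trusting = True

    def act(self):
        # Cooperate while trust holds; defect once trust is broken.
        return "C" if self.trusting else "D"

    def observe(self, partner_action):
        # Verification is costly, so the partner's move is only checked
        # some of the time; an unverified move is taken on trust.
        if self.rng.random() < self.check_prob:
            if partner_action == "D":
                self.trusting = False
```

With a low `check_prob`, defections can go unnoticed for a while; the entry's point is that this cheap, partial verification can still sustain cooperation in repeated play.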
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.