The Digital Ecosystem of Beliefs: does evolution favour AI over humans?
- URL: http://arxiv.org/abs/2412.14500v2
- Date: Wed, 08 Jan 2025 06:52:05 GMT
- Title: The Digital Ecosystem of Beliefs: does evolution favour AI over humans?
- Authors: David M. Bossens, Shanshan Feng, Yew-Soon Ong,
- Abstract summary: The Digital Ecosystem of Beliefs (Digico) is the first evolutionary framework for controlled experimentation with multi-population interactions in simulated social networks. The framework models a population of agents that change their messaging strategies due to evolutionary updates. Experiments show that when AIs have faster messaging, faster evolution, and more influence in the recommendation algorithm, they get 80% to 95% of the views.
- Score: 35.14620900061148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As AI systems are integrated into social networks, there are AI safety concerns that AI-generated content may dominate the web, e.g. in popularity or impact on beliefs. To understand such questions, this paper proposes the Digital Ecosystem of Beliefs (Digico), the first evolutionary framework for controlled experimentation with multi-population interactions in simulated social networks. The framework models a population of agents which change their messaging strategies due to evolutionary updates following a Universal Darwinism approach, interact via messages, influence each other's beliefs through dynamics based on a contagion model, and maintain their beliefs through cognitive Lamarckian inheritance. Initial experiments with an abstract implementation of Digico show that: a) when AIs have faster messaging, evolution, and more influence in the recommendation algorithm, they get 80% to 95% of the views, depending on the size of the influence benefit; b) AIs designed for propaganda can typically convince 50% of humans to adopt extreme beliefs, and up to 85% when agents believe only a limited number of channels; c) a penalty for content that violates agents' beliefs reduces propaganda effectiveness by up to 8%. We further discuss implications for control (e.g. legislation) and Digico as a means of studying evolutionary principles.
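The view-share result in the abstract follows from a simple sampling dynamic: viewers draw content in proportion to messaging rate times recommendation weight. The sketch below illustrates that mechanism only; it is not the authors' Digico implementation, and all function names, parameters, and values (`ai_msg_rate`, `ai_reco_boost`, etc.) are illustrative assumptions.

```python
import random

def simulate_views(steps=10_000, ai_msg_rate=3.0, human_msg_rate=1.0,
                   ai_reco_boost=2.0, seed=0):
    """Return the fraction of views captured by AI agents.

    A toy model of the abstract's claim: each step, one viewer samples a
    message with probability proportional to the producing population's
    messaging rate, with AI content further weighted by the
    recommendation algorithm's boost.
    """
    rng = random.Random(seed)
    ai_views = human_views = 0
    ai_weight = ai_msg_rate * ai_reco_boost      # recommendation bias
    human_weight = human_msg_rate
    for _ in range(steps):
        if rng.random() < ai_weight / (ai_weight + human_weight):
            ai_views += 1
        else:
            human_views += 1
    return ai_views / (ai_views + human_views)

print(f"AI view share: {simulate_views():.2f}")
```

With a 3x messaging rate and a 2x recommendation boost, the AI share converges to 6/7 (about 0.86), in the 80-95% band the abstract reports; varying `ai_reco_boost` moves the share within that range, matching the "size of the influence benefit" dependence.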
Related papers
- When Autonomy Goes Rogue: Preparing for Risks of Multi-Agent Collusion in Social Systems [78.04679174291329]
We introduce a proof-of-concept to simulate the risks of malicious multi-agent systems (MAS). We apply this framework to two high-risk fields: misinformation spread and e-commerce fraud. Our findings show that decentralized systems are more effective at carrying out malicious actions than centralized ones.
arXiv Detail & Related papers (2025-07-19T15:17:30Z) - Serious Games: Human-AI Interaction, Evolution, and Coevolution [0.0]
The objective of this work was to examine some of the EGT models relevant to human-AI interaction, evolution, and coevolution. The Hawk-Dove Game predicts balanced mixed-strategy equilibria based on the costs of conflict. The Iterated Prisoner's Dilemma suggests that repeated interaction may lead to cognitive coevolution. The War of Attrition suggests that competition for resources may result in strategic coevolution.
arXiv Detail & Related papers (2025-05-22T08:41:37Z) - Neurodivergent Influenceability as a Contingent Solution to the AI Alignment Problem [1.3905735045377272]
The AI alignment problem, which focuses on ensuring that artificial intelligence (AI) systems act according to human values, presents profound challenges. With the progression from narrow AI to Artificial General Intelligence (AGI) and Superintelligence, fears about control and existential risk have escalated. Here, we investigate whether embracing inevitable AI misalignment can be a contingent strategy to foster a dynamic ecosystem of competing agents.
arXiv Detail & Related papers (2025-05-05T11:33:18Z) - Do LLMs trust AI regulation? Emerging behaviour of game-theoretic LLM agents [61.132523071109354]
This paper investigates the interplay between AI developers, regulators and users, modelling their strategic choices under different regulatory scenarios.
Our research identifies emerging behaviours of strategic AI agents, which tend to adopt more "pessimistic" stances than pure game-theoretic agents.
arXiv Detail & Related papers (2025-04-11T15:41:21Z) - Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - Visual Agents as Fast and Slow Thinkers [88.6691504568041]
We introduce FaST, which incorporates the Fast and Slow Thinking mechanism into visual agents.
FaST employs a switch adapter to dynamically select between System 1/2 modes.
It tackles uncertain and unseen objects by adjusting model confidence and integrating new contextual data.
arXiv Detail & Related papers (2024-08-16T17:44:02Z) - Thorns and Algorithms: Navigating Generative AI Challenges Inspired by Giraffes and Acacias [1.3882452134471353]
The interplay between humans and Generative AI (Gen AI) draws an insightful parallel with the dynamic relationship between giraffes and acacias on the African Savannah.
This paper explores how, like young giraffes that are still mastering their environment, humans are in the early stages of adapting to and shaping Gen AI.
It delves into the strategies humans are developing and refining to help mitigate risks such as bias, misinformation, and privacy breaches.
arXiv Detail & Related papers (2024-07-16T03:53:25Z) - A Mechanism-Based Approach to Mitigating Harms from Persuasive Generative AI [19.675489660806942]
Generative AI presents a new risk profile of persuasion due to reciprocal exchange and prolonged interactions.
This has led to growing concerns about harms from AI persuasion and how they can be mitigated.
Existing harm mitigation approaches prioritise harms from the outcome of persuasion over harms from the process of persuasion.
arXiv Detail & Related papers (2024-04-23T14:07:20Z) - Discriminatory or Samaritan -- which AI is needed for humanity? An Evolutionary Game Theory Analysis of Hybrid Human-AI populations [0.5308606035361203]
We study how different forms of AI influence the evolution of cooperation in a human population playing the one-shot Prisoner's Dilemma game.
We found that Samaritan AI agents that help everyone unconditionally, including defectors, can promote higher levels of cooperation in humans than Discriminatory AIs.
arXiv Detail & Related papers (2023-06-30T15:56:26Z) - Human-AI Coevolution [48.74579595505374]
Coevolution AI is a process in which humans and AI algorithms continuously influence each other.
This paper introduces Coevolution AI as the cornerstone for a new field of study at the intersection between AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Artificial Influence: An Analysis Of AI-Driven Persuasion [0.0]
We warn that ubiquitous, highly persuasive AI systems could alter our information environment so significantly as to contribute to a loss of human control of our own future.
We conclude that none of these solutions will be airtight, and that individuals and governments will need to take active steps to guard against the most pernicious effects of persuasive AI.
arXiv Detail & Related papers (2023-03-15T16:05:11Z) - Theoretical Modeling of Communication Dynamics [0.0]
The reputation game focuses on the trustworthiness of the participating agents, i.e. their honesty as perceived by others.
Various sender and receiver strategies are studied, such as sycophancy, egocentricity, pathological lying, and aggressiveness for senders.
Minimalist malicious strategies are identified, like being manipulative, dominant, or destructive, which significantly increase reputation at others' costs.
arXiv Detail & Related papers (2021-06-09T22:02:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.