Group Selection as a Safeguard Against AI Substitution
- URL: http://arxiv.org/abs/2602.03541v1
- Date: Tue, 03 Feb 2026 13:56:47 GMT
- Title: Group Selection as a Safeguard Against AI Substitution
- Authors: Qiankun Zhong, Thomas F. Eisenmann, Julian Garcia, Iyad Rahwan,
- Abstract summary: Reliance on generative AI can reduce cultural variance and diversity, especially in creative work. This reduction in variance has already led to problems in model performance, including model collapse and hallucination. Using an agent-based model and evolutionary game theory, we compare two types of AI use: complement and substitute.
- Score: 0.28029990367346164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reliance on generative AI can reduce cultural variance and diversity, especially in creative work. This reduction in variance has already led to problems in model performance, including model collapse and hallucination. In this paper, we examine the long-term consequences of AI use for human cultural evolution and the conditions under which widespread AI use may lead to "cultural collapse", a process in which reliance on AI-generated content reduces human variation and innovation and slows cumulative cultural evolution. Using an agent-based model and evolutionary game theory, we compare two types of AI use: complement and substitute. AI-complement users seek suggestions and guidance while remaining the main producers of the final output, whereas AI-substitute users provide minimal input, and rely on AI to produce most of the output. We then study how these use strategies compete and spread under evolutionary dynamics. We find that AI-substitute users prevail under individual-level selection despite the stronger reduction in cultural variance. By contrast, AI-complement users can benefit their groups by maintaining the variance needed for exploration, and can therefore be favored under cultural group selection when group boundaries are strong. Overall, our findings shed light on the long-term, population-level effects of AI adoption and inform policy and organizational strategies to mitigate these risks.
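The two-level dynamic described in the abstract, where AI-substitute users win within groups while complement-rich groups fare better between groups, can be illustrated with a minimal deterministic sketch. All parameter values and functional forms below (the individual advantage `s_ind`, the group-level advantage `s_grp`, and the payoff-proportional updates) are illustrative assumptions, not the paper's actual agent-based model.

```python
# Illustrative sketch only: a deterministic two-level selection toy model.
# Parameter names and payoff forms are assumptions chosen to make the
# qualitative contrast between selection levels visible.

def simulate(x0, s_ind, s_grp, steps):
    """Track the global fraction of AI-complement users over time.

    x0:    initial complement-user fraction in each group
    s_ind: within-group payoff advantage of AI-substitute users
    s_grp: between-group growth advantage of complement-rich groups
           (a proxy for the exploration benefit of maintained variance)
    """
    xs = list(x0)
    shares = [1.0 / len(xs)] * len(xs)   # equal initial group sizes
    history = []
    for _ in range(steps):
        # Individual-level selection: substitute users out-compete
        # complement users inside every group, shrinking each group's
        # complement fraction.
        xs = [x * (1 - s_ind) / (x * (1 - s_ind) + (1 - x)) for x in xs]
        # Group-level selection: groups with more complement users grow
        # faster, shifting population share toward them.
        w = [sh * (1 + s_grp * x) for sh, x in zip(shares, xs)]
        total = sum(w)
        shares = [wi / total for wi in w]
        history.append(sum(sh * x for sh, x in zip(shares, xs)))
    return history

# Without group selection the complement strategy steadily declines;
# with strong group boundaries, complement-rich groups expand and slow
# (or reverse) that decline.
no_groups = simulate([0.1, 0.9], s_ind=0.05, s_grp=0.0, steps=50)
with_groups = simulate([0.1, 0.9], s_ind=0.05, s_grp=1.0, steps=50)
print(f"individual selection only: {no_groups[-1]:.3f}")
print(f"with group selection:      {with_groups[-1]:.3f}")
```

In this toy version, within-group imitation always erodes the complement strategy, so group-level competition must be strong relative to `s_ind` for complement users to persist, mirroring the paper's finding that strong group boundaries are required.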
Related papers
- Align When They Want, Complement When They Need! Human-Centered Ensembles for Adaptive Human-AI Collaboration [13.041288521972563]
In human-AI decision making, designing AI that complements human expertise has been a natural strategy to enhance human-AI collaboration. An aligned AI fosters trust yet risks reinforcing suboptimal human behavior and lowering human-AI team performance. We introduce a novel human-centered adaptive AI ensemble that strategically toggles between two specialist AI models.
arXiv Detail & Related papers (2026-02-23T18:22:58Z) - Modeling AI-Human Collaboration as a Multi-Agent Adaptation [0.0]
We develop an agent-based simulation to formalize AI-human collaboration as a function of a task. We show that in modular tasks, AI often substitutes for humans, delivering higher payoffs unless human expertise is very high. We also show that even "hallucinatory" AI, lacking memory or structure, can improve outcomes when augmenting low-capability humans by helping them escape local optima.
arXiv Detail & Related papers (2025-04-29T16:19:53Z) - How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - "I Am the One and Only, Your Cyber BFF": Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI [55.99010491370177]
We argue that we cannot thoroughly map the social impacts of generative AI without mapping the social impacts of anthropomorphic AI.
Anthropomorphic AI systems are increasingly prone to generating outputs that are perceived to be human-like.
arXiv Detail & Related papers (2024-10-11T04:57:41Z) - Societal Adaptation to Advanced AI [1.2607853680700076]
Existing strategies for managing risks from advanced AI systems often focus on affecting what AI systems are developed and how they diffuse. We urge a complementary approach: increasing societal adaptation to advanced AI. We introduce a conceptual framework which helps identify adaptive interventions that avoid, defend against, and remedy potentially harmful uses of AI systems.
arXiv Detail & Related papers (2024-05-16T17:52:12Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - Human-AI Interactions and Societal Pitfalls [3.4471935446780355]
When working with generative artificial intelligence (AI), users may see productivity gains, but the AI-generated content may not match their preferences exactly. We show that the interplay between individual-level decisions and AI training may lead to societal challenges.
arXiv Detail & Related papers (2023-09-19T09:09:59Z) - Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces Coevolution AI as the cornerstone for a new field of study at the intersection between AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to the deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.