Can Media Act as a Soft Regulator of Safe AI Development? A Game Theoretical Analysis
- URL: http://arxiv.org/abs/2509.02650v1
- Date: Tue, 02 Sep 2025 12:13:34 GMT
- Title: Can Media Act as a Soft Regulator of Safe AI Development? A Game Theoretical Analysis
- Authors: Henrique Correia da Fonseca, António Fernandes, Zhao Song, Theodor Cimpeanu, Nataliya Balabanova, Adeela Bashir, Paolo Bova, Alessio Buscemi, Alessandro Di Stefano, Manh Hong Duong, Elias Fernandez Domingos, Ndidi Bianca Ogbo, Simon T. Powers, Daniele Proverbio, Zia Ush Shamszaman, Fernando P. Santos, The Anh Han, Marcus Krellner
- Abstract summary: We study whether media coverage has the potential to push AI creators into the production of safe products. Our results reveal that media is indeed able to foster cooperation between creators and users, but not always. By shaping public perception and holding developers accountable, media emerges as a powerful soft regulator.
- Score: 57.68073583427415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When developers of artificial intelligence (AI) products need to decide between profit and safety for the users, they likely choose profit. Untrustworthy AI technology must come packaged with tangible negative consequences. Here, we envisage those consequences as the loss of reputation caused by media coverage of their misdeeds, disseminated to the public. We explore whether media coverage has the potential to push AI creators into the production of safe products, enabling widespread adoption of AI technology. We created artificial populations of self-interested creators and users and studied them through the lens of evolutionary game theory. Our results reveal that media is indeed able to foster cooperation between creators and users, but not always. Cooperation does not evolve if the quality of the information provided by the media is not reliable enough, or if the costs of either accessing media or ensuring safety are too high. By shaping public perception and holding developers accountable, media emerges as a powerful soft regulator -- guiding AI safety even in the absence of formal government oversight.
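The abstract describes a two-population evolutionary game: creators choose between safe and unsafe development, users choose whether to adopt, abstain, or pay to consult the media before adopting, and media accuracy determines how reliably unsafe creators lose reputation. The Python sketch below illustrates the general shape of such a model. All parameter names and payoff values (B, H, P, C_S, C_M, Q) and the Fermi imitation update are illustrative assumptions, not the paper's actual specification.

```python
# A minimal, illustrative sketch of the kind of two-population evolutionary
# game the abstract describes. All parameters, payoffs, and the Fermi
# imitation rule are assumptions for illustration, not the paper's model.
import math
import random

# Illustrative parameters (assumed, not taken from the paper)
B = 4.0     # user's benefit from adopting a safe product
H = 5.0     # user's harm from adopting an unsafe product
P = 3.0     # creator's profit when a user adopts
C_S = 1.0   # creator's sunk cost of developing safely
C_M = 0.5   # user's cost of consulting the media
Q = 0.8     # media accuracy: prob. of reporting the creator's true behaviour
BETA = 1.0  # selection strength in the Fermi update
N = 100     # population size per role
STEPS = 20_000

SAFE, UNSAFE = 0, 1                   # creator strategies
ADOPT, IGNORE, CONDITIONAL = 0, 1, 2  # user strategies

def payoffs(creator, user):
    """Expected one-shot payoffs (creator, user) for a single interaction."""
    if user == IGNORE:
        # safe creators still pay the sunk development cost
        return (-C_S if creator == SAFE else 0.0), 0.0
    if user == ADOPT:
        if creator == SAFE:
            return P - C_S, B
        return P, -H
    # CONDITIONAL: user pays C_M and adopts only if the media reports "safe"
    if creator == SAFE:
        return Q * P - C_S, Q * B - C_M         # correctly reported w.p. Q
    return (1 - Q) * P, -(1 - Q) * H - C_M      # misreported safe w.p. 1 - Q

def avg_payoff(strategy, role, creators, users):
    """Average payoff of `strategy` against the whole co-population."""
    if role == "creator":
        return sum(payoffs(strategy, u)[0] for u in users) / len(users)
    return sum(payoffs(c, strategy)[1] for c in creators) / len(creators)

def fermi_step(pop, role, creators, users):
    """A random individual imitates a random model with probability given
    by the Fermi function of their payoff difference."""
    i, j = random.sample(range(len(pop)), 2)
    pi = avg_payoff(pop[i], role, creators, users)
    pj = avg_payoff(pop[j], role, creators, users)
    if random.random() < 1.0 / (1.0 + math.exp(-BETA * (pj - pi))):
        pop[i] = pop[j]

creators = [random.choice((SAFE, UNSAFE)) for _ in range(N)]
users = [random.choice((ADOPT, IGNORE, CONDITIONAL)) for _ in range(N)]
for _ in range(STEPS):
    if random.random() < 0.5:
        fermi_step(creators, "creator", creators, users)
    else:
        fermi_step(users, "user", creators, users)

print("fraction of safe creators:    ", creators.count(SAFE) / N)
print("fraction of conditional users:", users.count(CONDITIONAL) / N)
```

In this toy version, conditional users pay C_M but avoid most of the harm H when Q is high, which in turn makes safety pay off for creators; as Q drops toward chance or C_M and C_S grow, the imitation dynamics drift back to unsafe creators and non-adopting users, mirroring the qualitative conditions stated in the abstract.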
Related papers
- "We are not Future-ready": Understanding AI Privacy Risks and Existing Mitigation Strategies from the Perspective of AI Developers in Europe [56.1653658714305]
We interviewed 25 AI developers based in Europe to understand which privacy threats they believe pose the greatest risk to users, developers, and businesses. We find that there is little consensus among AI developers on the relative ranking of privacy risks. While AI developers are aware of proposed mitigation strategies for addressing these risks, they reported minimal real-world adoption.
arXiv Detail & Related papers (2025-10-01T13:51:33Z)
- Designing AI-Enabled Countermeasures to Cognitive Warfare [0.0]
Foreign information operations on social media platforms pose significant risks to democratic societies. With the rise of Artificial Intelligence (AI), this threat is likely to intensify, potentially overwhelming human defenders. This paper proposes possible AI-enabled countermeasures against cognitive warfare.
arXiv Detail & Related papers (2025-04-14T11:36:03Z)
- Who is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents [29.070947259551478]
We analyzed 202 real-world AI privacy and ethical incidents. This produced a taxonomy that classifies incident types across AI lifecycle stages. It accounts for contextual factors such as causes, responsible entities, disclosure sources, and impacts.
arXiv Detail & Related papers (2025-03-28T21:57:38Z)
- A+AI: Threats to Society, Remedies, and Governance [0.0]
This document focuses on the threats, especially near-term threats, that Artificial Intelligence (AI) brings to society.
It includes a table showing which countermeasures are likely to mitigate which threats.
The paper lists specific actions government should take as soon as possible.
arXiv Detail & Related papers (2024-09-03T18:43:47Z)
- AI Safety: Necessary, but insufficient and possibly problematic [1.6797508081737678]
This article critically examines the recent hype around AI safety.
We consider what 'AI safety' actually means, and outline the dominant concepts that the digital footprint of AI safety aligns with.
We share our concerns on how AI safety may normalize AI that advances structural harm through providing exploitative and harmful AI with a veneer of safety.
arXiv Detail & Related papers (2024-03-26T06:18:42Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Filling gaps in trustworthy development of AI [20.354549569362035]
Growing awareness of potential risks from AI systems has spurred action to address those risks.
But the principles often leave a gap between the "what" and the "how" of trustworthy AI development.
There is thus an urgent need for concrete methods that both enable AI developers to prevent harm and allow them to demonstrate their trustworthiness.
arXiv Detail & Related papers (2021-12-14T22:45:28Z)
- Rebuilding Trust: Queer in AI Approach to Artificial Intelligence Risk Management [0.0]
Trustworthy AI has become an important topic because trust in AI systems and their creators has been lost, or was never present in the first place.
We argue that any AI development, deployment, and monitoring framework that aspires to trust must incorporate both feminist and non-exploitative design principles.
arXiv Detail & Related papers (2021-09-21T21:22:58Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.