The Problem of Algorithmic Collisions: Mitigating Unforeseen Risks in a Connected World
- URL: http://arxiv.org/abs/2505.20181v1
- Date: Mon, 26 May 2025 16:22:18 GMT
- Title: The Problem of Algorithmic Collisions: Mitigating Unforeseen Risks in a Connected World
- Authors: Maurice Chiodo, Dennis Müller
- Abstract summary: The increasing deployment of Artificial Intelligence (AI) and other autonomous algorithmic systems presents the world with new systemic risks. Current governance frameworks are inadequate as they lack visibility into this complex ecosystem of interactions. This paper outlines the nature of this challenge and proposes some initial policy suggestions centered on increasing transparency and accountability through phased system registration, a licensing framework for deployment, and enhanced monitoring capabilities.
- Score: 2.8775022881551666
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The increasing deployment of Artificial Intelligence (AI) and other autonomous algorithmic systems presents the world with new systemic risks. While focus often lies on the function of individual algorithms, a critical and underestimated danger arises from their interactions, particularly when algorithmic systems operate without awareness of each other, or when those deploying them are unaware of the full algorithmic ecosystem in which deployment is occurring. These interactions can lead to unforeseen, rapidly escalating negative outcomes - from market crashes and energy supply disruptions to potential physical accidents and erosion of public trust - often exceeding the human capacity for effective monitoring and the legal capacities for proper intervention. Current governance frameworks are inadequate as they lack visibility into this complex ecosystem of interactions. This paper outlines the nature of this challenge and proposes some initial policy suggestions centered on increasing transparency and accountability through phased system registration, a licensing framework for deployment, and enhanced monitoring capabilities.
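To make the proposed mechanism concrete, here is a minimal sketch of what a phased registration record with a licensing gate could look like. The schema, phase names, and fields are illustrative assumptions; the paper proposes the policy mechanism, not a concrete data model.

```python
# Illustrative sketch of phased system registration with a licensing gate.
# All fields, phases, and rules below are hypothetical assumptions.
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    REGISTERED = 1   # system is declared in a public registry
    LICENSED = 2     # deployment license granted after review
    MONITORED = 3    # live telemetry feeds a supervisory monitor

@dataclass
class AlgorithmicSystem:
    system_id: str
    operator: str
    domain: str                                   # e.g. "trading", "energy-dispatch"
    interacts_with: list[str] = field(default_factory=list)
    phase: Phase = Phase.REGISTERED

def may_deploy(system: AlgorithmicSystem, registry: dict[str, AlgorithmicSystem]) -> bool:
    # Deployment requires a license, and every declared counterparty
    # system must itself be registered, so interactions stay visible.
    if system.phase.value < Phase.LICENSED.value:
        return False
    return all(peer in registry for peer in system.interacts_with)

registry = {"bot-A": AlgorithmicSystem("bot-A", "FirmA", "trading", phase=Phase.LICENSED)}
bot_b = AlgorithmicSystem("bot-B", "FirmB", "trading",
                          interacts_with=["bot-A"], phase=Phase.LICENSED)
print(may_deploy(bot_b, registry))   # True: licensed, and its peers are visible
```

A registry of this shape is what would give regulators the visibility into cross-system interactions that the abstract says current frameworks lack.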
Related papers
- When Autonomy Goes Rogue: Preparing for Risks of Multi-Agent Collusion in Social Systems [78.04679174291329]
We introduce a proof-of-concept to simulate the risks of malicious multi-agent systems (MAS). We apply this framework to two high-risk fields: misinformation spread and e-commerce fraud. Our findings show that decentralized systems are more effective at carrying out malicious actions than centralized ones.
arXiv Detail & Related papers (2025-07-19T15:17:30Z)
- SafeMobile: Chain-level Jailbreak Detection and Automated Evaluation for Multimodal Mobile Agents [58.21223208538351]
This work explores the security issues surrounding mobile multimodal agents. It attempts to construct a risk discrimination mechanism by incorporating behavioral sequence information. It also designs an automated assisted assessment scheme based on a large language model.
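As a rough illustration of chain-level (sequence-based) risk discrimination, here is a tiny sketch; the actions, weights, and threshold are invented for illustration and are not SafeMobile's actual mechanism.

```python
# Hypothetical sketch: score an agent's behavior chain, not single steps.
RISKY_BIGRAMS = {
    ("read_contacts", "send_network"): 0.9,      # exfiltration-like pattern
    ("open_settings", "disable_security"): 0.8,  # privilege-weakening pattern
}

def sequence_risk(actions):
    # Score consecutive action pairs; risk emerges from the chain.
    pairs = zip(actions, actions[1:])
    return max((RISKY_BIGRAMS.get(p, 0.0) for p in pairs), default=0.0)

trace = ["open_app", "read_contacts", "send_network"]
print(sequence_risk(trace) > 0.5)   # True: flag for LLM-assisted review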
arXiv Detail & Related papers (2025-07-01T15:10:00Z)
- Open Challenges in Multi-Agent Security: Towards Secure Systems of Interacting AI Agents [0.0]
Decentralized AI agents will soon interact across internet platforms, creating security challenges beyond traditional cybersecurity and AI safety frameworks. We introduce multi-agent security, a new field dedicated to securing networks of decentralized AI agents against threats that emerge or amplify through their interactions.
arXiv Detail & Related papers (2025-05-04T12:03:29Z)
- Multi-Agent Risks from Advanced AI [90.74347101431474]
Multi-agent systems of advanced AI pose novel and under-explored risks. We identify three key failure modes based on agents' incentives, as well as seven key risk factors. We highlight several important instances of each risk, as well as promising directions to help mitigate them.
arXiv Detail & Related papers (2025-02-19T23:03:21Z)
- VulRG: Multi-Level Explainable Vulnerability Patch Ranking for Complex Systems Using Graphs [20.407534993667607]
This work introduces a graph-based framework for vulnerability patch prioritization. It integrates diverse data sources and metrics into a universally applicable model. Refined risk metrics enable detailed assessments at the component, asset, and system levels.
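To show what graph-based patch prioritization might look like in practice, here is a minimal sketch (not the VulRG implementation): it propagates assumed per-component risk over a hypothetical dependency graph with networkx and ranks patches by the criticality of what they fix.

```python
# Hypothetical sketch of graph-based patch prioritization (not VulRG's code).
import networkx as nx

# Toy system: an edge points from a component to a component that depends on it.
g = nx.DiGraph([("db", "api"), ("api", "web"), ("auth", "api"), ("auth", "web")])

# Assumed per-component base risk, e.g. derived from CVSS scores of open CVEs.
base_risk = {"db": 0.9, "api": 0.6, "web": 0.3, "auth": 0.8}

# Propagate risk along dependency edges with personalized PageRank, so
# components exposed to many risky upstream dependencies score higher.
criticality = nx.pagerank(g, personalization=base_risk)

# Hypothetical patches mapped to the components they remediate.
patches = {"patch-A": ["db"], "patch-B": ["auth", "api"], "patch-C": ["web"]}

# Rank patches by the total criticality they remove (system-level view).
ranked = sorted(patches, key=lambda p: -sum(criticality[c] for c in patches[p]))
print(ranked)
```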
arXiv Detail & Related papers (2025-02-16T14:21:52Z)
- Safety is Essential for Responsible Open-Ended Systems [47.172735322186]
Open-Endedness is the ability of AI systems to continuously and autonomously generate novel and diverse artifacts or solutions. This position paper argues that the inherently dynamic and self-propagating nature of Open-Ended AI introduces significant, underexplored risks.
arXiv Detail & Related papers (2025-02-06T21:32:07Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Advancements In Crowd-Monitoring System: A Comprehensive Analysis of Systematic Approaches and Automation Algorithms: State-of-The-Art [0.0]
Crowd monitoring systems depend on a bifurcated approach, encompassing vision-based and non-vision-based technologies.
This paper endeavors to present an in-depth analysis of the recent incorporation of artificial intelligence (AI) algorithms and models into automated systems.
arXiv Detail & Related papers (2023-08-07T20:50:48Z)
- Safe Multi-agent Learning via Trapping Regions [89.24858306636816]
We apply the concept of trapping regions, known from the qualitative theory of dynamical systems, to create safety sets in the joint strategy space for decentralized learning.
We propose a binary partitioning algorithm for verification that candidate sets form trapping regions in systems with known learning dynamics, and a sampling algorithm for scenarios where learning dynamics are not known.
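As a rough illustration of the sampling-style check, here is a minimal sketch under assumed, toy learning dynamics: it samples points on the boundary of a candidate box in joint strategy space and tests whether the flow points inward everywhere. The dynamics and the box are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of a sampling check that a candidate box is trapping.
import numpy as np

def f(x):
    # Toy two-player gradient dynamics with a stable point at the origin.
    return np.array([-x[0] + 0.1 * x[1], -x[1] - 0.1 * x[0]])

def is_trapping_box(lo, hi, samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lo)
    for _ in range(samples):
        x = rng.uniform(lo, hi)
        face = rng.integers(dim)                    # pick a coordinate face
        side = rng.integers(2)                      # lower or upper face
        x[face] = (lo, hi)[side][face]              # project onto that face
        normal = np.zeros(dim)
        normal[face] = -1.0 if side == 0 else 1.0   # outward normal
        if np.dot(f(x), normal) >= 0:               # flow not strictly inward
            return False
    return True   # no counterexample found (evidence, not a proof)

print(is_trapping_box(np.array([-1.0, -1.0]), np.array([1.0, 1.0])))   # True
```

Sampling gives only statistical evidence; the paper's binary partitioning algorithm is what provides verification when the learning dynamics are known.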
arXiv Detail & Related papers (2023-02-27T14:47:52Z)
- Harms from Increasingly Agentic Algorithmic Systems [21.613581713046464]
Research in Fairness, Accountability, Transparency, and Ethics (FATE) has established many sources and forms of algorithmic harm.
Despite these ongoing harms, new systems are being developed and deployed that threaten to perpetuate the same harms.
arXiv Detail & Related papers (2023-02-20T21:42:41Z)
- SOTIF Entropy: Online SOTIF Risk Quantification and Mitigation for Autonomous Driving [16.78084912175149]
This paper proposes the "Self-Surveillance and Self-Adaption System" as a systematic approach to minimizing SOTIF risk online.
The core of this system is the risk monitoring of the implemented artificial intelligence algorithms within the autonomous vehicles.
The inherent perception algorithm risk and external collision risk are jointly quantified via SOTIF entropy.
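To illustrate the flavor of an entropy-based risk signal, here is a minimal sketch; it is not the paper's exact SOTIF entropy formulation, and the weighting and inputs are assumptions. Higher uncertainty in the perception output distribution is treated as higher inherent algorithm risk and blended with an assumed external collision-risk term.

```python
# Hypothetical sketch of an entropy-style joint risk signal.
import math

def shannon_entropy(probs):
    # Entropy of a class-probability distribution, in bits.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def joint_sotif_risk(class_probs, collision_risk, w_perception=0.5):
    # Normalize entropy to [0, 1] by its maximum, log2(num classes),
    # then blend with an external collision risk in [0, 1].
    h = shannon_entropy(class_probs) / math.log2(len(class_probs))
    return w_perception * h + (1 - w_perception) * collision_risk

# Confident detection, low collision risk -> low joint risk.
print(joint_sotif_risk([0.95, 0.03, 0.02], collision_risk=0.1))
# Uncertain detection, high collision risk -> high joint risk.
print(joint_sotif_risk([0.4, 0.35, 0.25], collision_risk=0.7))
```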
arXiv Detail & Related papers (2022-11-08T05:02:12Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)