Examining the Differential Risk from High-level Artificial Intelligence and the Question of Control
- URL: http://arxiv.org/abs/2211.03157v4
- Date: Fri, 24 Nov 2023 06:04:31 GMT
- Title: Examining the Differential Risk from High-level Artificial Intelligence and the Question of Control
- Authors: Kyle A. Kilian, Christopher J. Ventura, and Mark M. Bailey
- Abstract summary: The extent and scope of future AI capabilities remain a key uncertainty. There are concerns over the extent of integration and oversight of opaque AI decision processes. This study presents a hierarchical complex systems framework to model AI risk and provide a template for alternative futures analysis.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial Intelligence (AI) is one of the most transformative technologies
of the 21st century. The extent and scope of future AI capabilities remain a
key uncertainty, with widespread disagreement on timelines and potential
impacts. As nations and technology companies race toward greater complexity and
autonomy in AI systems, there are concerns over the extent of integration and
oversight of opaque AI decision processes. This is especially true in the
subfield of machine learning (ML), where systems learn to optimize objectives
without human assistance. Objectives can be imperfectly specified or executed
in an unexpected or potentially harmful way. This becomes more concerning as
systems increase in power and autonomy, where an abrupt capability jump could
result in unexpected shifts in power dynamics or even catastrophic failures.
This study presents a hierarchical complex systems framework to model AI risk
and provide a template for alternative futures analysis. Survey data were
collected from domain experts in the public and private sectors to classify AI
impact and likelihood. The results show increased uncertainty over the powerful
AI agent scenario, confidence in multiagent environments, and increased concern
over AI alignment failures and influence-seeking behavior.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Societal Adaptation to Advanced AI
Existing strategies for managing risks from advanced AI systems often focus on affecting what AI systems are developed and how they diffuse.
We urge a complementary approach: increasing societal adaptation to advanced AI.
We introduce a conceptual framework which helps identify adaptive interventions that avoid, defend against and remedy potentially harmful uses of AI systems.
arXiv Detail & Related papers (2024-05-16T17:52:12Z)
- Artificial intelligence and the transformation of higher education institutions
This article develops a causal loop diagram (CLD) to map the causal feedback mechanisms of AI transformation in a typical HEI.
Our model accounts for the forces that drive the AI transformation and the consequences of the AI transformation on value creation in a typical HEI.
The article identifies and analyzes several reinforcing and balancing feedback loops, showing how the HEI invests in AI to improve student learning, research, and administration.
arXiv Detail & Related papers (2024-02-13T00:36:10Z)
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Potentiality and Awareness: A Position Paper from the Perspective of Human-AI Teaming in Cybersecurity
We argue that human-AI teaming is worthwhile in cybersecurity.
We emphasize the importance of a balanced approach that incorporates AI's computational power with human expertise.
arXiv Detail & Related papers (2023-09-28T01:20:44Z)
- Fairness in AI and Its Long-Term Implications on Society
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- AI Maintenance: A Robustness Perspective
We highlight robustness challenges in the AI lifecycle and motivate AI maintenance by analogy to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2)
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Building Bridges: Generative Artworks to Explore AI Ethics
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can serve as accessible and powerful educational tools for AI ethics.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.