The Reasoning Under Uncertainty Trap: A Structural AI Risk
- URL: http://arxiv.org/abs/2402.01743v1
- Date: Mon, 29 Jan 2024 17:16:57 GMT
- Title: The Reasoning Under Uncertainty Trap: A Structural AI Risk
- Authors: Toby D. Pilditch
- Abstract summary: The report provides an exposition of what makes RUU so challenging for both humans and machines.
We detail how this misuse risk connects to a wider network of underlying structural risks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This report examines a novel risk associated with current (and projected) AI
tools. Making effective decisions about future actions requires us to reason
under uncertainty (RUU), and doing so is essential to many critical real-world
problems. Because this challenge often overwhelms human decision-makers, there
is growing demand for AI tools like LLMs to assist them. Having evidenced this
demand and the
incentives behind it, we expose a growing risk: we 1) do not currently
sufficiently understand LLM capabilities in this regard, and 2) have no
guarantees of performance given fundamental computational explosiveness and
deep uncertainty constraints on accuracy. This report provides an exposition of
what makes RUU so challenging for both humans and machines, and relates these
difficulties to prospective AI timelines and capabilities. Having established
this current potential misuse risk, we go on to expose how this seemingly
additive risk (more misuse contributing additively to potential harm) in fact
has multiplicative properties. Specifically, we detail how this misuse risk
connects to a wider network of underlying structural risks (e.g., shifting
incentives, limited transparency, and feedback loops) to produce non-linear
harms. We go on to provide a solutions roadmap that targets multiple leverage
points in the structure of the problem. This includes recommendations for all
involved actors (prospective users, developers, and policy-makers) and enfolds
insights from areas including Decision-making Under Deep Uncertainty and
complex systems theory. We argue this report serves not only to raise
awareness of (and thereby help mitigate/correct) a current, novel AI risk, but
also to raise awareness of the underlying class of structural risks,
illustrating how their interconnected nature poses the twin dangers of
camouflaging their presence whilst amplifying their potential effects.
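The abstract's two central quantitative claims, that seemingly additive misuse risks can compound multiplicatively and that reasoning under uncertainty is computationally explosive, can be illustrated with a short numerical sketch. The snippet below is not taken from the paper; the function names, risk values, branching factor, and horizon are hypothetical choices made purely for exposition.

```python
# Illustrative sketch only (not from the paper): contrasts additive vs.
# multiplicative accumulation of harm across interacting risk factors, and
# shows how quickly a scenario tree grows when reasoning under uncertainty.
# All numbers below are hypothetical.

def additive_harm(risk_factors):
    """Total harm if each risk contributes independently (linear accumulation)."""
    return sum(risk_factors)

def multiplicative_harm(risk_factors):
    """Total harm if risks interact and amplify one another (non-linear accumulation)."""
    harm = 1.0
    for r in risk_factors:
        harm *= (1.0 + r)
    return harm - 1.0

def scenario_count(branching_factor, horizon):
    """Leaf scenarios in a decision tree with `branching_factor` uncertain
    outcomes per step over `horizon` steps."""
    return branching_factor ** horizon

if __name__ == "__main__":
    # Three hypothetical structural risk factors (e.g., shifting incentives,
    # limited transparency, feedback loops), each worth 20% harm on its own.
    risks = [0.2, 0.2, 0.2]
    print(f"additive harm:         {additive_harm(risks):.2f}")        # 0.60
    print(f"multiplicative harm:   {multiplicative_harm(risks):.2f}")  # 0.73
    # Scenario explosion: 4 uncertain outcomes per step, 15-step horizon.
    print(f"scenarios to evaluate: {scenario_count(4, 15):,}")         # > 1 billion
```

Even with three equal, modest risk factors, the interacting (multiplicative) case already exceeds the additive sum, and a branching factor of 4 over a 15-step horizon yields over a billion scenarios to evaluate, which is the flavour of computational explosiveness the abstract points to.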
Related papers
- Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215]
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z) - AI and the Iterable Epistopics of Risk [1.26404863283601]
The risks AI presents to society are broadly understood to be manageable through general calculus.
This paper elaborates how risk is apprehended and managed by a regulator, developer and cyber-security expert.
arXiv Detail & Related papers (2024-04-29T13:33:22Z) - Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science [65.77763092833348]
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z) - On Catastrophic Inheritance of Large Foundation Models [51.41727422011327]
Large foundation models (LFMs) claim remarkable performance, yet serious concerns have been raised about their mythologized and poorly interpreted potential.
We propose to identify a neglected issue deeply rooted in LFMs: Catastrophic Inheritance.
We discuss the challenges behind this issue and propose UIM, a framework to understand the catastrophic inheritance of LFMs from both pre-training and downstream adaptation.
arXiv Detail & Related papers (2024-02-02T21:21:55Z) - Capsa: A Unified Framework for Quantifying Risk in Deep Neural Networks [142.67349734180445]
Existing algorithms that provide risk-awareness to deep neural networks are complex and ad-hoc.
Here we present capsa, a framework for extending models with risk-awareness.
arXiv Detail & Related papers (2023-08-01T02:07:47Z) - Anticipatory Thinking Challenges in Open Worlds: Risk Management [7.820667552233988]
As AI systems become part of everyday life, they too have begun to manage risk.
Learning to identify and mitigate low-frequency, high-impact risks is at odds with the observational bias required to train machine learning models.
Our goal is to spur research into solutions that assess and improve the anticipatory thinking required by AI agents to manage risk in open-worlds and ultimately the real-world.
arXiv Detail & Related papers (2023-06-22T18:31:17Z) - On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z) - Liability regimes in the age of AI: a use-case driven analysis of the
burden of proof [1.7510020208193926]
New emerging technologies powered by Artificial Intelligence (AI) have the potential to disruptively transform our societies for the better.
But there are growing concerns that certain intrinsic characteristics of these methodologies carry potential risks to both safety and fundamental rights.
This paper presents three case studies, as well as the methodology to reach them, that illustrate these difficulties.
arXiv Detail & Related papers (2022-11-03T13:55:36Z) - Current and Near-Term AI as a Potential Existential Risk Factor [5.1806669555925975]
We problematise the notion that current and near-term artificial intelligence technologies have the potential to contribute to existential risk.
We propose the hypothesis that certain already-documented effects of AI can act as existential risk factors.
Our main contribution is an exposition of potential AI risk factors and the causal relationships between them.
arXiv Detail & Related papers (2022-09-21T18:56:14Z) - Ethical and social risks of harm from Language Models [22.964941107198023]
This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs)
A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary expertise and literature from computer science, linguistics, and social sciences.
arXiv Detail & Related papers (2021-12-08T16:09:48Z) - Overcoming Failures of Imagination in AI Infused System Development and
Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.