Impossibility Results in AI: A Survey
- URL: http://arxiv.org/abs/2109.00484v1
- Date: Wed, 1 Sep 2021 16:52:13 GMT
- Title: Impossibility Results in AI: A Survey
- Authors: Mario Brcic and Roman V. Yampolskiy
- Abstract summary: An impossibility theorem demonstrates that a particular problem or set of problems cannot be solved as described in the claim.
We have categorized impossibility theorems applicable to the domain of AI into five categories: deduction, indistinguishability, induction, tradeoffs, and intractability.
We conclude that deductive impossibilities deny 100% guarantees for security.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An impossibility theorem demonstrates that a particular problem or set of problems cannot be solved as described in the claim. Such theorems put limits on what is possible concerning artificial intelligence, especially superintelligent AI. As such, these results serve as guidelines, reminders, and warnings to AI safety, AI policy, and governance researchers. They may also enable progress on some long-standing questions by formalizing theories in a constraint-satisfaction framework without committing to any one option. In this paper, we categorize impossibility theorems applicable to the domain of AI into five categories: deduction, indistinguishability, induction, tradeoffs, and intractability. We find that certain theorems are too specific or rest on implicit assumptions that limit their application. We also add a new result (theorem) on the unfairness of explainability, the first explainability-related result in the induction category. We conclude that deductive impossibilities deny 100% guarantees for security. Finally, we offer some ideas in explainability, controllability, value alignment, ethics, and group decision-making that hold potential and can be deepened by further investigation.
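The diagonalization pattern behind many deductive impossibilities can be made concrete. A minimal Python sketch of why a perfect safety verifier cannot exist, and hence why a 100% security guarantee is ruled out; `is_safe` and `misbehave` are hypothetical placeholders, not a real API:

```python
def misbehave():
    """Stands in for any violation of the safety specification (hypothetical)."""
    pass

def is_safe(program, arg):
    """Hypothetical perfect verifier: True iff program(arg) never misbehaves.

    The diagonalization below shows no total, correct implementation exists.
    """
    raise NotImplementedError

def adversary(program):
    # Misbehave exactly when the verifier certifies this very call as safe.
    if is_safe(program, program):
        misbehave()
    # otherwise: halt without misbehaving

# Consider is_safe(adversary, adversary):
#   True  -> adversary(adversary) misbehaves, so the verdict was wrong;
#   False -> adversary(adversary) halts safely, so the verdict was wrong.
# Either way the "perfect" verifier errs, ruling out a 100% guarantee.
```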
Related papers
- Formal Mathematical Reasoning: A New Frontier in AI
We advocate for formal mathematical reasoning and argue that it is indispensable for advancing AI4Math to the next level.
We summarize existing progress, discuss open challenges, and envision critical milestones to measure future success.
arXiv Detail & Related papers (2024-12-20T17:19:24Z)
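"Formal" in the paper above means machine-checkable by a proof assistant. As a minimal illustration of the kind of artifact such systems verify (my example, not one from the paper), a Lean 4 snippet whose correctness the kernel checks mechanically:

```lean
-- Every step is verified by the proof checker; nothing rests on informal prose.
theorem add_zero_right (n : Nat) : n + 0 = n := rfl

example : 2 + 2 = 4 := rfl  -- accepted by definitional computation
```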
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act) using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Can a Bayesian Oracle Prevent Harm from an Agent?
We consider estimating a context-dependent bound on the probability of violating a given safety specification.
Noting that different plausible hypotheses about the world could produce very different outcomes, we derive a bound on the safety violation probability predicted under the true but unknown hypothesis.
We consider two forms of this result, in the i.i.d. case and in the non-i.i.d. case, and conclude with open problems towards turning such results into practical AI guardrails.
arXiv Detail & Related papers (2024-08-09T18:10:42Z)
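The general shape of such a bound can be illustrated numerically. Under the assumption (mine, for illustration; the paper's precise statements differ by setting) that the true hypothesis retains posterior weight at least alpha, its predicted violation probability is at most the posterior-mixture risk divided by alpha:

```python
# Hypothetical numbers; only the final inequality is general.
posterior = {"h1": 0.7, "h2": 0.2, "h3": 0.1}         # P(h | data)
p_violation = {"h1": 0.001, "h2": 0.02, "h3": 0.05}   # P(violation | h, action)

# Posterior-mixture probability of violating the safety specification.
mixture = sum(posterior[h] * p_violation[h] for h in posterior)

# If the unknown true hypothesis has posterior weight >= alpha, then
#   alpha * P_true(violation) <= mixture, hence P_true(violation) <= mixture / alpha.
alpha = min(posterior.values())   # weight of the least plausible retained hypothesis
bound = min(1.0, mixture / alpha)

print(f"mixture risk = {mixture:.4f}, cautious bound = {bound:.4f}")
# A guardrail would reject the proposed action whenever `bound` exceeds a threshold.
```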
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know'
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent, i.e., the same problem posed in different ways.
The consistent reasoning paradox (CRP) asserts that consistent reasoning implies fallibility: in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Machine learning and information theory concepts towards an AI Mathematician
The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning.
This essay builds on the idea that current deep learning mostly succeeds at System 1 abilities.
It takes an information-theoretical posture to ask questions about what constitutes an interesting mathematical statement.
arXiv Detail & Related papers (2024-03-07T15:12:06Z)
- When Is Inductive Inference Possible?
We provide a tight characterization of inductive inference by establishing a novel link to online learning theory.
We prove that inductive inference is possible if and only if the hypothesis class is a countable union of online learnable classes.
Our main technical tool is a novel non-uniform online learning framework, which may be of independent interest.
arXiv Detail & Related papers (2023-11-30T20:02:25Z)
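For the positive direction, a countable hypothesis class can be learned by the classic enumerate-and-eliminate scheme. A Python sketch of that classic idea (not the paper's non-uniform framework) that converges to the target hypothesis after finitely many mistakes:

```python
# A countable hypothesis class (hypothetical example): h_k(x) = 1 iff k divides x.
def hypothesis(k):
    return lambda x: int(x % k == 0)

def learn(examples):
    """Enumerate-and-eliminate: always predict with the first enumerated
    hypothesis consistent with everything seen so far."""
    seen, k, mistakes = [], 1, 0
    for x, label in examples:
        if hypothesis(k)(x) != label:
            mistakes += 1                 # current guess refuted on this point
        seen.append((x, label))
        while any(hypothesis(k)(xi) != yi for xi, yi in seen):
            k += 1                        # move to the next hypothesis in the enumeration
    return k, mistakes

# Data labeled by the target h_3: the learner settles on k = 3 and,
# once there, never changes its mind or errs again.
target = hypothesis(3)
print(learn((x, target(x)) for x in range(1, 200)))  # -> (3, 2)
```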
- A criterion for Artificial General Intelligence: hypothetic-deductive reasoning, tested on ChatGPT
We argue that a key reasoning skill that any advanced AI, say GPT-4, should master in order to qualify as a 'thinking machine', or AGI, is hypothetic-deductive reasoning.
We propose simple tests for both types of reasoning, and apply them to ChatGPT.
arXiv Detail & Related papers (2023-08-05T20:33:13Z)
- TheoremQA: A Theorem-driven Question Answering dataset
TheoremQA is curated by domain experts and contains 800 high-quality questions covering 350 theorems.
GPT-4's ability to solve these problems is unparalleled, achieving an accuracy of 51% with Program-of-Thoughts Prompting.
arXiv Detail & Related papers (2023-05-21T17:51:35Z)
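Program-of-Thoughts prompting has the model emit a program that computes the answer, which is then executed. A minimal sketch of the pattern, with `query_llm` as a hypothetical stand-in for any LLM call:

```python
import math

POT_PROMPT = """Answer the question by writing a Python program that
computes the result and stores it in a variable named `ans`.

Question: {question}
Program:
"""

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; any chat-completion API could sit here.
    A canned response keeps this sketch self-contained and runnable."""
    return "ans = math.hypot(3, 4)  # sqrt(3**2 + 4**2), by the Pythagorean theorem"

def program_of_thoughts(question: str) -> float:
    code = query_llm(POT_PROMPT.format(question=question))
    scope = {"math": math}
    exec(code, scope)        # execute the generated program (trusted sandbox assumed)
    return scope["ans"]

print(program_of_thoughts(
    "A right triangle has legs 3 and 4. How long is the hypotenuse?"))  # 5.0
```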
- Non-ground Abductive Logic Programming with Probabilistic Integrity Constraints
In this paper, we consider a richer logic language, coping with probabilistic abduction with variables.
We first present the overall abductive language, and its semantics according to the Distribution Semantics.
We then introduce a proof procedure, obtained by extending one previously presented, and prove its soundness and completeness.
arXiv Detail & Related papers (2021-08-06T10:22:12Z)
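Under the Distribution Semantics, probabilistic facts are independent and a query's probability sums over the worlds that entail it, with integrity constraints ruling worlds out. A ground, propositional toy in Python (far simpler than the paper's non-ground proof procedure):

```python
from itertools import combinations

# Abducibles with independent probabilities (toy, hypothetical numbers).
abducibles = {"flu": 0.1, "covid": 0.05, "allergy": 0.2}

# The observation "fever" is explained by any world containing flu or covid.
def explains(world):
    return "flu" in world or "covid" in world

# Integrity constraint (toy): flu and covid are not abduced together.
def admissible(world):
    return not ({"flu", "covid"} <= set(world))

def probability(world):
    """P(exactly this set of abducibles holds), independence assumed."""
    p = 1.0
    for a, pa in abducibles.items():
        p *= pa if a in world else (1 - pa)
    return p

# Sum over all admissible worlds that explain the observation.
worlds = [c for r in range(len(abducibles) + 1)
          for c in combinations(abducibles, r)]
p_fever = sum(probability(w) for w in worlds if explains(w) and admissible(w))
print(f"P(fever) = {p_fever:.4f}")
```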
- Explaining AI as an Exploratory Process: The Peircean Abduction Model
Abductive inference has been defined in many ways.
The challenge of implementing abductive reasoning and the challenge of automating the explanation process are closely linked.
This analysis provides a theoretical framework for understanding what XAI researchers are already doing.
arXiv Detail & Related papers (2020-09-30T17:10:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.