Impossibility Results in AI: A Survey
- URL: http://arxiv.org/abs/2109.00484v1
- Date: Wed, 1 Sep 2021 16:52:13 GMT
- Title: Impossibility Results in AI: A Survey
- Authors: Mario Brcic and Roman V. Yampolskiy
- Abstract summary: An impossibility theorem demonstrates that a particular problem or set of problems cannot be solved as described in the claim.
We have categorized impossibility theorems applicable to the domain of AI into five categories: deduction, indistinguishability, induction, tradeoffs, and intractability.
We conclude that deductive impossibilities deny 100%-guarantees for security.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An impossibility theorem demonstrates that a particular problem or set of
problems cannot be solved as described in the claim. Such theorems put limits
on what is possible in artificial intelligence, especially concerning
superintelligent AI. As such, these results serve as guidelines, reminders,
and warnings to AI safety, AI policy, and governance researchers. They may
also enable solutions to some long-standing questions by formalizing theories
within a constraint-satisfaction framework without committing to any single
option. In this paper, we have categorized impossibility theorems applicable to
the domain of AI into five categories: deduction, indistinguishability,
induction, tradeoffs, and intractability. We found that certain theorems are
too specific or carry implicit assumptions that limit their application. We also
added a new result (theorem) on the unfairness of explainability, the first
explainability-related result in the induction category. We concluded that
deductive impossibilities deny 100% guarantees for security. Finally, we
offer some ideas that hold potential in explainability, controllability, value
alignment, ethics, and group decision-making; they can be deepened by further
investigation.
Related papers
- Machine learning and information theory concepts towards an AI
Mathematician [77.63761356203105]
The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning.
This essay builds on the idea that current deep learning mostly succeeds at System 1 abilities.
It takes an information-theoretical posture to ask questions about what constitutes an interesting mathematical statement.
arXiv Detail & Related papers (2024-03-07T15:12:06Z) - Betting on what is neither verifiable nor falsifiable [18.688474183114085]
We propose an approach to betting on such events via options, or equivalently as bets on the outcome of a "verification-falsification game."
Our work thus acts as an alternative to the existing framework of Garrabrant induction for logical uncertainty, and relates to the stance known as constructivism in the philosophy of mathematics.
arXiv Detail & Related papers (2024-01-29T17:30:34Z) - A criterion for Artificial General Intelligence: hypothetic-deductive
reasoning, tested on ChatGPT [0.0]
We argue that a key reasoning skill that any advanced AI, say GPT-4, should master in order to qualify as a 'thinking machine', or AGI, is hypothetic-deductive reasoning.
We propose simple tests for both types of reasoning, and apply them to ChatGPT.
arXiv Detail & Related papers (2023-08-05T20:33:13Z) - TheoremQA: A Theorem-driven Question Answering dataset [100.39878559382694]
TheoremQA, curated by domain experts, contains 800 high-quality questions covering 350 theorems.
Among the evaluated models, GPT-4 performs best, achieving an accuracy of 51% with Program-of-Thoughts prompting.
arXiv Detail & Related papers (2023-05-21T17:51:35Z) - Epistemic considerations when AI answers questions for us [0.0]
We argue that careless reliance on AI to answer our questions and to judge our output is a violation of Grice's Maxim of Quality and Lemoine's legal Maxim of Innocence.
The focus on the output and results of AI-generated and AI-evaluated content misses, apart from giving proper credit, the demand to follow a person's thought process.
arXiv Detail & Related papers (2023-04-23T08:26:42Z) - Non-ground Abductive Logic Programming with Probabilistic Integrity
Constraints [2.624902795082451]
In this paper, we consider a richer logic language, coping with probabilistic abduction with variables.
We first present the overall abductive language, and its semantics according to the Distribution Semantics.
We then introduce a proof procedure, obtained by extending one previously presented, and prove its soundness and completeness.
arXiv Detail & Related papers (2021-08-06T10:22:12Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups, people with and without an AI background, perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The aim of clarifying these particular principles is that they could help us build AI systems that benefit from human abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z) - Explaining AI as an Exploratory Process: The Peircean Abduction Model [0.2676349883103404]
Abductive inference has been defined in many ways.
The challenge of implementing abductive reasoning and the challenge of automating the explanation process are closely linked.
This analysis provides a theoretical framework for understanding what XAI researchers are already doing.
arXiv Detail & Related papers (2020-09-30T17:10:37Z) - Learning to Prove Theorems by Learning to Generate Theorems [71.46963489866596]
We learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover.
Experiments on real-world tasks demonstrate that synthetic data from our approach improves the theorem prover.
arXiv Detail & Related papers (2020-02-17T16:06:02Z) - Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.