Artificial Intelligence is stupid and causal reasoning won't fix it
- URL: http://arxiv.org/abs/2008.07371v1
- Date: Mon, 20 Jul 2020 22:23:50 GMT
- Title: Artificial Intelligence is stupid and causal reasoning won't fix it
- Authors: John Mark Bishop
- Abstract summary: The key, Judea Pearl suggests, is to replace reasoning by association with causal reasoning.
It is not so much that AI machinery cannot grasp causality, but that AI machinery - qua computation - cannot understand anything at all.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Neural Networks have reached Grandmaster and even super-human
performance across a variety of games: from those involving perfect-information
(such as Go) to those involving imperfect-information (such as StarCraft). Such
technological developments from AI labs have ushered in concomitant applications
across the world of business - where an AI brand tag is fast becoming
ubiquitous. A corollary of such widespread commercial deployment is that when
AI gets things wrong - an autonomous vehicle crashes; a chatbot exhibits racist
behaviour; automated credit scoring processes discriminate on gender, etc. -
there are often significant financial, legal and brand consequences and the
incident becomes major news. As Judea Pearl sees it, the underlying reason for
such mistakes is that, 'all the impressive achievements of deep learning amount
to just curve fitting'. The key, Judea Pearl suggests, is to replace reasoning
by association with causal reasoning - the ability to infer causes from
observed phenomena. It is a point that was echoed by Gary Marcus and Ernest
Davis in a recent piece for the New York Times: 'we need to stop building
computer systems that merely get better and better at detecting statistical
patterns in data sets - often using an approach known as Deep Learning - and
start building computer systems that from the moment of their assembly innately
grasp three basic concepts: time, space and causality'. In this paper,
foregrounding what in 1949 Gilbert Ryle termed a category mistake, I will offer
an alternative explanation for AI errors: it is not so much that AI machinery
cannot grasp causality, but that AI machinery - qua computation - cannot
understand anything at all.
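To make Pearl's distinction concrete, here is a minimal sketch (an illustration of the distinction, not code from the paper): in a hypothetical model where a hidden factor Z causes both X and Y, the observed association P(Y=1 | X=1) is strong even though the intervention P(Y=1 | do(X=1)) reveals that X has no causal effect on Y. The model and its probabilities are assumptions for illustration only.

```python
# A minimal sketch (illustrative, not from the paper) of Pearl's
# distinction, using a hypothetical confounded model:
#   Z -> X, Z -> Y   (Z is a hidden common cause; X never affects Y).
import random

random.seed(0)

def sample(do_x=None):
    z = random.random() < 0.5           # hidden confounder
    x = z if do_x is None else do_x     # X copies Z unless we intervene
    y = z                               # Y is caused only by Z, never by X
    return x, y

def p_y_given_x1(n=100_000):
    """Observational query P(Y=1 | X=1): reasoning by association."""
    ys = [y for x, y in (sample() for _ in range(n)) if x]
    return sum(ys) / len(ys)

def p_y_do_x1(n=100_000):
    """Interventional query P(Y=1 | do(X=1)): causal reasoning."""
    ys = [y for _, y in (sample(do_x=True) for _ in range(n))]
    return sum(ys) / len(ys)

print(f"P(Y=1 | X=1)     ~ {p_y_given_x1():.2f}")   # ~1.00: strong association
print(f"P(Y=1 | do(X=1)) ~ {p_y_do_x1():.2f}")      # ~0.50: no causal effect
```

The first query is what curve fitting recovers from observational data; only the second, interventional query separates correlation from causation.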
Related papers
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent -- that is, differently phrased versions of the same problem.
The consistent reasoning paradox (CRP) asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Reliable AI: Does the Next Generation Require Quantum Computing? [71.84486326350338]
We show that digital hardware is inherently constrained in solving problems in optimization, deep learning, and differential equations.
In contrast, analog computing models, such as the Blum-Shub-Smale machine, exhibit the potential to surmount these limitations.
arXiv Detail & Related papers (2023-07-03T19:10:45Z)
- Understanding Natural Language Understanding Systems. A Critical Analysis [91.81211519327161]
The development of machines that «talk like us», also known as Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI).
But never has the trust that we can build «talking machines» been stronger than the one engendered by the last generation of NLU systems.
Are we at the dawn of a new era, in which the Grail is finally closer to us?
arXiv Detail & Related papers (2023-03-01T08:32:55Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Hybrid Intelligence [4.508830262248694]
We argue that the most likely paradigm for the division of labor between humans and machines in the next decades is Hybrid Intelligence.
This concept aims to combine the complementary strengths of human intelligence and AI, so that together they can perform better than either could separately.
arXiv Detail & Related papers (2021-05-03T08:56:09Z)
- Why AI is Harder Than We Think [0.0]
The field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment and periods of disappointment and reduced funding.
I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field.
I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.
arXiv Detail & Related papers (2021-04-26T20:39:18Z)
- A clarification of misconceptions, myths and desired status of artificial intelligence [0.0]
We present a perspective on the desired and current status of AI in relation to machine learning and statistics.
Our discussion is intended to lift the veil of vagueness surrounding AI to see its true countenance.
arXiv Detail & Related papers (2020-08-03T17:22:53Z)
- Towards AI Forensics: Did the Artificial Intelligence System Do It? [2.5991265608180396]
We focus on AI that is potentially "malicious by design" and on grey-box analysis.
Our evaluation using convolutional neural networks illustrates challenges and ideas for identifying malicious AI.
arXiv Detail & Related papers (2020-05-27T20:28:19Z)
- On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems [62.997667081978825]
We present a formal framework for assessing and analyzing two classes of malevolent action towards generic Artificial Intelligence (AI) systems.
The first class involves adversarial examples and concerns the introduction of small perturbations of the input data that cause misclassification (see the sketch after this list).
The second class, introduced here for the first time and named stealth attacks, involves small perturbations to the AI system itself.
arXiv Detail & Related papers (2020-04-09T10:56:53Z)
- Explainable Artificial Intelligence and Machine Learning: A reality rooted perspective [0.0]
We provide a discussion of what explainable AI can be.
We do not present wishful thinking but reality-grounded properties in relation to a scientific theory beyond physics.
arXiv Detail & Related papers (2020-01-26T15:09:45Z)
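The adversarial-examples entry above describes small input perturbations that cause misclassification. As a companion, here is a minimal FGSM-style sketch (in the spirit of the fast gradient sign method, not code from that paper), using a toy hand-coded logistic-regression classifier whose weights, test point, and perturbation budget are all illustrative assumptions.

```python
# A minimal sketch (illustrative assumptions throughout) of the first
# attack class: an FGSM-style adversarial example against a toy model.
import numpy as np

# A "trained" linear classifier: predict class 1 if sigmoid(w.x + b) > 0.5.
w = np.array([2.0, -1.0])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm(x, y_true, eps):
    """Fast gradient sign method: perturb x by eps in the direction that
    increases the loss, i.e. x_adv = x + eps * sign(dL/dx)."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w     # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

x = np.array([0.3, 0.1])               # correctly classified as class 1
x_adv = fgsm(x, y_true=1, eps=0.5)

print(predict(x), predict(x_adv))      # 1 -> 0: the small shift flips the label
print(np.abs(x_adv - x).max())         # perturbation is bounded by eps
```

The perturbation is bounded per coordinate by eps, yet because it is aimed along the loss gradient, a small change can flip the predicted class.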