Why AI is Harder Than We Think
- URL: http://arxiv.org/abs/2104.12871v2
- Date: Wed, 28 Apr 2021 15:51:25 GMT
- Title: Why AI is Harder Than We Think
- Authors: Melanie Mitchell
- Abstract summary: The field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment.
I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field.
I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since its beginning in the 1950s, the field of artificial intelligence has
cycled several times between periods of optimistic predictions and massive
investment ("AI spring") and periods of disappointment, loss of confidence, and
reduced funding ("AI winter"). Even with today's seemingly fast pace of AI
breakthroughs, the development of long-promised technologies such as
self-driving cars, housekeeping robots, and conversational companions has
turned out to be much harder than many people expected. One reason for these
repeating cycles is our limited understanding of the nature and complexity of
intelligence itself. In this paper I describe four fallacies in common
assumptions made by AI researchers, which can lead to overconfident predictions
about the field. I conclude by discussing the open questions spurred by these
fallacies, including the age-old challenge of imbuing machines with humanlike
common sense.
Related papers
- Five questions and answers about artificial intelligence [0.0]
Rapid advances in Artificial Intelligence (AI) are generating much controversy in society.
This paper seeks to contribute to the dissemination of knowledge about AI.
arXiv Detail & Related papers (2024-09-24T09:19:55Z)
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent.
CRP asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- What Do People Think about Sentient AI? [0.0]
We present the first nationally representative survey data on the topic of sentient AI.
Across one wave of data collection in 2021 and two in 2023, we found that mind perception and moral concern for AI well-being were higher than predicted.
We argue that, whether or not AIs become sentient, the discussion itself may overhaul human-computer interaction.
arXiv Detail & Related papers (2024-07-11T21:04:39Z)
- Artificial intelligence adoption in the physical sciences, natural sciences, life sciences, social sciences and the arts and humanities: A bibliometric analysis of research publications from 1960-2021 [73.06361680847708]
In 1960, 14% of 333 research fields were related to AI (many in computer science), but this increased to over half of all research fields by 1972, over 80% by 1986, and over 98% in current times.
We conclude that the context of the current surge appears different, and that interdisciplinary AI application is likely to be sustained.
arXiv Detail & Related papers (2023-06-15T14:08:07Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of fairness in AI can lead to the deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interaction with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- A brief history of AI: how to prevent another winter (a critical review) [0.6299766708197883]
We provide a brief rundown of AI's evolution over the course of decades, highlighting its crucial moments and major turning points from inception to the present.
In doing so, we attempt to learn from the past, anticipate the future, and discuss what steps may be taken to prevent another 'winter'.
arXiv Detail & Related papers (2021-09-03T13:41:46Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things [98.10037444792444]
We show how AI can empower the IoT to make it faster, smarter, greener, and safer.
First, we present progress in AI research for IoT from four perspectives: perceiving, learning, reasoning, and behaving.
Finally, we summarize some promising applications of AIoT that are likely to profoundly reshape our world.
arXiv Detail & Related papers (2020-11-17T13:14:28Z)
- A clarification of misconceptions, myths and desired status of artificial intelligence [0.0]
We present a perspective on the desired and current status of AI in relation to machine learning and statistics.
Our discussion is intended to lift the veil of vagueness surrounding AI and reveal its true countenance.
arXiv Detail & Related papers (2020-08-03T17:22:53Z)
- Artificial Intelligence is stupid and causal reasoning won't fix it [0.0]
The key, Judea Pearl suggests, is to replace reasoning by association with causal reasoning.
It is not so much that AI machinery cannot grasp causality, but that AI machinery - qua computation - cannot understand anything at all.
arXiv Detail & Related papers (2020-07-20T22:23:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.