Explainable Artificial Intelligence and Machine Learning: A reality
rooted perspective
- URL: http://arxiv.org/abs/2001.09464v1
- Date: Sun, 26 Jan 2020 15:09:45 GMT
- Title: Explainable Artificial Intelligence and Machine Learning: A reality
rooted perspective
- Authors: Frank Emmert-Streib, Olli Yli-Harja, and Matthias Dehmer
- Abstract summary: We provide a discussion of what explainable AI can be.
We do not present wishful thinking but reality-grounded properties in relation to a scientific theory beyond physics.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We are used to the availability of big data generated in nearly all fields of
science as a consequence of technological progress. However, the analysis of
such data poses vast challenges. One of these relates to the explainability
of artificial intelligence (AI) or machine learning methods. Currently, many
such methods are non-transparent with respect to their working mechanism and
for this reason are called black box models; this applies most notably to deep
learning methods. However, it has been realized that this poses severe problems
for a number of fields, including the health sciences and criminal justice, and
arguments have been brought forward in favor of an explainable AI. In this
paper, we do not assume the usual perspective of presenting explainable AI as it
should be; rather, we provide a discussion of what explainable AI can be. The
difference is that we do not present wishful thinking but reality-grounded
properties in relation to a scientific theory beyond physics.
Related papers
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- A Review on Objective-Driven Artificial Intelligence [0.0]
Humans have an innate ability to understand context, nuances, and subtle cues in communication.
Humans possess a vast repository of common-sense knowledge that helps us make logical inferences and predictions about the world.
Machines lack this innate understanding and often struggle with making sense of situations that humans find trivial.
arXiv Detail & Related papers (2023-08-20T02:07:42Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-07-17T12:14:14Z)
- Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems [268.585904751315]
AI for science (AI4Science) is a new area of research.
Areas aim at understanding the physical world from subatomic (wavefunctions and electron density), atomic (molecules, proteins, materials, and interactions), to macro (fluids, climate, and subsurface) scales.
Key common challenge is how to capture physics first principles, especially symmetries, in natural systems by deep learning methods.
arXiv Detail & Related papers (2023-07-03T19:10:45Z)
- Reliable AI: Does the Next Generation Require Quantum Computing? [71.84486326350338]
We show that digital hardware is inherently constrained in solving problems in optimization, deep learning, and differential equations.
In contrast, analog computing models, such as the Blum-Shub-Smale machine, exhibit the potential to surmount these limitations.
arXiv Detail & Related papers (2022-07-19T16:20:37Z)
- Alterfactual Explanations -- The Relevance of Irrelevance for Explaining AI Systems [0.9542023122304099]
We argue that in order to fully understand a decision, not only knowledge about relevant features is needed, but that the awareness of irrelevant information also highly contributes to the creation of a user's mental model of an AI system.
Our approach, which we call Alterfactual Explanations, is based on showing an alternative reality where irrelevant features of an AI's input are altered.
We show that alterfactual explanations are suited to convey an understanding of different aspects of the AI's reasoning than established counterfactual explanation methods.
arXiv Detail & Related papers (2022-01-05T06:00:22Z)
- Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence [0.0]
We believe that AI is a helper, not a ruler of humans.
Computer vision has been central to the development of AI.
Emotions are central to human intelligence, but little use has been made of them in AI.
arXiv Detail & Related papers (2021-07-12T08:42:19Z)
- Explainable AI: current status and future directions [11.92436948211501]
Explainable Artificial Intelligence (XAI) is an emerging area of research in the field of Artificial Intelligence (AI)
XAI can explain how AI obtained a particular solution and can also answer other "wh" questions.
This paper provides an overview of these techniques from a multimedia (i.e., text, image, audio, and video) point of view.
arXiv Detail & Related papers (2021-06-10T05:21:10Z)
- Brittle AI, Causal Confusion, and Bad Mental Models: Challenges and Successes in the XAI Program [17.52385105997044]
Deep neural network driven models have surpassed human level performance in benchmark autonomy tasks.
The underlying policies for these agents, however, are not easily interpretable.
This paper discusses the origins of these takeaways, provides amplifying information, and offers suggestions for future work.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-17T13:14:28Z)
- Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things [98.10037444792444]
We show how AI can empower the IoT to make it faster, smarter, greener, and safer.
First, we present progress in AI research for IoT from four perspectives: perceiving, learning, reasoning, and behaving.
Finally, we summarize some promising applications of AIoT that are likely to profoundly reshape our world.
arXiv Detail & Related papers (2020-11-17T13:14:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.