Making AI Intelligible: Philosophical Foundations
- URL: http://arxiv.org/abs/2406.08134v1
- Date: Wed, 12 Jun 2024 12:25:04 GMT
- Title: Making AI Intelligible: Philosophical Foundations
- Authors: Herman Cappelen, Josh Dever
- Abstract summary: 'Making AI Intelligible' shows that philosophical work on the metaphysics of meaning can help answer these questions.
The questions addressed in the book are not only theoretically interesting, but the answers have pressing practical implications.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Can humans and artificial intelligences share concepts and communicate? 'Making AI Intelligible' shows that philosophical work on the metaphysics of meaning can help answer these questions. Herman Cappelen and Josh Dever use the externalist tradition in philosophy to create models of how AIs and humans can understand each other. In doing so, they illustrate ways in which that philosophical tradition can be improved. The questions addressed in the book are not only theoretically interesting, but the answers have pressing practical implications. Many important decisions about human life are now influenced by AI. In giving that power to AI, we presuppose that AIs can track features of the world that we care about (for example, creditworthiness, recidivism, cancer, and combatants). If AIs can share our concepts, that will go some way towards justifying this reliance on AI. This ground-breaking study offers insight into how to take some first steps towards achieving Interpretable AI.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent, i.e., the same problem posed in different ways.
The consistent reasoning paradox (CRP) asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Bootstrapping Developmental AIs: From Simple Competences to Intelligent Human-Compatible AIs [0.0]
The mainstream AI approaches are the generative and deep learning approaches with large language models (LLMs) and the manually constructed symbolic approach.
This position paper lays out the prospects, gaps, and challenges for extending the practice of developmental AIs to create resilient, intelligent, and human-compatible AIs.
arXiv Detail & Related papers (2023-08-08T21:14:21Z)
- A method for the ethical analysis of brain-inspired AI [0.8431877864777444]
This article examines some conceptual, technical, and ethical issues raised by the development and use of brain-inspired AI.
The aim of the paper is to introduce a method that can be applied to identify and address the ethical issues arising from brain-inspired AI.
arXiv Detail & Related papers (2023-05-18T12:56:27Z)
- Unpacking the "Black Box" of AI in Education [0.0]
We seek to clarify what "AI" is and the potential it holds to both advance and hamper educational opportunities that may improve the human condition.
We offer a basic introduction to different methods and philosophies underpinning AI, discuss recent advances, explore applications to education, and highlight key limitations and risks.
Our hope is to make often jargon-laden terms and concepts accessible, so that all are equipped to understand, interrogate, and ultimately shape the development of human centered AI in education.
arXiv Detail & Related papers (2022-12-31T18:27:21Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow the examination and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence [0.0]
We believe that AI is a helper, not a ruler of humans.
Computer vision has been central to the development of AI.
Emotions are central to human intelligence, but they have seen little use in AI.
arXiv Detail & Related papers (2022-01-05T06:00:22Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without an AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Expected Utilitarianism [0.0]
We want artificial intelligence (AI) to be beneficial. This is the grounding assumption of most of the attitudes towards AI research.
We want AI to help, not hinder, humans. Yet what exactly this entails in theory and in practice is not immediately apparent.
Two conclusions arise from this. First, if one believes that a beneficial AI is an ethical AI, then one is committed to a framework that posits that 'benefit' is tantamount to the greatest good for the greatest number.
Second, if the AI relies on reinforcement learning (RL), then the way it reasons about itself, the environment, and other agents...
arXiv Detail & Related papers (2020-07-19T15:44:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.