Reflections on "Can AI Understand Our Universe?"
- URL: http://arxiv.org/abs/2501.17507v1
- Date: Wed, 29 Jan 2025 09:24:47 GMT
- Title: Reflections on "Can AI Understand Our Universe?"
- Authors: Yu Wang,
- Abstract summary: It focuses on two concepts of understanding: intuition and causality, and highlights three AI technologies: Transformers, chain-of-thought reasoning, and multimodal processing. We anticipate that in principle AI could form understanding, with these technologies representing promising advancements.
- Score: 3.19428095493284
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article briefly discusses the philosophical and technical aspects of AI. It focuses on two concepts of understanding: intuition and causality, and highlights three AI technologies: Transformers, chain-of-thought reasoning, and multimodal processing. We anticipate that in principle AI could form understanding, with these technologies representing promising advancements.
Related papers
- Ethics through the Facets of Artificial Intelligence [0.0]
We argue that concerns stem from a blurred understanding of AI, how it can be used, and how it has been interpreted in society. We propose a framework for the ethical assessment of the use of AI.
arXiv Detail & Related papers (2025-07-22T21:21:37Z) - Explainable AI: the Latest Advancements and New Trends [0.0]
The concept of trustworthiness is cross-disciplinary; it must meet societal standards and principles. We elaborate on the strong link between the explainability of AI and the meta-reasoning of autonomous systems. The integration of approaches could pave the way for future interpretable AI systems.
arXiv Detail & Related papers (2025-05-11T15:01:12Z) - Aligning Generalisation Between Humans and Machines [74.120848518198]
AI technology can support humans in scientific discovery and decision-making, but may also disrupt democracies and target individuals. The responsible use of AI and its participation in human-AI teams increasingly show the need for AI alignment. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z) - Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Five questions and answers about artificial intelligence [0.0]
Rapid advances in Artificial Intelligence (AI) are generating much controversy in society.
This paper seeks to contribute to the dissemination of knowledge about AI.
arXiv Detail & Related papers (2024-09-24T09:19:55Z) - Making AI Intelligible: Philosophical Foundations [0.0]
'Making AI Intelligible' shows that philosophical work on the metaphysics of meaning can help answer these questions.
The questions addressed in the book are not only theoretically interesting; their answers also have pressing practical implications.
arXiv Detail & Related papers (2024-06-12T12:25:04Z) - Concept Alignment [10.285482205152729]
We argue that before we can attempt to align values, it is imperative that AI systems and humans align the concepts they use to understand the world.
We integrate ideas from philosophy, cognitive science, and deep learning to explain the need for concept alignment.
arXiv Detail & Related papers (2024-01-09T23:32:18Z) - Artificial Intelligence: 70 Years Down the Road [4.952211615828121]
We have analyzed, from both technical and philosophical perspectives, the reasons behind the past failures and current successes of AI.
We have concluded that the sustainable development direction of AI should be human-machine collaboration and a technology path centered on computing power.
arXiv Detail & Related papers (2023-03-06T01:19:25Z) - Unpacking the "Black Box" of AI in Education [0.0]
We seek to clarify what "AI" is and the potential it holds to both advance and hamper educational opportunities that may improve the human condition.
We offer a basic introduction to different methods and philosophies underpinning AI, discuss recent advances, explore applications to education, and highlight key limitations and risks.
Our hope is to make often jargon-laden terms and concepts accessible, so that all are equipped to understand, interrogate, and ultimately shape the development of human-centered AI in education.
arXiv Detail & Related papers (2022-12-31T18:27:21Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Mutual Theory of Mind for Human-AI Communication [5.969858080492586]
New developments are enabling AI systems to perceive, recognize, and respond with social cues based on humans' explicit or implicit behavioral and verbal cues.
These AI systems are currently serving as matchmakers on dating platforms, assisting student learning as teaching assistants, and enhancing productivity as work partners.
We propose the Mutual Theory of Mind (MToM) framework, inspired by our capability of ToM in human-human communications, to guide this new generation of HAI research.
arXiv Detail & Related papers (2022-10-07T22:46:04Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things [98.10037444792444]
We show how AI can empower the IoT to make it faster, smarter, greener, and safer.
First, we present progress in AI research for IoT from four perspectives: perceiving, learning, reasoning, and behaving.
Finally, we summarize some promising applications of AIoT that are likely to profoundly reshape our world.
arXiv Detail & Related papers (2020-11-17T13:14:28Z) - The Short Anthropological Guide to the Study of Ethical AI [91.3755431537592]
This short guide serves as both an introduction to AI ethics and an introduction to social science and anthropological perspectives on the development of AI.
It aims to give those unfamiliar with the field insight into the societal impact of AI systems and how, in turn, these systems can lead us to rethink how our world operates.
arXiv Detail & Related papers (2020-10-07T12:25:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.