The Hermeneutic Turn of AI: Are Machines Capable of Interpreting?
- URL: http://arxiv.org/abs/2411.12517v2
- Date: Thu, 28 Nov 2024 09:24:06 GMT
- Title: The Hermeneutic Turn of AI: Are Machines Capable of Interpreting?
- Authors: Remy Demichelis
- Abstract summary: This article aims to demonstrate how the approach to computing is being disrupted by deep learning (artificial neural networks). It also addresses the philosophical tradition of hermeneutics to highlight a parallel with this movement and to demystify the idea of human-like AI.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This article aims to demonstrate how the approach to computing is being disrupted by deep learning (artificial neural networks), not only in terms of techniques but also in our interactions with machines. It also addresses the philosophical tradition of hermeneutics (Don Ihde, Wilhelm Dilthey) to highlight a parallel with this movement and to demystify the idea of human-like AI.
Related papers
- What Does 'Human-Centred AI' Mean?
AI is usefully seen as a relationship between technology and humans. All AI implicates human cognition, no matter what. To even begin to de-fetishise AI, we must look the human-in-the-loop in the eyes.
arXiv Detail & Related papers (2025-07-26T14:18:52Z)
- Reflections on "Can AI Understand Our Universe?"
It focuses on two concepts of understanding: intuition and causality, and highlights three AI technologies: Transformers, chain-of-thought reasoning, and multimodal processing.
We anticipate that, in principle, AI could form understanding, and these technologies represent promising advances.
arXiv Detail & Related papers (2025-01-29T09:24:47Z)
- Explaining Explaining
Explanation is key to people having confidence in high-stakes AI systems.
Machine-learning-based systems can't explain because they are usually black boxes.
We describe a hybrid approach to developing cognitive agents.
arXiv Detail & Related papers (2024-09-26T16:55:44Z)
- Making AI Intelligible: Philosophical Foundations
'Making AI Intelligible' shows that philosophical work on the metaphysics of meaning can help answer these questions.
The questions addressed in the book are not only theoretically interesting; the answers also have pressing practical implications.
arXiv Detail & Related papers (2024-06-12T12:25:04Z)
- Position: An Inner Interpretability Framework for AI Inspired by Lessons from Cognitive Neuroscience
Inner Interpretability is a promising field tasked with uncovering the inner mechanisms of AI systems.
Recent critiques raise issues that question its usefulness for advancing the broader goals of AI.
Here we draw the relevant connections and highlight lessons that can be transferred productively between fields.
arXiv Detail & Related papers (2024-06-03T14:16:56Z)
- A Review on Objective-Driven Artificial Intelligence
Humans have an innate ability to understand context, nuances, and subtle cues in communication.
Humans possess a vast repository of common-sense knowledge that helps us make logical inferences and predictions about the world.
Machines lack this innate understanding and often struggle with making sense of situations that humans find trivial.
arXiv Detail & Related papers (2023-08-20T02:07:42Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2)
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Crossing the Tepper Line: An Emerging Ontology for Describing the Dynamic Sociality of Embodied AI
We show how embodied AI can manifest as "socially embodied AI".
We define this as the state that embodied AI "circumstantially" takes on within interactive contexts when people perceive it as both social and agentic.
arXiv Detail & Related papers (2021-03-15T00:45:44Z)
- Teach me to play, gamer! Imitative learning in computer games via linguistic description of complex phenomena and decision tree
We present a new imitation-based machine learning model built on linguistic descriptions of complex phenomena.
The method can be a good alternative for designing and implementing the behaviour of intelligent agents in video game development.
arXiv Detail & Related papers (2021-01-06T21:14:10Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective in clarifying these particular principles is that they could help us build AI systems that benefit from human abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Machine Common Sense
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- A general framework for scientifically inspired explanations in AI
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.