The Metaphysics We Train: A Heideggerian Reading of Machine Learning
- URL: http://arxiv.org/abs/2602.19028v2
- Date: Tue, 24 Feb 2026 13:11:53 GMT
- Title: The Metaphysics We Train: A Heideggerian Reading of Machine Learning
- Authors: Heman Shakeri
- Abstract summary: We argue that this philosophical lens reveals three insights invisible to purely technical analysis. First, the algorithmic Entwurf (projection) is distinctive in being automated, opaque, and emergent. Second, even sophisticated technical advances remain within the regime of Gestell (Enframing). Third, AI's lack of existential structure, specifically the absence of Care (Sorge), is genuinely explanatory.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper offers a phenomenological reading of contemporary machine learning through Heideggerian concepts, aimed at enriching practitioners' reflexive understanding of their own practice. We argue that this philosophical lens reveals three insights invisible to purely technical analysis. First, the algorithmic Entwurf (projection) is distinctive in being automated, opaque, and emergent--a metaphysics that operates without explicit articulation or debate, crystallizing implicitly through gradient descent rather than theoretical argument. Second, even sophisticated technical advances remain within the regime of Gestell (Enframing), improving calculation without questioning the primacy of calculation itself. Third, AI's lack of existential structure, specifically the absence of Care (Sorge), is genuinely explanatory: it illuminates why AI systems have no internal resources for questioning their own optimization imperatives, and why they optimize without the anxiety (Angst) that signals, in human agents, the friction between calculative absorption and authentic existence. We conclude by exploring the pedagogical value of this perspective, arguing that data science education should cultivate not only technical competence but ontological literacy--the capacity to recognize what worldviews our tools enact and when calculation itself may be the wrong mode of engagement.
Related papers
- The Vibe-Automation of Automation: A Proactive Education Framework for Computer Science in the Age of Generative AI [0.7252027234425333]
Generative artificial intelligence (GenAI) represents a qualitative shift in computer science. GenAI operates by navigating contextual and semantic coherence rather than optimizing predefined objective metrics. The paper proposes a conceptual framework structured across three analytical levels and three domains of action.
arXiv Detail & Related papers (2026-02-09T06:02:04Z) - What Understanding Means in AI-Laden Astronomy [0.20336617819227906]
Artificial intelligence is rapidly transforming astronomical research. This article argues that philosophy of science offers essential tools for navigating AI's integration into astronomy. We propose "pragmatic understanding" as a framework for integration, recognizing AI as a tool that extends human cognition.
arXiv Detail & Related papers (2026-01-15T03:28:38Z) - Foundations of Artificial Intelligence Frameworks: Notion and Limits of AGI [0.0]
We argue that artificial general intelligence cannot emerge from current neural network paradigms regardless of scale. We propose a framework distinguishing existential facilities (computational substrate) from architectural organization.
arXiv Detail & Related papers (2025-11-23T16:18:13Z) - Aligning Perception, Reasoning, Modeling and Interaction: A Survey on Physical AI [57.44526951497041]
We advocate for intelligent systems that ground learning in both physical principles and embodied reasoning processes. Our synthesis envisions next-generation world models capable of explaining physical phenomena and predicting future states.
arXiv Detail & Related papers (2025-10-06T16:16:03Z) - Beyond Statistical Learning: Exact Learning Is Essential for General Intelligence [59.07578850674114]
Sound deductive reasoning is an indisputably desirable aspect of general intelligence. It is well documented that even the most advanced frontier systems regularly and consistently falter on easily solvable reasoning tasks. We argue that their unsound behavior is a consequence of the statistical learning approach powering their development.
arXiv Detail & Related papers (2025-06-30T14:37:50Z) - Digital Gene: Learning about the Physical World through Analytic Concepts [54.21005370169846]
AI systems still struggle when it comes to understanding and interacting with the physical world. This research introduces the idea of analytic concepts, which provide machine intelligence a portal to perceive, reason about, and interact with the physical world.
arXiv Detail & Related papers (2025-04-05T13:22:11Z) - Brain-inspired Computational Intelligence via Predictive Coding [73.42407863671565]
Predictive coding (PC) has shown promising properties that make it potentially valuable to the machine learning community. PC-like algorithms are beginning to appear in multiple sub-fields of machine learning and AI at large.
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - Is it possible not to cheat on the Turing Test: Exploring the potential and challenges for true natural language 'understanding' by computers [0.0]
The area of natural language understanding in artificial intelligence claims to have been making great strides.
A comprehensive, interdisciplinary overview of current approaches and remaining challenges is yet to be carried out.
I unite all of these perspectives to unpack the challenges involved in reaching true (human-like) language understanding.
arXiv Detail & Related papers (2022-06-29T14:19:48Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z) - A Mathematical Framework for Consciousness in Neural Networks [0.0]
This paper presents a novel mathematical framework for bridging the explanatory gap between consciousness and its physical correlates. We do not claim that qualia are singularities or that singularities "explain" why qualia feel as they do. We establish a framework that recognizes qualia as phenomena inherently beyond reduction to complexity, computation, or information.
arXiv Detail & Related papers (2017-04-04T18:32:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.