Explainability Through Systematicity: The Hard Systematicity Challenge for Artificial Intelligence
- URL: http://arxiv.org/abs/2507.22197v1
- Date: Tue, 29 Jul 2025 19:50:21 GMT
- Title: Explainability Through Systematicity: The Hard Systematicity Challenge for Artificial Intelligence
- Authors: Matthieu Queloz
- Abstract summary: This paper argues that explainability is only one facet of a broader ideal that shapes our expectations towards artificial intelligence (AI). I offer a conceptual framework for thinking about "the systematicity of thought" that distinguishes four senses of the phrase. To determine whether we have reason to hold AI models to this ideal of systematicity, I then argue, we must look to the rationales for systematization.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper argues that explainability is only one facet of a broader ideal that shapes our expectations towards artificial intelligence (AI). Fundamentally, the issue is to what extent AI exhibits systematicity--not merely in being sensitive to how thoughts are composed of recombinable constituents, but in striving towards an integrated body of thought that is consistent, coherent, comprehensive, and parsimoniously principled. This richer conception of systematicity has been obscured by the long shadow of the "systematicity challenge" to connectionism, according to which network architectures are fundamentally at odds with what Fodor and colleagues termed "the systematicity of thought." I offer a conceptual framework for thinking about "the systematicity of thought" that distinguishes four senses of the phrase. I use these distinctions to defuse the perceived tension between systematicity and connectionism and show that the conception of systematicity that historically shaped our sense of what makes thought rational, authoritative, and scientific is more demanding than the Fodorian notion. To determine whether we have reason to hold AI models to this ideal of systematicity, I then argue, we must look to the rationales for systematization and explore to what extent they transfer to AI models. I identify five such rationales and apply them to AI. This brings into view the "hard systematicity challenge." However, the demand for systematization itself needs to be regulated by the rationales for systematization. This yields a dynamic understanding of the need to systematize thought, which tells us how systematic we need AI models to be and when.
Related papers
- Beyond Statistical Learning: Exact Learning Is Essential for General Intelligence [59.07578850674114]
Sound deductive reasoning is an indisputably desirable aspect of general intelligence. It is well-documented that even the most advanced frontier systems regularly and consistently falter on easily-solvable reasoning tasks. We argue that their unsound behavior is a consequence of the statistical learning approach powering their development.
arXiv Detail & Related papers (2025-06-30T14:37:50Z) - A Trustworthiness-based Metaphysics of Artificial Intelligence Systems [1.0878040851638]
We introduce a theory of metaphysical identity of AI systems. We do so by characterizing their kinds and introducing identity criteria. Our approach suggests that the identity and persistence of AI systems is sensitive to the socio-technical context.
arXiv Detail & Related papers (2025-06-03T15:45:46Z) - SYMBIOSIS: Systems Thinking and Machine Intelligence for Better Outcomes in Society [0.0]
SYMBIOSIS is an AI-powered framework and platform designed to make Systems Thinking accessible for addressing societal challenges. To address this, we developed a generative co-pilot that translates complex systems representations into natural language. SYMBIOSIS aims to serve as a foundational step to unlock future research into responsible and society-centered AI.
arXiv Detail & Related papers (2025-03-07T17:07:26Z) - Agency Is Frame-Dependent [94.91580596320331]
Agency is a system's capacity to steer outcomes toward a goal. We argue that agency is fundamentally frame-dependent. We conclude that any basic science of agency requires frame-dependence.
arXiv Detail & Related papers (2025-02-06T08:34:57Z) - Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We examine what is known about human wisdom and sketch a vision of its AI counterpart. We argue that AI systems particularly struggle with metacognition. We discuss how wise AI might be benchmarked, trained, and implemented.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems [0.0]
There still exists a gap between principles and practices in AI ethics.
One major obstacle organisations face when attempting to operationalise AI Ethics is the lack of a well-defined material scope.
arXiv Detail & Related papers (2024-07-07T12:16:01Z) - Artificial General Intelligence (AGI)-Native Wireless Systems: A Journey Beyond 6G [58.440115433585824]
Building future wireless systems that support services like digital twins (DTs) is challenging to achieve through advances in conventional technologies like meta-surfaces.
While artificial intelligence (AI)-native networks promise to overcome some limitations of wireless technologies, developments still rely on AI tools like neural networks.
This paper revisits the concept of AI-native wireless systems, equipping them with the common sense necessary to transform them into artificial general intelligence (AGI)-native systems.
arXiv Detail & Related papers (2024-04-29T04:51:05Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - Evaluating Understanding on Conceptual Abstraction Benchmarks [0.0]
A long-held objective in AI is to build systems that understand concepts in a humanlike way.
We argue that understanding a concept requires the ability to use it in varied contexts.
Our concept-based approach to evaluation reveals information about AI systems that conventional test sets would have left hidden.
arXiv Detail & Related papers (2022-06-28T17:52:46Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z) - Perspectives and Ethics of the Autonomous Artificial Thinking Systems [0.0]
Our model uses four hierarchies: the hierarchy of information systems, the cognitive hierarchy, the linguistic hierarchy and the digital informative hierarchy.
The question of the capability of autonomous systems to provide a form of artificial thought arises with the ethical consequences on social life and the perspective of transhumanism.
arXiv Detail & Related papers (2020-01-13T14:23:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.