"i am a stochastic parrot, and so r u": Is AI-based framing of human behaviour and cognition a conceptual metaphor or conceptual engineering?
- URL: http://arxiv.org/abs/2504.07756v1
- Date: Thu, 10 Apr 2025 13:55:32 GMT
- Title: "i am a stochastic parrot, and so r u": Is AI-based framing of human behaviour and cognition a conceptual metaphor or conceptual engineering?
- Authors: Warmhold Jan Thomas Mollema, Thomas Wachter
- Abstract summary: We ask: can the conceptual constellation of the computational and AI be applied to the human domain? We argue that it most importantly is a misleading 'double metaphor' because of the metaphorical connection between human psychology and computation. The perspective of the conceptual metaphors shows avenues for forms of conceptual engineering.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Given the massive integration of AI technologies into our daily lives, AI-related concepts are being used to metaphorically compare AI systems with human behaviour and/or cognitive abilities like language acquisition. Rightfully, the epistemic success of these metaphorical comparisons should be debated. Against the backdrop of the conflicting positions of the 'computational' and 'meat' chauvinisms, we ask: can the conceptual constellation of the computational and AI be applied to the human domain and what does it mean to do so? What is one doing when the conceptual constellations of AI in particular are used in this fashion? Rooted in a Wittgensteinian view of concepts and language-use, we consider two possible answers and pit them against each other: either these examples are conceptual metaphors, or they are attempts at conceptual engineering. We argue that they are conceptual metaphors, but that (1) this position is unaware of its own epistemological contingency, and (2) it risks committing the 'map-territory fallacy'. Down at the conceptual foundations of computation, (3) it most importantly is a misleading 'double metaphor' because of the metaphorical connection between human psychology and computation. In response to the shortcomings of this projected conceptual organisation of AI onto the human domain, we argue that there is a semantic catch. The perspective of the conceptual metaphors shows avenues for forms of conceptual engineering. If this methodology's criteria are met, the fallacies and epistemic shortcomings related to the conceptual metaphor view can be bypassed. At its best, the cross-pollution of the human and AI conceptual domains is one that prompts us to reflect anew on how the boundaries of our current concepts serve us and how they could be improved.
Related papers
- Towards properly implementing Theory of Mind in AI systems: An account of four misconceptions [1.249418440326334]
We identify four common misconceptions around theory of mind (ToM).
These misconceptions should be taken into account when developing an AI system.
After discussing each misconception, we end the corresponding section with tentative guidelines on how it can be overcome.
arXiv Detail & Related papers (2025-02-28T19:12:35Z) - We Can't Understand AI Using our Existing Vocabulary [22.352112061625768]
We argue that in order to understand AI, we cannot rely on our existing vocabulary of human words.
We should strive to develop neologisms: new words that represent precise human concepts that we want to teach machines.
arXiv Detail & Related papers (2025-02-11T14:34:05Z) - Human-like conceptual representations emerge from language prediction [72.5875173689788]
Large language models (LLMs) trained exclusively through next-token prediction over language data exhibit remarkably human-like behaviors.
Are these models developing concepts akin to humans, and if so, how are such concepts represented and organized?
Our results demonstrate that LLMs can flexibly derive concepts from linguistic descriptions in relation to contextual cues about other concepts.
These findings establish that structured, human-like conceptual representations can naturally emerge from language prediction without real-world grounding.
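Claims of this kind are typically tested by comparing a model's similarity space with human judgments. Below is a minimal, hypothetical Python sketch of such a probe via representational similarity analysis; the model name ("gpt2"), the toy concept set, and the human ratings are placeholder assumptions, not the paper's materials or protocol.

```python
# Hypothetical probe: does an LLM's embedding similarity over concept words
# correlate with (here, invented) human similarity judgments?
import torch
from scipy.stats import spearmanr
from transformers import AutoModel, AutoTokenizer

MODEL = "gpt2"  # placeholder; the paper may use different models
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)
model.eval()

concepts = ["dog", "cat", "car", "boat"]  # toy concept set
human_sim = {("dog", "cat"): 0.9, ("dog", "car"): 0.1, ("dog", "boat"): 0.1,
             ("cat", "car"): 0.1, ("cat", "boat"): 0.1, ("car", "boat"): 0.7}

def embed(word: str) -> torch.Tensor:
    # Mean-pool the final hidden states over the word's tokens.
    ids = tokenizer(word, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    return out.last_hidden_state.mean(dim=1).squeeze(0)

vecs = {c: embed(c) for c in concepts}
pairs = list(human_sim)
model_sim = [torch.cosine_similarity(vecs[a], vecs[b], dim=0).item()
             for a, b in pairs]
rho, _ = spearmanr(model_sim, [human_sim[p] for p in pairs])
print(f"model-human alignment (Spearman rho): {rho:.2f}")
```

A high rank correlation would indicate that the model's similarity space mirrors the human one for this concept set; the paper's own analyses are considerably richer.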
arXiv Detail & Related papers (2025-01-21T23:54:17Z) - A.I. go by many names: towards a sociotechnical definition of artificial intelligence [0.0]
Defining artificial intelligence (AI) is a persistent challenge, often muddied by technical ambiguity and varying interpretations.
This essay makes a case for a sociotechnical definition of AI, which is essential for researchers who require clarity in their work.
arXiv Detail & Related papers (2024-10-17T11:25:50Z) - Science is Exploration: Computational Frontiers for Conceptual Metaphor Theory [0.0]
We show that Large Language Models (LLMs) can accurately identify and explain the presence of conceptual metaphors in natural language data.
Using a novel prompting technique based on metaphor annotation guidelines, we demonstrate that LLMs are a promising tool for large-scale computational research on conceptual metaphors.
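As a rough illustration of guideline-based prompting, the sketch below asks a chat model to flag metaphor-related words in the style of MIP-like annotation guidelines and to name source and target domains. The guideline text and model name are assumptions for illustration; the paper's actual prompts are not reproduced here.

```python
# Hedged sketch of LLM-based conceptual metaphor annotation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GUIDELINES = (
    "For each word in the sentence, decide whether its contextual meaning "
    "contrasts with a more basic, concrete meaning. If so, mark the word as "
    "metaphor-related and name the likely source and target domains."
)

def annotate(sentence: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": GUIDELINES},
            {"role": "user", "content": f"Sentence: {sentence}"},
        ],
    )
    return response.choices[0].message.content

print(annotate("She attacked every weak point in my argument."))
# Expected style of output: 'attacked' flagged as metaphorical,
# e.g. ARGUMENT IS WAR (source: war, target: argument).
```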
arXiv Detail & Related papers (2024-10-11T17:03:13Z) - Assistant, Parrot, or Colonizing Loudspeaker? ChatGPT Metaphors for Developing Critical AI Literacies [0.9012198585960443]
This study explores how discussing metaphors for AI can help build awareness of the frames that shape our understanding of AI systems.
We analyzed metaphors from a range of sources, and reflected on them individually according to seven questions.
We explored whether each metaphor promotes anthropomorphizing AI, and to what extent it implies that AI is sentient.
arXiv Detail & Related papers (2024-01-15T15:15:48Z) - Evaluating Understanding on Conceptual Abstraction Benchmarks [0.0]
A long-held objective in AI is to build systems that understand concepts in a humanlike way.
We argue that understanding a concept requires the ability to use it in varied contexts.
Our concept-based approach to evaluation reveals information about AI systems that conventional test sets would have left hidden.
arXiv Detail & Related papers (2022-06-28T17:52:46Z) - Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z) - Metaphor Generation with Conceptual Mappings [58.61307123799594]
We aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs.
We propose to control the generation process by encoding conceptual mappings between cognitive domains.
We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems.
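The lexical-substitution idea behind such systems can be sketched in a few lines: find the literal verb and swap in a verb from the mapped source domain. The mapping table below is hand-written and purely illustrative, not CM-Lex's learned lexicon.

```python
# Toy sketch of metaphor generation by verb replacement under hand-written
# conceptual mappings (literal verb -> (source domain, metaphoric verb)).
CONCEPTUAL_MAPPINGS = {
    "ended": ("DEATH", "died"),        # e.g. RELATIONSHIPS ARE LIVING THINGS
    "criticized": ("WAR", "attacked"), # e.g. ARGUMENT IS WAR
}

def metaphorize(sentence: str) -> str:
    # Replace any verb found in the mapping with its metaphoric counterpart.
    return " ".join(CONCEPTUAL_MAPPINGS.get(tok, (None, tok))[1]
                    for tok in sentence.split())

print(metaphorize("Their marriage ended last year"))
# -> "Their marriage died last year"
print(metaphorize("The critics criticized the plan"))
# -> "The critics attacked the plan"
```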
arXiv Detail & Related papers (2021-06-02T15:27:05Z) - Bongard-LOGO: A New Benchmark for Human-Level Concept Learning and Reasoning [78.13740873213223]
Bongard problems (BPs) were introduced as an inspirational challenge for visual cognition in intelligent systems.
We propose Bongard-LOGO, a new benchmark for human-level concept learning and reasoning.
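To make the task format concrete, here is a hypothetical sketch of a Bongard-style problem as a data structure: a solver sees small positive and negative support sets and must induce the latent concept to classify a held-out query. Field names and the toy concept are illustrative, not the benchmark's actual schema.

```python
# Hypothetical few-shot structure of a Bongard-style concept learning problem.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class BongardProblem:
    positives: List[str]  # examples that satisfy the latent concept
    negatives: List[str]  # examples that violate it
    query: str            # held-out example to classify

def evaluate(problem: BongardProblem,
             classify: Callable[[List[str], List[str], str], bool],
             truth: bool) -> bool:
    # classify() gets both support sets and must induce the concept on the fly.
    return classify(problem.positives, problem.negatives, problem.query) == truth

# Toy instance where the latent concept is "contains a triangle".
prob = BongardProblem(positives=["triangle_a", "triangle_b"],
                      negatives=["circle_a", "square_b"],
                      query="triangle_c")
naive = lambda pos, neg, q: "triangle" in q  # stand-in for a real concept learner
print(evaluate(prob, naive, truth=True))  # -> True
```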
arXiv Detail & Related papers (2020-10-02T03:19:46Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense [142.53911271465344]
We argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense.
arXiv Detail & Related papers (2020-04-20T04:07:28Z)