Social AI and The Equation of Wittgenstein's Language User With Calvino's Literature Machine
- URL: http://arxiv.org/abs/2407.09493v1
- Date: Thu, 23 May 2024 09:51:44 GMT
- Title: Social AI and The Equation of Wittgenstein's Language User With Calvino's Literature Machine
- Authors: W. J. T. Mollema
- Abstract summary: Is it sensical to ascribe psychological predicates to AI systems like chatbots based on large language models (LLMs)?
Social AIs are not full-blown language users, but rather more like Italo Calvino's literature machines.
The framework of mortal computation is used to show that social AIs lack the basic autopoiesis needed for narrative façons de parler.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Is it sensical to ascribe psychological predicates to AI systems like chatbots based on large language models (LLMs)? People have intuitively started ascribing emotions or consciousness to social AI ('affective artificial agents'), with consequences that range from love to suicide. The philosophical question of whether such ascriptions are warranted is thus very relevant. This paper advances the argument that LLMs instantiate language users in Ludwig Wittgenstein's sense but that ascribing psychological predicates to these systems remains a functionalist temptation. Social AIs are not full-blown language users, but rather more like Italo Calvino's literature machines. The ideas of LLMs as Wittgensteinian language users and Calvino's literature-producing writing machine are combined. This sheds light on the misguided functionalist temptation inherent in moving from equating the two to the ascription of psychological predicates to social AI. Finally, the framework of mortal computation is used to show that social AIs lack the basic autopoiesis needed for narrative façons de parler and their role in the sensemaking of human (inter)action. Such psychological predicate ascriptions could make sense: the transition 'from quantity to quality' can take place, but its route lies somewhere between life and death, not between affective artifacts and emotion approximation by literature machines.
Related papers
- AI shares emotion with humans across languages and cultures [12.530921452568291]
We assess human-AI emotional alignment across linguistic-cultural groups and model families.
Our analyses reveal that LLM-derived emotion spaces are structurally congruent with human perception.
We show that model expressions can be stably and naturally modulated across distinct emotion categories.
arXiv Detail & Related papers (2025-06-11T14:42:30Z) - SocialEval: Evaluating Social Intelligence of Large Language Models [70.90981021629021]
Social Intelligence (SI) equips humans with the interpersonal abilities to behave wisely when navigating social interactions toward social goals.
SI presents an operational evaluation paradigm: outcome-oriented goal-achievement evaluation and process-oriented interpersonal-ability evaluation.
We propose SocialEval, a script-based bilingual SI benchmark that integrates outcome- and process-oriented evaluation through manually crafted narrative scripts.
arXiv Detail & Related papers (2025-06-01T08:36:51Z) - The Good, The Bad, and Why: Unveiling Emotions in Generative AI [73.94035652867618]
We show that EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.
EmotionDecode reveals that AI models can comprehend emotional stimuli akin to the mechanism of dopamine in the human brain.
arXiv Detail & Related papers (2023-12-18T11:19:45Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic
Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling
Probabilistic Social Inferences from Linguistic Inputs [50.32802502923367]
We study how language drives and influences social reasoning in a probabilistic goal-inference domain.
We propose a neuro-symbolic model that carries out goal inference from linguistic inputs of agent scenarios.
Our model closely matches human response patterns and better predicts human judgements than using an LLM alone.
arXiv Detail & Related papers (2023-06-25T19:38:01Z) - Understanding Natural Language Understanding Systems. A Critical
Analysis [91.81211519327161]
The development of machines that «talk like us», also known as Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI).
But never has the trust that we can build «talking machines» been stronger than that engendered by the latest generation of NLU systems.
Are we at the dawn of a new era, in which the Grail is finally closer to us?
arXiv Detail & Related papers (2023-03-01T08:32:55Z) - Human Heuristics for AI-Generated Language Are Flawed [8.465228064780744]
We study how people judge whether verbal self-presentations, one of the most personal and consequential forms of language, were generated by AI.
We experimentally demonstrate that these wordings make human judgment of AI-generated language predictable and manipulable.
We discuss solutions, such as AI accents, to reduce the deceptive potential of language generated by AI.
arXiv Detail & Related papers (2022-06-15T03:18:56Z) - An Enactivist account of Mind Reading in Natural Language Understanding [0.0]
We apply our understanding of the radical enactivist agenda to a classic AI-hard problem.
The Turing Test assumed that the computer could use language and the challenge was to fake human intelligence.
This paper looks again at how natural language understanding might actually work between humans.
arXiv Detail & Related papers (2021-11-11T12:46:00Z) - Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable
and Advisable AI Systems [21.314210696069495]
We argue that the need for (human-understandable) symbols in human-AI interaction seems quite compelling.
In particular, humans would be interested in providing explicit (symbolic) knowledge and advice, and would expect machine explanations in kind.
This alone requires AI systems to at least do their I/O in symbolic terms.
arXiv Detail & Related papers (2021-09-21T01:30:06Z) - Crossing the Tepper Line: An Emerging Ontology for Describing the
Dynamic Sociality of Embodied AI [0.9176056742068814]
We show how embodied AI can manifest as "socially embodied AI".
We define this as the state that embodied AI "circumstantially" takes on within interactive contexts when perceived as both social and agentic by people.
arXiv Detail & Related papers (2021-03-15T00:45:44Z) - Towards Socially Intelligent Agents with Mental State Transition and
Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
arXiv Detail & Related papers (2021-03-12T00:06:51Z) - Can You be More Social? Injecting Politeness and Positivity into
Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation of content preservation and social-language level, using both human judgment and automatic linguistic measures, shows that the model generates responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z) - Generating Emotionally Aligned Responses in Dialogues using Affect
Control Theory [15.848210524718219]
Affect Control Theory (ACT) is a socio-mathematical model of emotions for human-human interactions.
We investigate how ACT can be used to develop affect-aware neural conversational agents.
arXiv Detail & Related papers (2020-03-07T19:31:08Z)