What Do You Mean? Exploring How Humans and AI Interact with Symbols and Meanings in Their Interactions
- URL: http://arxiv.org/abs/2510.05378v1
- Date: Mon, 06 Oct 2025 21:13:22 GMT
- Title: What Do You Mean? Exploring How Humans and AI Interact with Symbols and Meanings in Their Interactions
- Authors: Reza Habibi, Seung Wan Ha, Zhiyu Lin, Atieh Kashani, Ala Shafia, Lakshana Lakshmanarajan, Chia-Fang Chung, Magy Seif El-Nasr
- Abstract summary: We investigated how humans and AI interact with symbols and co-construct their meanings. When AI introduced conflicting meanings and symbols in social contexts, 63% of participants reshaped their definitions. This suggests that conflicts in symbols and meanings prompt reflection and redefinition, allowing both participants and AI to reach a better shared understanding of meanings and symbols.
- Score: 18.555844555619178
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Meaningful human-AI collaboration requires more than processing language; it demands a better understanding of symbols and their constructed meanings. While humans naturally interpret symbols through social interaction, AI systems treat them as patterns with compressed meanings, missing the dynamic meanings that emerge through conversation. Drawing on symbolic interactionism theory, we conducted two studies (N=37) investigating how humans and AI interact with symbols and co-construct their meanings. When AI introduced conflicting meanings and symbols in social contexts, 63% of participants reshaped their definitions. This suggests that conflicts in symbols and meanings prompt reflection and redefinition, allowing both participants and AI to reach a better shared understanding of meanings and symbols. This work reveals that shared understanding emerges not from agreement but from the reciprocal exchange and reinterpretation of symbols, suggesting new paradigms for human-AI interaction design.
Related papers
- Revealing emergent human-like conceptual representations from language prediction [90.73285317321312]
Large language models (LLMs) trained solely through next-token prediction on text exhibit strikingly human-like behaviors. Are these models developing concepts akin to those of humans? We found that LLMs can flexibly derive concepts from linguistic descriptions in relation to contextual cues about other concepts.
arXiv Detail & Related papers (2025-01-21T23:54:17Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- Intersymbolic AI: Interlinking Symbolic AI and Subsymbolic AI [3.20902205123321]
Intersymbolic AI combines symbolic and subsymbolic AI to increase the effectiveness of AI.
The way Intersymbolic AI combines both symbolic and subsymbolic AI is likened to the way that the combination of both conscious and subconscious thought increases the effectiveness of human thought.
arXiv Detail & Related papers (2024-06-17T14:01:59Z)
- Position: Towards Bidirectional Human-AI Alignment [109.57781720848669]
We argue that the research community should explicitly define and critically reflect on "alignment" to account for the bidirectional and dynamic relationship between humans and AI. We introduce the Bidirectional Human-AI Alignment framework, which not only incorporates traditional efforts to align AI with human values but also introduces the critical, underexplored dimension of aligning humans with AI.
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- What should I say? -- Interacting with AI and Natural Language Interfaces [0.0]
The Human-AI Interaction (HAI) sub-field has emerged from the Human-Computer Interaction (HCI) field and aims to examine this very notion.
Prior research suggests that theory of mind representations are crucial to successful and effortless communication; however, little is understood about how such representations are established when interacting with AI.
arXiv Detail & Related papers (2024-01-12T05:10:23Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems [21.314210696069495]
We argue that the need for (human-understandable) symbols in human-AI interaction seems quite compelling.
In particular, humans would be interested in providing explicit (symbolic) knowledge and advice, and would expect machine explanations in kind.
This alone requires AI systems to at least do their I/O in symbolic terms.
arXiv Detail & Related papers (2021-09-21T01:30:06Z)
- Philosophical Specification of Empathetic Ethical Artificial Intelligence [0.0]
An ethical AI must be capable of inferring unspoken rules, interpreting nuance and context, and inferring intent.
We use enactivism, semiotics, perceptual symbol systems and symbol emergence to specify an agent.
It has malleable intent because the meaning of symbols changes as it learns, and its intent is represented symbolically as a goal.
arXiv Detail & Related papers (2021-07-22T14:37:46Z)
- Symbolic Behaviour in Artificial Intelligence [8.849576130278157]
We argue that the path towards symbolically fluent AI begins with a reinterpretation of what symbols are.
We then outline how this interpretation unifies the behavioural traits humans exhibit when they use symbols.
We suggest that AI research explore social and cultural engagement as a tool to develop the cognitive machinery necessary for symbolic behaviour to emerge.
arXiv Detail & Related papers (2021-02-05T20:07:14Z)
- Towards Abstract Relational Learning in Human Robot Interaction [73.67226556788498]
Humans have a rich representation of the entities in their environment.
If robots need to interact successfully with humans, they need to represent entities, attributes, and generalizations in a similar way.
In this work, we address the problem of how to obtain these representations through human-robot interaction.
arXiv Detail & Related papers (2020-11-20T12:06:46Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article addresses aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.