Perspectives on How Sociology Can Advance Theorizing about Human-Chatbot Interaction and Developing Chatbots for Social Good
- URL: http://arxiv.org/abs/2507.05030v1
- Date: Mon, 07 Jul 2025 14:12:03 GMT
- Title: Perspectives on How Sociology Can Advance Theorizing about Human-Chatbot Interaction and Developing Chatbots for Social Good
- Authors: Celeste Campos-Castillo, Xuan Kang, Linnea I. Laestadius
- Abstract summary: We suggest sociology can advance understanding of human-chatbot interaction. We offer four sociological theories to enhance extant work in this field. We discuss the value of applying sociological theories for advancing theorizing about human-chatbot interaction.
- Score: 0.9831489366502302
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, research into chatbots (also known as conversational agents, AI agents, voice assistants), which are computer applications using artificial intelligence to mimic human-like conversation, has grown sharply. Despite this growth, sociology lags other disciplines (including computer science, medicine, psychology, and communication) in publishing about chatbots. We suggest sociology can advance understanding of human-chatbot interaction and offer four sociological theories to enhance extant work in this field. The first two theories (resource substitution theory, power-dependence theory) add new insights to existing models of the drivers of chatbot use, which overlook sociological concerns about how social structure (e.g., systemic discrimination, the uneven distribution of resources within networks) inclines individuals to use chatbots, including problematic levels of emotional dependency on chatbots. The second two theories (affect control theory, fundamental cause of disease theory) help inform the development of chatbot-driven interventions that minimize safety risks and enhance equity by leveraging sociological insights into how chatbot outputs could attend to cultural contexts (e.g., affective norms) to promote wellbeing and enhance communities (e.g., opportunities for civic participation). We discuss the value of applying sociological theories for advancing theorizing about human-chatbot interaction and developing chatbots for social good.
Related papers
- Exploring the Effects of Chatbot Anthropomorphism and Human Empathy on Human Prosocial Behavior Toward Chatbots [9.230015338626659]
We examine how chatbot anthropomorphism (human-like identity, emotional expression, and non-verbal expression) influences human empathy toward chatbots. We also explore people's own interpretations of their prosocial behaviors toward chatbots.
arXiv Detail & Related papers (2025-06-25T18:16:14Z) - The Human Robot Social Interaction (HSRI) Dataset: Benchmarking Foundational Models' Social Reasoning [49.32390524168273]
Our work aims to advance the social reasoning of embodied artificial intelligence (AI) agents in real-world social interactions. We introduce a large-scale real-world Human Robot Social Interaction (HSRI) dataset to benchmark the capabilities of language models (LMs) and foundational models (FMs). Our dataset consists of 400 real-world human social robot interaction videos and over 10K annotations, detailing the robot's social errors, competencies, rationale, and corrective actions.
arXiv Detail & Related papers (2025-04-07T06:27:02Z) - Social Genome: Grounded Social Reasoning Abilities of Multimodal Models [61.88413918026431]
Social reasoning abilities are crucial for AI systems to interpret and respond to multimodal human communication and interaction within social contexts. We introduce SOCIAL GENOME, the first benchmark for fine-grained, grounded social reasoning abilities of multimodal models.
arXiv Detail & Related papers (2025-02-21T00:05:40Z) - The Three Social Dimensions of Chatbot Technology [0.0]
This study presents a structured examination of chatbots across three societal dimensions. It highlights their roles as objects of scientific research, commercial instruments, and agents of intimate interaction.
arXiv Detail & Related papers (2024-12-16T13:45:53Z) - From Human-to-Human to Human-to-Bot Conversations in Software Engineering [3.1747517745997014]
We aim to understand the dynamics of conversations that occur during modern software development after the integration of AI and chatbots.
We compile existing attributes of conversations with humans and with NLU-based chatbots and adapt them to the context of software development.
We present similarities and differences between human-to-human and human-to-bot conversations.
We conclude that the conversation styles we observe with LLM-based chatbots cannot replace conversations with humans.
arXiv Detail & Related papers (2024-05-21T12:04:55Z) - Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions [67.60397632819202]
Building socially-intelligent AI agents (Social-AI) is a multidisciplinary, multimodal research goal.
We identify a set of underlying technical challenges and open questions for researchers across computing communities to advance Social-AI.
arXiv Detail & Related papers (2024-04-17T02:57:42Z) - CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework in which empathetic chatbots infer users' implied feelings and reply empathetically over multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and were finetuned by deep reinforcement learning.
To respond in an empathetic way, we develop a simulating agent, the Conceptual Human Model, which aids CheerBots during training by considering how the user's emotional state may change in the future, so as to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z) - Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examine our framework using three experimental setups and evaluate the guiding robot with four different metrics, demonstrating its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z) - From Learning to Relearning: A Framework for Diminishing Bias in Social Robot Navigation [3.3511723893430476]
We argue that social navigation models can replicate, promote, and amplify societal unfairness such as discrimination and segregation.
Our proposed framework consists of two components: learning, which incorporates social context into the learning process to account for safety and comfort, and relearning, which detects and corrects potentially harmful outcomes before they occur.
arXiv Detail & Related papers (2021-01-07T17:42:35Z) - Can You be More Social? Injecting Politeness and Positivity into Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation of content preservation and social language level, using both human judgment and automatic linguistic measures, shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z)