Artificial intelligence in communication impacts language and social
relationships
- URL: http://arxiv.org/abs/2102.05756v1
- Date: Wed, 10 Feb 2021 22:05:11 GMT
- Title: Artificial intelligence in communication impacts language and social
relationships
- Authors: Jess Hohenstein and Dominic DiFranzo and Rene F. Kizilcec and Zhila
Aghajari and Hannah Mieczkowski and Karen Levy and Mor Naaman and Jeff
Hancock and Malte Jung
- Abstract summary: We study the social consequences of one of the most pervasive AI applications: algorithmic response suggestions ("smart replies").
We find that using algorithmic responses increases communication efficiency, use of positive emotional language, and positive evaluations by communication partners.
However, consistent with common assumptions about the negative implications of AI, people are evaluated more negatively if they are suspected to be using algorithmic responses.
- Score: 11.212791488179757
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial intelligence (AI) is now widely used to facilitate social
interaction, but its impact on social relationships and communication is not
well understood. We study the social consequences of one of the most pervasive
AI applications: algorithmic response suggestions ("smart replies"). Two
randomized experiments (n = 1036) provide evidence that a commercially-deployed
AI changes how people interact with and perceive one another in pro-social and
anti-social ways. We find that using algorithmic responses increases
communication efficiency, use of positive emotional language, and positive
evaluations by communication partners. However, consistent with common
assumptions about the negative implications of AI, people are evaluated more
negatively if they are suspected to be using algorithmic responses. Thus, even
though AI can increase communication efficiency and improve interpersonal
perceptions, it risks changing users' language production and continues to be
viewed negatively.
Related papers
- The Dark Side of AI Companionship: A Taxonomy of Harmful Algorithmic Behaviors in Human-AI Relationships [17.5741039825938]
We identify six categories of harmful behaviors exhibited by the AI companion Replika.
The AI contributes to these harms through four distinct roles: perpetrator, instigator, facilitator, and enabler.
arXiv Detail & Related papers (2024-10-26T09:18:17Z)
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- What should I say? -- Interacting with AI and Natural Language Interfaces [0.0]
The Human-AI Interaction (HAI) sub-field has emerged from the Human-Computer Interaction (HCI) field and aims to examine how people interact with AI systems.
Prior research suggests that theory of mind representations are crucial to successful and effortless communication; however, little is understood about how such representations are established when interacting with AI.
arXiv Detail & Related papers (2024-01-12T05:10:23Z)
- SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents [107.4138224020773]
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and humans.
In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals.
We find that GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills.
arXiv Detail & Related papers (2023-10-18T02:27:01Z)
- Public Perception of Generative AI on Twitter: An Empirical Study Based on Occupation and Usage [7.18819534653348]
This paper investigates users' perceptions of generative AI using 3M posts on Twitter from January 2019 to March 2023.
We find that people across various occupations, not just IT-related ones, show a strong interest in generative AI.
After the release of ChatGPT, people's interest in AI in general has increased dramatically.
arXiv Detail & Related papers (2023-05-16T15:30:12Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- AI agents for facilitating social interactions and wellbeing [0.0]
We provide an overview of the mediative role of AI-augmented agents for social interactions.
We discuss opportunities and challenges of the relational approach with wellbeing AI to promote wellbeing in our societies.
arXiv Detail & Related papers (2022-02-26T04:05:23Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- SocialAI 0.1: Towards a Benchmark to Stimulate Research on Socio-Cognitive Abilities in Deep Reinforcement Learning Agents [23.719833581321033]
Building embodied autonomous agents capable of participating in social interactions with humans is one of the main challenges in AI.
Current approaches focus on language as a communication tool in very simplified and non-diverse social situations.
We argue that aiming towards human-level AI requires a broader set of key social skills.
arXiv Detail & Related papers (2021-04-27T14:16:29Z)
- Can You be More Social? Injecting Politeness and Positivity into Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation in terms of content preservation and social language level, using both human judgment and automatic linguistic measures, shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.