How Human is AI? Examining the Impact of Emotional Prompts on Artificial and Human Responsiveness
- URL: http://arxiv.org/abs/2601.05104v1
- Date: Thu, 08 Jan 2026 16:50:00 GMT
- Title: How Human is AI? Examining the Impact of Emotional Prompts on Artificial and Human Responsiveness
- Authors: Florence Bernays, Marco Henriques Pereira, Jochen Menges
- Abstract summary: This research examines how the emotional tone of human-AI interactions shapes ChatGPT and human behavior. We asked participants to express an emotion while working with ChatGPT on two tasks, including writing a public response and addressing an ethical dilemma. We found that compared to interactions where participants maintained a neutral tone, ChatGPT showed greater improvement in its answers when participants praised it.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This research examines how the emotional tone of human-AI interactions shapes ChatGPT and human behavior. In a between-subject experiment, we asked participants to express a specific emotion while working with ChatGPT (GPT-4.0) on two tasks, including writing a public response and addressing an ethical dilemma. We found that compared to interactions where participants maintained a neutral tone, ChatGPT showed greater improvement in its answers when participants praised ChatGPT for its responses. Expressing anger towards ChatGPT also led to a higher albeit smaller improvement relative to the neutral condition, whereas blaming ChatGPT did not improve its answers. When addressing an ethical dilemma, ChatGPT prioritized corporate interests less when participants expressed anger towards it, while blaming increased its emphasis on protecting the public interest. Additionally, we found that people used more negative, hostile, and disappointed expressions in human-human communication after interactions during which participants blamed rather than praised ChatGPT for its responses. Together, our findings demonstrate that the emotional tone people apply in human-AI interactions not only shapes ChatGPT's outputs but also carries over into subsequent human-human communication.
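As an illustration only, the between-subject comparison the abstract describes (improvement in ChatGPT's answers under praise, anger, and blame, each measured against a neutral baseline) can be sketched as follows. All scores and variable names here are hypothetical stand-ins for exposition, not the paper's data or method:

```python
# Toy sketch of a between-subject comparison: each participant expresses
# one emotion toward ChatGPT, and we compare the mean improvement in
# ChatGPT's answers per condition against the neutral baseline.
# The numbers below are invented purely to illustrate the analysis shape.
from statistics import mean

# Hypothetical per-participant improvement scores (revised answer rating
# minus initial answer rating), grouped by the expressed emotion.
scores = {
    "neutral": [0.1, 0.0, 0.2, 0.1],
    "praise":  [0.6, 0.5, 0.7, 0.4],
    "anger":   [0.3, 0.2, 0.4, 0.3],
    "blame":   [0.1, -0.1, 0.0, 0.1],
}

baseline = mean(scores["neutral"])
for condition in ("praise", "anger", "blame"):
    delta = mean(scores[condition]) - baseline
    print(f"{condition}: mean improvement vs neutral = {delta:+.2f}")
```

In a real analysis one would add a significance test across conditions (e.g. an ANOVA or pairwise t-tests); this sketch only shows the condition-vs-baseline contrast structure the abstract reports.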
Related papers
- Personality over Precision: Exploring the Influence of Human-Likeness on ChatGPT Use for Search [8.544772506500188]
We examined user perceptions regarding trust, human-likeness (anthropomorphism), and design preferences between ChatGPT and Google. Our analysis identified two distinct user groups: those who use both ChatGPT and Google daily (DUB), and those who primarily rely on Google (DUG). The DUB group exhibited higher trust in ChatGPT, perceiving it as more human-like, and expressed greater willingness to trade factual accuracy for enhanced personalization and conversational flow.
arXiv Detail & Related papers (2025-11-09T16:28:55Z)
- Investigating Affective Use and Emotional Well-being on ChatGPT [32.797983866308755]
We investigate the extent to which interactions with ChatGPT may impact users' emotional well-being, behaviors, and experiences. We analyze over 3 million conversations for affective cues and survey over 4,000 users on their perceptions of ChatGPT. We also conduct an Institutional Review Board (IRB)-approved randomized controlled trial (RCT) on close to 1,000 participants over 28 days.
arXiv Detail & Related papers (2025-04-04T19:22:10Z)
- Is ChatGPT More Empathetic than Humans? [14.18033127602866]
We employ a rigorous evaluation methodology to evaluate the level of empathy in responses generated by humans and ChatGPT.
Our findings indicate that the average empathy rating of responses generated by ChatGPT exceeds those crafted by humans by approximately 10%.
Instructing ChatGPT to incorporate a clear understanding of empathy in its responses makes the responses align approximately 5 times more closely with the expectations of individuals possessing a high degree of empathy.
arXiv Detail & Related papers (2024-02-22T09:52:45Z)
- Comprehensive Assessment of Toxicity in ChatGPT [49.71090497696024]
We evaluate the toxicity in ChatGPT by utilizing instruction-tuning datasets.
Prompts in creative writing tasks can be 2x more likely to elicit toxic responses.
Certain deliberately toxic prompts, designed in earlier studies, no longer yield harmful responses.
arXiv Detail & Related papers (2023-11-03T14:37:53Z)
- Primacy Effect of ChatGPT [69.49920102917598]
We study the primacy effect of ChatGPT: the tendency of selecting the labels at earlier positions as the answer.
We hope that our experiments and analyses provide additional insights into building more reliable ChatGPT-based solutions.
arXiv Detail & Related papers (2023-10-20T00:37:28Z)
- Exploring ChatGPT's Empathic Abilities [0.138120109831448]
This study investigates the extent to which ChatGPT, based on GPT-3.5, can exhibit empathetic responses and emotional expressions.
In 91.7% of cases, ChatGPT correctly identified emotions and produced appropriate answers.
In conversations, ChatGPT reacted with a parallel emotion in 70.7% of cases.
arXiv Detail & Related papers (2023-08-07T12:23:07Z)
- Deceptive AI Ecosystems: The Case of ChatGPT [8.128368463580715]
ChatGPT has gained popularity for its capability in generating human-like responses.
This paper investigates how ChatGPT operates in the real world where societal pressures influence its development and deployment.
We examine the ethical challenges stemming from ChatGPT's deceptive human-like interactions.
arXiv Detail & Related papers (2023-06-18T10:36:19Z)
- To ChatGPT, or not to ChatGPT: That is the question! [78.407861566006]
This study provides a comprehensive and contemporary assessment of the most recent techniques in ChatGPT detection.
We have curated a benchmark dataset consisting of prompts from ChatGPT and humans, including diverse questions from medical, open Q&A, and finance domains.
Our evaluation results demonstrate that none of the existing methods can effectively detect ChatGPT-generated content.
arXiv Detail & Related papers (2023-04-04T03:04:28Z)
- Consistency Analysis of ChatGPT [65.268245109828]
This paper investigates the trustworthiness of ChatGPT and GPT-4 regarding logically consistent behaviour.
Our findings suggest that while both models appear to show an enhanced language understanding and reasoning ability, they still frequently fall short of generating logically consistent predictions.
arXiv Detail & Related papers (2023-03-11T01:19:01Z)
- Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT [103.57103957631067]
ChatGPT has attracted great attention, as it can generate fluent and high-quality responses to human inquiries.
We evaluate ChatGPT's understanding ability by testing it on the most popular GLUE benchmark and comparing it with 4 representative fine-tuned BERT-style models.
We find that: 1) ChatGPT falls short in handling paraphrase and similarity tasks; 2) ChatGPT outperforms all BERT models on inference tasks by a large margin; 3) ChatGPT achieves performance comparable to BERT on sentiment analysis and question answering tasks.
arXiv Detail & Related papers (2023-02-19T12:29:33Z)
- A Categorical Archive of ChatGPT Failures [47.64219291655723]
ChatGPT, developed by OpenAI, has been trained using massive amounts of data and simulates human conversation.
It has garnered significant attention due to its ability to effectively answer a broad range of human inquiries.
However, a comprehensive analysis of ChatGPT's failures is lacking, which is the focus of this study.
arXiv Detail & Related papers (2023-02-06T04:21:59Z)
- Towards Persona-Based Empathetic Conversational Models [58.65492299237112]
Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains.
In Psychology, persona has been shown to be highly correlated to personality, which in turn influences empathy.
We propose a new task towards persona-based empathetic conversations and present the first empirical study on the impact of persona on empathetic responding.
arXiv Detail & Related papers (2020-04-26T08:51:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.