Personality over Precision: Exploring the Influence of Human-Likeness on ChatGPT Use for Search
- URL: http://arxiv.org/abs/2511.06447v1
- Date: Sun, 09 Nov 2025 16:28:55 GMT
- Title: Personality over Precision: Exploring the Influence of Human-Likeness on ChatGPT Use for Search
- Authors: Mert Yazan, Frederik Bungaran Ishak Situmeang, Suzan Verberne
- Abstract summary: We examined user perceptions regarding trust, human-likeness (anthropomorphism), and design preferences between ChatGPT and Google. Our analysis identified two distinct user groups: those who use both ChatGPT and Google daily (DUB), and those who primarily rely on Google (DUG). The DUB group exhibited higher trust in ChatGPT, perceiving it as more human-like, and expressed greater willingness to trade factual accuracy for enhanced personalization and conversational flow.
- Score: 8.544772506500188
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conversational search interfaces, like ChatGPT, offer an interactive, personalized, and engaging user experience compared to traditional search. On the downside, they are prone to cause overtrust issues where users rely on their responses even when they are incorrect. What aspects of the conversational interaction paradigm drive people to adopt it, and how it creates personalized experiences that lead to overtrust, is not clear. To understand the factors influencing the adoption of conversational interfaces, we conducted a survey with 173 participants. We examined user perceptions regarding trust, human-likeness (anthropomorphism), and design preferences between ChatGPT and Google. To better understand the overtrust phenomenon, we asked users about their willingness to trade off factuality for constructs like ease of use or human-likeness. Our analysis identified two distinct user groups: those who use both ChatGPT and Google daily (DUB), and those who primarily rely on Google (DUG). The DUB group exhibited higher trust in ChatGPT, perceiving it as more human-like, and expressed greater willingness to trade factual accuracy for enhanced personalization and conversational flow. Conversely, the DUG group showed lower trust toward ChatGPT but still appreciated aspects like ad-free experiences and responsive interactions. Demographic analysis further revealed nuanced patterns, with middle-aged adults using ChatGPT less frequently yet trusting it more, suggesting potential vulnerability to misinformation. Our findings contribute to understanding user segmentation, emphasizing the critical roles of personalization and human-likeness in conversational IR systems, and reveal important implications regarding users' willingness to compromise factual accuracy for more engaging interactions.
Related papers
- How Human is AI? Examining the Impact of Emotional Prompts on Artificial and Human Responsiveness [0.0]
This research examines how the emotional tone of human-AI interactions shapes ChatGPT and human behavior. We asked participants to express an emotion while working with ChatGPT on two tasks, including writing a public response and addressing an ethical dilemma. We found that compared to interactions where participants maintained a neutral tone, ChatGPT showed greater improvement in its answers when participants praised it.
arXiv Detail & Related papers (2026-01-08T16:50:00Z) - Understanding Privacy Norms Around LLM-Based Chatbots: A Contextual Integrity Perspective [14.179623604712065]
We conduct a survey experiment with 300 US ChatGPT users to understand emerging privacy norms for sharing ChatGPT data. Our findings reveal a stark disconnect between user concerns and behavior. Participants uniformly rejected sharing personal data for improved services, even in exchange for premium features worth $200.
arXiv Detail & Related papers (2025-08-09T00:22:46Z) - Blending Queries and Conversations: Understanding Tactics, Trust, Verification, and System Choice in Web Search and Chat Interactions [0.8397730500554048]
This paper presents a user study where participants used an interface combining Web Search and a Generative AI-Chat feature to solve health-related information tasks. We study how people behaved with the interface, why they behaved in certain ways, and what the outcomes of these behaviours were.
arXiv Detail & Related papers (2025-04-07T14:59:55Z) - Investigating Affective Use and Emotional Well-being on ChatGPT [32.797983866308755]
We investigate the extent to which interactions with ChatGPT may impact users' emotional well-being, behaviors and experiences. We analyze over 3 million conversations for affective cues and survey over 4,000 users on their perceptions of ChatGPT. We conduct an Institutional Review Board (IRB)-approved randomized controlled trial (RCT) on close to 1,000 participants over 28 days.
arXiv Detail & Related papers (2025-04-04T19:22:10Z) - Primacy Effect of ChatGPT [69.49920102917598]
We study the primacy effect of ChatGPT: the tendency of selecting the labels at earlier positions as the answer.
We hope that our experiments and analyses provide additional insights into building more reliable ChatGPT-based solutions.
arXiv Detail & Related papers (2023-10-20T00:37:28Z) - ChatGPT for Us: Preserving Data Privacy in ChatGPT via Dialogue Text Ambiguation to Expand Mental Health Care Delivery [52.73936514734762]
ChatGPT has gained popularity for its ability to generate human-like dialogue.
Data-sensitive domains face challenges in using ChatGPT due to privacy and data-ownership concerns.
We propose a text ambiguation framework that preserves user privacy.
arXiv Detail & Related papers (2023-05-19T02:09:52Z) - "HOT" ChatGPT: The promise of ChatGPT in detecting and discriminating hateful, offensive, and toxic comments on social media [2.105577305992576]
Generative AI models have the potential to understand and detect harmful content.
ChatGPT can achieve an accuracy of approximately 80% when compared to human annotations.
arXiv Detail & Related papers (2023-04-20T19:40:51Z) - To ChatGPT, or not to ChatGPT: That is the question! [78.407861566006]
This study provides a comprehensive and contemporary assessment of the most recent techniques in ChatGPT detection.
We have curated a benchmark dataset consisting of prompts from ChatGPT and humans, including diverse questions from medical, open Q&A, and finance domains.
Our evaluation results demonstrate that none of the existing methods can effectively detect ChatGPT-generated content.
arXiv Detail & Related papers (2023-04-04T03:04:28Z) - Consistency Analysis of ChatGPT [65.268245109828]
This paper investigates the trustworthiness of ChatGPT and GPT-4 regarding logically consistent behaviour.
Our findings suggest that while both models appear to show an enhanced language understanding and reasoning ability, they still frequently fall short of generating logically consistent predictions.
arXiv Detail & Related papers (2023-03-11T01:19:01Z) - Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT [103.57103957631067]
ChatGPT has attracted great attention, as it can generate fluent and high-quality responses to human inquiries.
We evaluate ChatGPT's understanding ability by evaluating it on the most popular GLUE benchmark, and comparing it with 4 representative fine-tuned BERT-style models.
We find that: 1) ChatGPT falls short in handling paraphrase and similarity tasks; 2) ChatGPT outperforms all BERT models on inference tasks by a large margin; 3) ChatGPT achieves comparable performance compared with BERT on sentiment analysis and question answering tasks.
arXiv Detail & Related papers (2023-02-19T12:29:33Z) - How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection [8.107721810172112]
ChatGPT is able to respond effectively to a wide range of human questions.
People are starting to worry about the potential negative impacts that large language models (LLMs) like ChatGPT could have on society.
In this work, we collected tens of thousands of comparison responses from both human experts and ChatGPT.
arXiv Detail & Related papers (2023-01-18T15:23:25Z) - Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.