Do You Trust ChatGPT? -- Perceived Credibility of Human and AI-Generated Content
- URL: http://arxiv.org/abs/2309.02524v1
- Date: Tue, 5 Sep 2023 18:29:29 GMT
- Title: Do You Trust ChatGPT? -- Perceived Credibility of Human and AI-Generated Content
- Authors: Martin Huschens, Martin Briesch, Dominik Sobania, Franz Rothlauf
- Abstract summary: This paper examines how individuals perceive the credibility of content originating from human authors versus content generated by large language models.
Surprisingly, our results demonstrate that regardless of the user interface presentation, participants tend to attribute similar levels of credibility to human and AI-generated content.
Participants also do not report any different perceptions of competence and trustworthiness between human and AI-generated content.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper examines how individuals perceive the credibility of content
originating from human authors versus content generated by large language
models, like the GPT language model family that powers ChatGPT, in different
user interface versions. Surprisingly, our results demonstrate that regardless
of the user interface presentation, participants tend to attribute similar
levels of credibility. While participants also do not report any different
perceptions of competence and trustworthiness between human and AI-generated
content, they rate AI-generated content as being clearer and more engaging. The
findings from this study serve as a call for a more discerning approach to
evaluating information sources, encouraging users to exercise caution and
critical thinking when engaging with content generated by AI systems.
Related papers
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- Banal Deception Human-AI Ecosystems: A Study of People's Perceptions of LLM-generated Deceptive Behaviour [11.285775969393566]
Large language models (LLMs) can provide users with false, inaccurate, or misleading information.
We investigate people's perceptions of ChatGPT-generated deceptive behaviour.
arXiv Detail & Related papers (2024-06-12T16:36:06Z)
- ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models [53.00812898384698]
We argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking.
We highlight how cognitive biases can conflate fluency with truthfulness, and how cognitive uncertainty affects the reliability of rating scales such as Likert scales.
We propose the ConSiDERS-The-Human evaluation framework consisting of 6 pillars -- Consistency, Scoring Criteria, Differentiating, User Experience, Responsible, and Scalability.
arXiv Detail & Related papers (2024-05-28T22:45:28Z)
- RELIC: Investigating Large Language Model Responses using Self-Consistency [58.63436505595177]
Large Language Models (LLMs) are notorious for blending fact with fiction and generating non-factual content, known as hallucinations.
We propose an interactive system that helps users gain insight into the reliability of the generated text.
arXiv Detail & Related papers (2023-11-28T14:55:52Z)
- DEMASQ: Unmasking the ChatGPT Wordsmith [63.8746084667206]
We propose an effective ChatGPT detector named DEMASQ, which accurately identifies ChatGPT-generated content.
Our method addresses two critical factors: (i) the distinct biases in text composition observed in human- and machine-generated content and (ii) the alterations made by humans to evade previous detection methods.
arXiv Detail & Related papers (2023-11-08T21:13:05Z)
- Perceived Trustworthiness of Natural Language Generators [0.0]
The paper addresses the problem of understanding how different users perceive and adopt Natural Language Generation tools.
It also discusses the perceived advantages and limitations of Natural Language Generation tools.
The paper sheds light on how different user characteristics shape their beliefs on the quality and overall trustworthiness of machine-generated text.
arXiv Detail & Related papers (2023-05-29T16:09:58Z)
- "HOT" ChatGPT: The promise of ChatGPT in detecting and discriminating hateful, offensive, and toxic comments on social media [2.105577305992576]
Generative AI models have the potential to understand and detect harmful content.
ChatGPT can achieve an accuracy of approximately 80% when compared to human annotations.
arXiv Detail & Related papers (2023-04-20T19:40:51Z)
- A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective [0.0]
User trust in Artificial Intelligence (AI) enabled systems has been increasingly recognized as a key factor in fostering adoption.
This review aims to provide an overview of the user trust definitions, influencing factors, and measurement methods from 23 empirical studies.
arXiv Detail & Related papers (2023-04-18T07:58:09Z)
- To ChatGPT, or not to ChatGPT: That is the question! [78.407861566006]
This study provides a comprehensive and contemporary assessment of the most recent techniques in ChatGPT detection.
We have curated a benchmark dataset consisting of prompts from ChatGPT and humans, including diverse questions from medical, open Q&A, and finance domains.
Our evaluation results demonstrate that none of the existing methods can effectively detect ChatGPT-generated content.
arXiv Detail & Related papers (2023-04-04T03:04:28Z)
- Consistency Analysis of ChatGPT [65.268245109828]
This paper investigates the trustworthiness of ChatGPT and GPT-4 regarding logically consistent behaviour.
Our findings suggest that while both models appear to show an enhanced language understanding and reasoning ability, they still frequently fall short of generating logically consistent predictions.
arXiv Detail & Related papers (2023-03-11T01:19:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.