Banal Deception Human-AI Ecosystems: A Study of People's Perceptions of LLM-generated Deceptive Behaviour
- URL: http://arxiv.org/abs/2406.08386v1
- Date: Wed, 12 Jun 2024 16:36:06 GMT
- Title: Banal Deception Human-AI Ecosystems: A Study of People's Perceptions of LLM-generated Deceptive Behaviour
- Authors: Xiao Zhan, Yifan Xu, Noura Abdi, Joe Collenette, Ruba Abu-Salma, Stefan Sarkadi
- Abstract summary: Large language models (LLMs) can provide users with false, inaccurate, or misleading information.
We investigate people's perceptions of ChatGPT-generated deceptive behaviour.
- Score: 11.285775969393566
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) can provide users with false, inaccurate, or misleading information, and we consider the output of this type of information as what Natale (2021) calls 'banal' deceptive behaviour. Here, we investigate people's perceptions of ChatGPT-generated deceptive behaviour and how this affects people's own behaviour and trust. To do this, we use a mixed-methods approach comprising (i) an online survey with 220 participants and (ii) semi-structured interviews with 12 participants. Our results show that (i) the most common types of deceptive information encountered were over-simplifications and outdated information; (ii) humans' perceptions of trust and 'worthiness' of talking to ChatGPT are impacted by 'banal' deceptive behaviour; (iii) the perceived responsibility for deception is influenced by education level and the frequency of deceptive information; and (iv) users become more cautious after encountering deceptive information, but they come to trust the technology more when they identify advantages of using it. Our findings contribute to the understanding of human-AI interaction dynamics in the context of Deceptive AI Ecosystems, and highlight the importance of user-centric approaches to mitigating the potential harms of deceptive AI technologies.
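Finding (iii) above links perceived responsibility for deception to education level and to how often deceptive information is encountered. Purely as an illustration of how such a survey association could be examined, the sketch below fits a simple regression; the input file, column names, and the choice of OLS on a Likert-scale outcome are assumptions, not the authors' actual analysis.

```python
# Minimal sketch, not the paper's analysis: relating perceived responsibility
# to education level and frequency of encountering deceptive output.
# The CSV file and all column names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed survey export: one row per participant.
#   responsibility       - Likert rating (1-5) of provider responsibility
#   education            - categorical (e.g. "high_school", "bachelor", "postgrad")
#   deception_frequency  - how often deceptive output was encountered (1-5)
df = pd.read_csv("survey_responses.csv")

# Treat education as categorical; plain OLS stands in here for an ordinal model.
model = smf.ols("responsibility ~ C(education) + deception_frequency", data=df).fit()
print(model.summary())
```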
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Trust No Bot: Discovering Personal Disclosures in Human-LLM Conversations in the Wild [40.57348900292574]
Measuring personal disclosures made in human-chatbot interactions can provide a better understanding of users' AI literacy.
We run an extensive, fine-grained analysis on the personal disclosures made by real users to commercial GPT models.
arXiv Detail & Related papers (2024-07-16T07:05:31Z)
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- Improving the Robustness of Knowledge-Grounded Dialogue via Contrastive Learning [71.8876256714229]
We propose an entity-based contrastive learning framework for improving the robustness of knowledge-grounded dialogue systems.
Our method achieves new state-of-the-art performance in terms of automatic evaluation scores.
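The entry above proposes an entity-based contrastive learning framework for knowledge-grounded dialogue. The paper's exact objective is not reproduced here; the sketch below is a generic InfoNCE-style contrastive loss of the kind such frameworks commonly build on, with all tensor names, shapes, and the temperature value assumed for illustration.

```python
# Generic InfoNCE-style contrastive loss (illustrative sketch, not the paper's method).
import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """anchor, positive: (batch, dim); negatives: (batch, n_neg, dim)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_logits = (anchor * positive).sum(dim=-1, keepdim=True)   # (batch, 1)
    neg_logits = torch.einsum("bd,bnd->bn", anchor, negatives)   # (batch, n_neg)

    logits = torch.cat([pos_logits, neg_logits], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long)       # positive sits at index 0
    return F.cross_entropy(logits, labels)
```

In this formulation, each anchor representation is pulled toward its positive (e.g. an entity-consistent counterpart) and pushed away from the sampled negatives.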
arXiv Detail & Related papers (2024-01-09T05:16:52Z)
- Do You Trust ChatGPT? -- Perceived Credibility of Human and AI-Generated Content [0.8602553195689513]
This paper examines how individuals perceive the credibility of content originating from human authors versus content generated by large language models.
Surprisingly, our results demonstrate that, regardless of the user interface presentation, participants tend to attribute similar levels of credibility to human-authored and AI-generated content.
Participants also do not report any different perceptions of competence and trustworthiness between human and AI-generated content.
arXiv Detail & Related papers (2023-09-05T18:29:29Z)
- Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work [0.0]
The present structured literature analysis examines the requirements for the explainability and acceptance of AI.
Results indicate that one of the two main groups of users is developers, who require information about the internal operations of the model.
The acceptance of AI systems depends on information about the system's functions and performance, as well as on privacy and ethical considerations.
arXiv Detail & Related papers (2023-06-27T11:36:07Z)
- Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z)
- The Effects of Interactive AI Design on User Behavior: An Eye-tracking Study of Fact-checking COVID-19 Claims [12.00747200817161]
We conducted a lab-based eye-tracking study to investigate how the interactivity of an AI-powered fact-checking system affects user interactions.
We found that the ability to interactively manipulate the AI system's prediction parameters affected users' dwell times and eye fixations on areas of interest (AOIs), but not their mental workload.
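As a rough illustration of how dwell-time measures of this kind are typically derived, the sketch below aggregates fixation durations per AOI; the input file and column names are assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch: per-AOI dwell time and fixation counts from a fixation log.
# The CSV file and column names are assumed, not taken from the study.
import pandas as pd

# Assumed fixation log: one row per fixation event, with columns
#   participant, condition ("interactive"/"static"), aoi, duration_ms
fixations = pd.read_csv("fixations.csv")

dwell = (
    fixations
    .groupby(["participant", "condition", "aoi"], as_index=False)
    .agg(dwell_time_ms=("duration_ms", "sum"),
         fixation_count=("duration_ms", "size"))
)
print(dwell.head())
```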
arXiv Detail & Related papers (2022-02-17T21:08:57Z)
- Characterizing User Susceptibility to COVID-19 Misinformation on Twitter [40.0762273487125]
This study attempts to answer who constitutes the population vulnerable to online misinformation during the pandemic.
We distinguish different types of users, ranging from social bots to humans with various levels of engagement with COVID-related misinformation.
We then identify users' online features and situational predictors that correlate with their susceptibility to COVID-19 misinformation.
arXiv Detail & Related papers (2021-09-20T13:31:15Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Interventional Emotion Recognition Network (IERN) to alleviate the negative effects brought by the dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
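The four detectors themselves are not specified in this summary; as a stand-in, the sketch below shows one generic interpretable setup (TF-IDF features with logistic regression) whose weights can be surfaced as explanations. The toy data and the model choice are assumptions, not the paper's configuration.

```python
# Illustrative interpretable fake-news classifier: TF-IDF + logistic regression,
# with the largest coefficients surfaced as a simple global explanation.
# The toy data below is invented for the example.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "miracle cure doctors don't want you to know about",
    "city council approves new transit budget",
    "celebrity secretly replaced by clone, insiders claim",
    "university publishes annual enrollment statistics",
]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = real (toy labels)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Terms whose weights push predictions toward the "fake" class.
terms = np.array(vectorizer.get_feature_names_out())
top = np.argsort(clf.coef_[0])[-5:]
print("terms pushing toward 'fake':", terms[top])
```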
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.