The effect of source disclosure on evaluation of AI-generated messages:
A two-part study
- URL: http://arxiv.org/abs/2311.15544v2
- Date: Tue, 28 Nov 2023 02:04:58 GMT
- Title: The effect of source disclosure on evaluation of AI-generated messages:
A two-part study
- Authors: Sue Lim, Ralf Schmälzle
- Abstract summary: We examined the influence of source disclosure on people's evaluation of AI-generated health prevention messages.
We found that source disclosure significantly impacted the evaluation of the messages but did not significantly alter message rankings.
For those with moderate levels of negative attitudes towards AI, source disclosure decreased the preference for AI-generated messages.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advancements in artificial intelligence (AI) over the last decade demonstrate
that machines can exhibit communicative behavior and influence how humans
think, feel, and behave. In fact, the recent development of ChatGPT has shown
that large language models (LLMs) can be leveraged to generate high-quality
communication content at scale and across domains, suggesting that they will be
increasingly used in practice. However, many questions remain about how knowing
the source of the messages influences recipients' evaluation of and preference
for AI-generated messages compared to human-generated messages. This paper
investigated this topic in the context of vaping prevention messaging. In Study
1, which was pre-registered, we examined the influence of source disclosure on
people's evaluation of AI-generated health prevention messages compared to
human-generated messages. We found that source disclosure (i.e., labeling the
source of a message as AI vs. human) significantly impacted the evaluation of
the messages but did not significantly alter message rankings. In a follow-up
study (Study 2), we examined how the influence of source disclosure may vary
with participants' negative attitudes towards AI. We found a significant
moderating effect of negative attitudes towards AI on message evaluation, but
not on message selection. However, for those with moderate levels of negative
attitudes towards AI, source disclosure decreased the preference for
AI-generated messages. Overall, the results of this series of studies showed a
slight bias against AI-generated messages once the source was disclosed, adding
to the emerging area of study that lies at the intersection of AI and
communication.
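The abstract reports a moderation analysis (source disclosure x negative attitudes towards AI) but does not give the model specification. Below is a minimal sketch of how such a moderation test is commonly specified, assuming an OLS regression with an interaction term; the variable names (source_label, neg_attitude, evaluation) and the simulated data are ours, not the paper's.

    # Hypothetical moderation analysis: does negative attitude towards AI
    # moderate the effect of source disclosure on message evaluation?
    # All data are simulated; this is NOT the authors' actual model or data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({
        # 1 = message labeled as AI-generated, 0 = labeled as human-generated
        "source_label": rng.integers(0, 2, n),
        # negative attitude towards AI, e.g. a 1-5 scale score
        "neg_attitude": rng.uniform(1, 5, n),
    })
    # Simulate ratings where the AI label hurts more as negative attitude rises
    df["evaluation"] = (
        4.0
        - 0.10 * df["source_label"]
        - 0.30 * df["source_label"] * (df["neg_attitude"] - 3)
        + rng.normal(0, 0.5, n)
    )

    # The interaction term source_label:neg_attitude carries the moderation
    # effect; a significant coefficient means the effect of disclosure
    # depends on the participant's attitude.
    model = smf.ols("evaluation ~ source_label * neg_attitude", data=df).fit()
    print(model.summary())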
Related papers
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI- versus human-generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- Measuring Human Contribution in AI-Assisted Content Generation [68.03658922067487]
This study raises the research question of measuring human contribution in AI-assisted content generation.
By calculating the mutual information between the human input and the AI-assisted output relative to the self-information of the AI-assisted output, we quantify the proportional information contribution of humans in content generation (a sketch of this measure appears after this list).
arXiv Detail & Related papers (2024-08-27T05:56:04Z)
- Comparing Large Language Model AI and Human-Generated Coaching Messages for Behavioral Weight Loss [5.824523259910306]
Large language model (LLM) based artificial intelligence (AI) chatbots could offer more personalized and novel messages.
87 adults in a weight-loss trial rated the helpfulness of ten coaching messages on a 5-point Likert scale.
arXiv Detail & Related papers (2023-12-07T05:45:24Z)
- Towards Possibilities & Impossibilities of AI-generated Text Detection: A Survey [97.33926242130732]
Large Language Models (LLMs) have revolutionized the domain of natural language processing (NLP) with remarkable capabilities for generating human-like text responses.
Despite these advancements, several works in the existing literature have raised serious concerns about the potential misuse of LLMs.
To address these concerns, a consensus among the research community is to develop algorithmic solutions to detect AI-generated text.
arXiv Detail & Related papers (2023-10-23T18:11:32Z)
- Measuring the Effect of Influential Messages on Varying Personas [67.1149173905004]
We present a new task, Response Forecasting on Personas for News Media, to estimate the response a persona might have upon seeing a news message.
The proposed task not only introduces personalization into the modeling but also predicts the sentiment polarity and intensity of each response.
This enables more accurate and comprehensive inference on the mental state of the persona.
arXiv Detail & Related papers (2023-05-25T21:01:00Z)
- Artificial Intelligence for Health Message Generation: Theory, Method, and an Empirical Study Using Prompt Engineering [0.0]
This study introduces and examines the potential of an AI system to generate health awareness messages.
The topic of folic acid, a vitamin that is critical during pregnancy, served as a test case.
We generated messages that could be used to raise awareness and compared them to retweeted human-generated messages.
arXiv Detail & Related papers (2022-12-14T21:13:08Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges, and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Role of Human-AI Interaction in Selective Prediction [20.11364033416315]
We study the impact of communicating different types of information to humans about the AI system's decision to defer.
We show that human performance can be significantly boosted by informing the human of the decision to defer without revealing the AI's prediction.
arXiv Detail & Related papers (2021-12-13T16:03:13Z)
- Fairness via AI: Bias Reduction in Medical Information [3.254836540242099]
We propose a novel framework of Fairness via AI, inspired by insights from medical education, sociology, and antiracism.
We propose using AI to study, detect, and mitigate biased, harmful, and/or false health information that disproportionately hurts minority groups in society.
arXiv Detail & Related papers (2021-09-06T01:39:48Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms on combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental models, trust, and performance measures in the explanation process.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
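The mutual-information measure in the "Measuring Human Contribution in AI-Assisted Content Generation" entry above is described only in words. A plausible formalization, using our own notation (X for the human input, Y for the AI-assisted output; the paper's exact estimator may differ):

    \text{human contribution} \;=\; \frac{I(X; Y)}{H(Y)} \;=\; \frac{H(Y) - H(Y \mid X)}{H(Y)}

Here H(Y) is the self-information (entropy) of the AI-assisted output and I(X; Y) is the mutual information between input and output. The ratio lies in [0, 1]: it approaches 1 when the output is almost fully determined by the human input, and 0 when the output carries no information about it.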
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.