The Influencer Next Door: How Misinformation Creators Use GenAI
- URL: http://arxiv.org/abs/2405.13554v3
- Date: Tue, 18 Jun 2024 09:29:48 GMT
- Title: The Influencer Next Door: How Misinformation Creators Use GenAI
- Authors: Amelia Hassoun, Ariel Abonizio, Katy Osborn, Cameron Wu, Beth Goldberg
- Abstract summary: We find that non-experts increasingly use GenAI to remix, repackage, and (re)produce content to meet their personal needs and desires.
We analyze how these understudied emergent uses of GenAI produce new or accelerated misinformation harms.
- Score: 1.1650821883155187
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Advances in generative AI (GenAI) have raised concerns about detecting and discerning AI-generated content from human-generated content. Most existing literature assumes a paradigm where 'expert' organized disinformation creators and flawed AI models deceive 'ordinary' users. Based on longitudinal ethnographic research with misinformation creators and consumers between 2022-2023, we instead find that GenAI supports bricolage work, where non-experts increasingly use GenAI to remix, repackage, and (re)produce content to meet their personal needs and desires. This research yielded four key findings: First, participants primarily used GenAI for creation, rather than truth-seeking. Second, a spreading 'influencer millionaire' narrative drove participants to become content creators, using GenAI as a productivity tool to generate a volume of (often misinformative) content. Third, GenAI lowered the barrier to entry for content creation across modalities, enticing consumers to become creators and significantly increasing existing creators' output. Finally, participants used GenAI to learn and deploy marketing tactics to expand engagement and monetize their content. We argue for shifting analysis from the public as consumers of AI content to bricoleurs who use GenAI creatively, often without a detailed understanding of its underlying technology. We analyze how these understudied emergent uses of GenAI produce new or accelerated misinformation harms, and their implications for AI products, platforms and policies.
Related papers
- Misconceptions, Pragmatism, and Value Tensions: Evaluating Students' Understanding and Perception of Generative AI for Education [0.11704154007740832]
Students are early adopters of the technology, utilizing it in atypical ways.
Students were asked to describe 1) their understanding of GenAI; 2) their use of GenAI; 3) their opinions on the benefits, downsides, and ethical issues pertaining to its use in education.
arXiv Detail & Related papers (2024-10-29T17:41:06Z) - Hey GPT, Can You be More Racist? Analysis from Crowdsourced Attempts to Elicit Biased Content from Generative AI [41.96102438774773]
This work presents the findings from a university-level competition, which challenged participants to design prompts for eliciting biased outputs from GenAI tools.
We quantitatively and qualitatively analyze the competition submissions and identify a diverse set of biases in GenAI and strategies employed by participants to induce bias in GenAI.
arXiv Detail & Related papers (2024-10-20T18:44:45Z) - Measuring Human Contribution in AI-Assisted Content Generation [68.03658922067487]
This study raises the research question of measuring human contribution in AI-assisted content generation.
By calculating mutual information between human input and AI-assisted output relative to self-information of AI-assisted output, we quantify the proportional information contribution of humans in content generation.
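As an illustrative sketch of the ratio described above (an assumed form, not necessarily the paper's exact definition), with human input $X$ and AI-assisted output $Y$:
$$
\mathrm{HumanContribution}(X, Y) \;\approx\; \frac{I(X; Y)}{H(Y)} \;=\; \frac{H(Y) - H(Y \mid X)}{H(Y)},
$$
where the entropy terms would in practice be estimated from a language model's token probabilities for the output.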
arXiv Detail & Related papers (2024-08-27T05:56:04Z) - Generative artificial intelligence in dentistry: Current approaches and future challenges [0.0]
Generative AI (GenAI) models bridge the usability gap of AI by providing a natural language interface to interact with complex models.
In dental education, students now have the opportunity to answer a plethora of questions simply by prompting a GenAI model.
GenAI can also be used in dental research, where the applications range from new drug discovery to assistance in academic writing.
arXiv Detail & Related papers (2024-07-24T03:33:47Z) - Explainable Generative AI (GenXAI): A Survey, Conceptualization, and Research Agenda [1.8592384822257952]
We elaborate on why XAI has gained importance with the rise of GenAI and its challenges for explainability research.
We also unveil novel and emerging desiderata that explanations should fulfill, covering aspects such as verifiability, interactivity, security, and cost.
arXiv Detail & Related papers (2024-04-15T08:18:16Z) - Prompt Smells: An Omen for Undesirable Generative AI Outputs [4.105236597768038]
We propose two new concepts that will aid the research community in addressing limitations associated with the application of GenAI models.
First, we propose a definition for the "desirability" of GenAI outputs and three factors which are observed to influence it.
Second, drawing inspiration from Martin Fowler's code smells, we propose the concept of "prompt smells" and the adverse effects they are observed to have on the desirability of GenAI outputs.
arXiv Detail & Related papers (2024-01-23T10:10:01Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - Identifying and Mitigating the Security Risks of Generative AI [179.2384121957896]
This paper reports the findings of a workshop held at Google on the dual-use dilemma posed by GenAI.
GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks.
We discuss short-term and long-term goals for the community on this topic.
arXiv Detail & Related papers (2023-08-28T18:51:09Z) - A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT [63.58711128819828]
ChatGPT and other Generative AI (GAI) techniques belong to the category of Artificial Intelligence Generated Content (AIGC).
The goal of AIGC is to make the content creation process more efficient and accessible, allowing for the production of high-quality content at a faster pace.
arXiv Detail & Related papers (2023-03-07T20:36:13Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.