Labeling Synthetic Content: User Perceptions of Warning Label Designs for AI-generated Content on Social Media
- URL: http://arxiv.org/abs/2503.05711v1
- Date: Fri, 14 Feb 2025 10:35:42 GMT
- Title: Labeling Synthetic Content: User Perceptions of Warning Label Designs for AI-generated Content on Social Media
- Authors: Dilrukshi Gamage, Dilki Sewwandi, Min Zhang, Arosha Bandara,
- Abstract summary: We devised and assessed ten distinct label design samples that varied across the dimensions of sentiment, color/iconography, positioning, and level of detail. Our experimental study involved 911 participants randomly assigned to these ten label designs and a control group evaluating social media content. The results demonstrate that the presence of labels had a significant effect on users' belief that the content is AI-generated, a deepfake, or edited by AI.
- Score: 16.5125333136211
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this research, we explored the efficacy of various warning label designs for AI-generated content (e.g., deepfakes) on social media platforms. We devised and assessed ten distinct label design samples that varied across the dimensions of sentiment, color/iconography, positioning, and level of detail. Our experimental study involved 911 participants randomly assigned to these ten label designs and a control group evaluating social media content. We explored their perceptions relating to (1) belief that the content is AI-generated, (2) trust in the labels, and (3) social media engagement perceptions of the content. The results demonstrate that the presence of labels had a significant effect on users' belief that the content is AI-generated, a deepfake, or edited by AI. However, their trust in the label varied significantly with the label design. Notably, having labels did not significantly change engagement behaviors such as liking, commenting, and sharing, although there were significant differences in engagement based on content type (political vs. entertainment). This investigation contributes to the field of human-computer interaction by defining a design space for label implementation and providing empirical support for the strategic use of labels to mitigate the risks associated with synthetically generated media.
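The between-group comparison described in the abstract (belief ratings across label conditions versus a control group) can be sketched as a one-way ANOVA. The condition names and ratings below are invented placeholders for illustration, not the study's actual data or analysis code.

```python
# Hypothetical sketch: comparing mean belief ratings ("this content is
# AI-generated", e.g. on a 1-5 scale) across label-design conditions with a
# one-way ANOVA, using only the standard library. All data here is made up.

def one_way_anova(groups):
    """Return (F statistic, df_between, df_within) for a dict of samples."""
    all_vals = [v for g in groups.values() for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group sum of squares: deviation of group means from grand mean.
    ss_between = sum(
        len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups.values()
    )
    # Within-group sum of squares: deviation of values from their group mean.
    ss_within = sum(
        sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups.values()
    )
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Placeholder conditions (the real study used ten designs plus a control).
ratings = {
    "control":       [2, 3, 2, 3, 2],  # no label shown
    "neutral_label": [4, 4, 5, 3, 4],  # plain "AI-generated" tag
    "warning_label": [5, 4, 5, 5, 4],  # warning icon plus explanatory text
}
f, dfb, dfw = one_way_anova(ratings)
print(f"F({dfb}, {dfw}) = {f:.2f}")
```

A large F relative to the F-distribution's critical value for (df_between, df_within) would indicate that mean belief differs across conditions, which is the kind of effect the paper reports for the presence of labels.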
Related papers
- Sanitizing Manufacturing Dataset Labels Using Vision-Language Models [1.0819408603463427]
This paper introduces Vision-Language Sanitization and Refinement (VLSR), a vision-language-based framework for label sanitization and refinement. The method embeds both images and their associated textual labels into a shared semantic space by leveraging the CLIP vision-language model. Experimental results demonstrate that the VLSR framework successfully identifies problematic labels and improves label consistency.
arXiv Detail & Related papers (2025-06-30T02:13:09Z) - Security Benefits and Side Effects of Labeling AI-Generated Images [27.771584371064968]
We study the implications of labels, including the possibility of mislabeling. We conduct a pre-registered online survey with over 1300 U.S. and EU participants. We find the undesired side effect that human-made images conveying inaccurate claims were perceived as more credible in the presence of labels.
arXiv Detail & Related papers (2025-05-28T20:24:45Z) - Labeling Messages as AI-Generated Does Not Reduce Their Persuasive Effects [33.16943695290958]
One prominent policy proposal requires explicitly labeling AI-generated content to increase transparency and encourage critical thinking about the information.
We conducted a survey experiment on a diverse sample of Americans.
We found that messages were generally persuasive, influencing participants' views of the policies by 9.74 percentage points on average.
arXiv Detail & Related papers (2025-04-14T04:22:39Z) - Mixed Blessing: Class-Wise Embedding guided Instance-Dependent Partial Label Learning [53.64180787439527]
In partial label learning (PLL), every sample is associated with a candidate label set comprising the ground-truth label and several noisy labels. For the first time, we create class-wise embeddings for each sample, which allow us to explore the relationship of instance-dependent noisy labels. To reduce the high label ambiguity, we introduce the concept of class prototypes containing global feature information.
arXiv Detail & Related papers (2024-12-06T13:25:39Z) - Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI- versus human-generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - The impact of labeling automotive AI as "trustworthy" or "reliable" on user evaluation and technology acceptance [0.0]
This study explores whether labeling AI as "trustworthy" or "reliable" influences user perceptions and acceptance of automotive AI technologies.
Using a one-way between-subjects design, the research involved 478 online participants who were presented with guidelines for either trustworthy or reliable AI.
Although labeling AI as "trustworthy" did not significantly influence judgments on specific scenarios, it increased perceived ease of use and human-like trust, particularly benevolence.
arXiv Detail & Related papers (2024-08-20T14:48:24Z) - Leveraging Label Information for Multimodal Emotion Recognition [22.318092635089464]
Multimodal emotion recognition (MER) aims to detect the emotional status of a given expression by combining the speech and text information.
We propose a novel approach for MER by leveraging label information.
We devise a novel label-guided attentive fusion module to fuse the label-aware text and speech representations for emotion classification.
arXiv Detail & Related papers (2023-09-05T10:26:32Z) - Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations [91.67511167969934]
Imprecise label learning (ILL) is a framework for the unification of learning with various imprecise label configurations.
We demonstrate that ILL can seamlessly adapt to partial label learning, semi-supervised learning, noisy label learning, and, more importantly, a mixture of these settings.
arXiv Detail & Related papers (2023-05-22T04:50:28Z) - Privacy-Aware Crowd Labelling for Machine Learning Tasks [3.6930948691311007]
We propose a privacy-preserving text labelling method for varying applications, based on crowdsourcing.
We transform text with different levels of privacy, and analyse the effectiveness of the transformation with regard to label correlation and consistency.
arXiv Detail & Related papers (2022-02-03T18:14:45Z) - Learning to Aggregate and Refine Noisy Labels for Visual Sentiment Analysis [69.48582264712854]
We propose a robust learning method to perform robust visual sentiment analysis.
Our method relies on an external memory to aggregate and filter noisy labels during training.
We establish a benchmark for visual sentiment analysis with label noise using publicly available datasets.
arXiv Detail & Related papers (2021-09-15T18:18:28Z) - A Study on the Autoregressive and non-Autoregressive Multi-label Learning [77.11075863067131]
We propose a self-attention-based variational encoder model to extract the label-label and label-feature dependencies jointly.
Our model can therefore be used to predict all labels in parallel while still including both label-label and label-feature dependencies.
arXiv Detail & Related papers (2020-12-03T05:41:44Z) - Exploiting Context for Robustness to Label Noise in Active Learning [47.341705184013804]
We address the problems of how a system can identify which of the queried labels are wrong and how a multi-class active learning system can be adapted to minimize the negative impact of label noise.
We construct a graphical representation of the unlabeled data to encode these relationships and obtain new beliefs on the graph when noisy labels are available.
This is demonstrated in three different applications: scene classification, activity classification, and document classification.
arXiv Detail & Related papers (2020-10-18T18:59:44Z) - Modeling Label Semantics for Predicting Emotional Reactions [21.388457946558976]
Predicting how events induce emotions in the characters of a story is typically seen as a standard multi-label classification task.
We propose that the semantics of emotion labels can guide a model's attention when representing the input story.
We explicitly model label classes via label embeddings, and add mechanisms that track label-label correlations both during training and inference.
arXiv Detail & Related papers (2020-06-09T20:04:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.