A Representative Study on Human Detection of Artificially Generated
Media Across Countries
- URL: http://arxiv.org/abs/2312.05976v1
- Date: Sun, 10 Dec 2023 19:34:52 GMT
- Title: A Representative Study on Human Detection of Artificially Generated
Media Across Countries
- Authors: Joel Frank, Franziska Herbert, Jonas Ricker, Lea Schönherr, Thorsten
Eisenhofer, Asja Fischer, Markus Dürmuth, Thorsten Holz
- Abstract summary: State-of-the-art forgeries are almost indistinguishable from "real" media.
The majority of participants simply guessed when asked to rate them as human- or machine-generated.
In addition, AI-generated media were rated as more human-like across all media types and all countries.
- Score: 28.99277150719848
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI-generated media has become a threat to our digital society as we know it.
These forgeries can be created automatically and on a large scale based on
publicly available technology. Recognizing this challenge, academics and
practitioners have proposed a multitude of automatic detection strategies to
detect such artificial media. However, in contrast to these technical advances,
the human perception of generated media has not been thoroughly studied yet.
In this paper, we aim to close this research gap. We perform the first
comprehensive survey into people's ability to detect generated media, spanning
three countries (USA, Germany, and China) with 3,002 participants across audio,
image, and text media. Our results indicate that state-of-the-art forgeries are
almost indistinguishable from "real" media, with the majority of participants
simply guessing when asked to rate them as human- or machine-generated. In
addition, AI-generated media are rated as more human-like across all media
types and all countries. To further understand which factors influence people's
ability to detect generated media, we include personal variables, chosen based
on a literature review in the domains of deepfake and fake news research. In a
regression analysis, we found that generalized trust, cognitive reflection, and
self-reported familiarity with deepfakes significantly influence participants'
decisions across all media categories.
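As an illustration of the two analyses the abstract describes, the sketch below shows, in Python on simulated placeholder data, how one could test overall detection accuracy against the 50% chance level and fit a logistic regression of correctness on the three personal variables. The column names (generalized_trust, crt_score, deepfake_familiarity) and the simulated data are illustrative assumptions, not the authors' actual survey materials or code.

```python
# Minimal sketch of the analyses described in the abstract, on SIMULATED
# data. Column names and scales are illustrative assumptions, not the
# authors' actual survey variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import binomtest

rng = np.random.default_rng(0)
n = 3002  # number of participants reported in the abstract

df = pd.DataFrame({
    "correct": rng.integers(0, 2, n),               # 1 = stimulus rated correctly
    "generalized_trust": rng.normal(0.0, 1.0, n),   # standardized trust score
    "crt_score": rng.integers(0, 4, n),             # cognitive reflection, 0-3
    "deepfake_familiarity": rng.integers(1, 6, n),  # self-report, 1-5 Likert
})

# (a) Is overall accuracy distinguishable from coin-flip guessing (p = 0.5)?
result = binomtest(int(df["correct"].sum()), n, p=0.5)
print(f"accuracy = {df['correct'].mean():.3f}, p vs. chance = {result.pvalue:.3f}")

# (b) Logistic regression: do the personal variables predict a correct rating?
model = smf.logit(
    "correct ~ generalized_trust + crt_score + deepfake_familiarity", data=df
).fit(disp=False)
print(model.summary())
```

On the random placeholder data, accuracy hovers near 0.5 and the coefficients near zero; the same pattern of calls applied to real survey responses is the kind of regression the abstract reports.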
Related papers
- Deepfake Media Generation and Detection in the Generative AI Era: A Survey and Outlook [101.30779332427217]
We survey deepfake generation and detection techniques, including the most recent developments in the field.
We identify various kinds of deepfakes, according to the procedure used to alter or generate the fake content.
We develop a novel multimodal benchmark to evaluate deepfake detectors on out-of-distribution content.
arXiv Detail & Related papers (2024-11-29T08:29:25Z)
- A Survey of Stance Detection on Social Media: New Directions and Perspectives [50.27382951812502]
Stance detection has emerged as a crucial subfield within affective computing.
Recent years have seen a surge of research interest in developing effective stance detection methods.
This paper provides a comprehensive survey of stance detection techniques on social media.
arXiv Detail & Related papers (2024-09-24T03:06:25Z)
- Deepfake Media Forensics: State of the Art and Challenges Ahead [51.33414186878676]
AI-generated synthetic media, also called deepfakes, have affected many domains, from entertainment to cybersecurity.
Deepfake detection has become a vital area of research, focusing on identifying subtle inconsistencies and artifacts with machine learning techniques.
This paper reviews the primary algorithms that address these challenges, examining their advantages, limitations, and future prospects.
arXiv Detail & Related papers (2024-08-01T08:57:47Z)
- AMMeBa: A Large-Scale Survey and Dataset of Media-Based Misinformation In-The-Wild [1.4193873432298625]
We show the results of a two-year study using human raters to annotate online media-based misinformation.
We show the rise of generative AI-based content in misinformation claims.
We also show that "simple" methods, particularly context manipulations, have historically dominated.
arXiv Detail & Related papers (2024-05-19T23:05:53Z)
- Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes [49.81915942821647]
This paper aims to evaluate the human ability to discern deepfake videos through a subjective study.
We present our findings by comparing human observers to five state-of-the-art audiovisual deepfake detection models.
We found that all AI models performed better than humans when evaluated on the same 40 videos.
arXiv Detail & Related papers (2024-05-07T07:57:15Z)
- As Good As A Coin Toss: Human detection of AI-generated images, videos, audio, and audiovisual stimuli [0.0]
The principal defense against being misled by synthetic media relies on the ability of the human observer to visually and auditorily discern between real and fake.
We conducted a perceptual study with 1276 participants to assess how accurately people could distinguish synthetic images, audio-only, video-only, and audiovisual stimuli from authentic ones.
arXiv Detail & Related papers (2024-03-25T13:39:33Z)
- The Media Bias Taxonomy: A Systematic Literature Review on the Forms and Automated Detection of Media Bias [5.579028648465784]
This article summarizes the research on computational methods to detect media bias by systematically reviewing 3140 research papers published between 2019 and 2022.
We show that media bias detection is a highly active research field, in which transformer-based classification approaches have led to significant improvements in recent years.
arXiv Detail & Related papers (2023-12-26T18:13:52Z)
- Seeing is not always believing: Benchmarking Human and Model Perception of AI-Generated Images [66.20578637253831]
There is a growing concern that the advancement of artificial intelligence (AI) technology may produce fake photos.
This study aims to comprehensively evaluate agents for distinguishing state-of-the-art AI-generated visual content.
arXiv Detail & Related papers (2023-04-25T17:51:59Z)
- Bias or Diversity? Unraveling Fine-Grained Thematic Discrepancy in U.S. News Headlines [63.52264764099532]
We use a large dataset of 1.8 million news headlines from major U.S. media outlets spanning from 2014 to 2022.
We quantify the fine-grained thematic discrepancy related to four prominent topics - domestic politics, economic issues, social issues, and foreign affairs.
Our findings indicate that on domestic politics and social issues, the discrepancy can be attributed to a certain degree of media bias.
arXiv Detail & Related papers (2023-03-28T03:31:37Z)
- Fighting Malicious Media Data: A Survey on Tampering Detection and Deepfake Detection [115.83992775004043]
Recent advances in deep learning, particularly deep generative models, open the doors for producing perceptually convincing images and videos at a low cost.
This paper provides a comprehensive review of the current media tampering detection approaches, and discusses the challenges and trends in this field for future research.
arXiv Detail & Related papers (2022-12-12T02:54:08Z)
- The Deepfake Detection Dilemma: A Multistakeholder Exploration of Adversarial Dynamics in Synthetic Media [0.0]
Synthetic media detection technologies label media as either synthetic or non-synthetic.
As detection practices become more accessible, they become more easily circumvented.
This work concludes that there is no "best" approach to navigating the detector dilemma.
arXiv Detail & Related papers (2021-02-11T16:44:09Z)