Digital literacy interventions can boost humans in discerning deepfakes
- URL: http://arxiv.org/abs/2507.23492v1
- Date: Thu, 31 Jul 2025 12:23:45 GMT
- Title: Digital literacy interventions can boost humans in discerning deepfakes
- Authors: Dominique Geissler, Claire Robertson, Stefan Feuerriegel
- Abstract summary: Deepfakes, i.e., images generated by artificial intelligence (AI), can erode trust in institutions and compromise election outcomes. Here, we compare the efficacy of five digital literacy interventions to boost people's ability to discern deepfakes. Our results show that our interventions can boost deepfake discernment by up to 13 percentage points while maintaining trust in real images.
- Score: 20.57872238271025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deepfakes, i.e., images generated by artificial intelligence (AI), can erode trust in institutions and compromise election outcomes, as people often struggle to discern real images from deepfakes. Improving digital literacy can help address these challenges, yet scalable and effective approaches remain largely unexplored. Here, we compare the efficacy of five digital literacy interventions to boost people's ability to discern deepfakes: (1) textual guidance on common indicators of deepfakes; (2) visual demonstrations of these indicators; (3) a gamified exercise for identifying deepfakes; (4) implicit learning through repeated exposure and feedback; and (5) explanations of how deepfakes are generated with the help of AI. We conducted an experiment with N=1,200 participants from the United States to test the immediate and long-term effectiveness of our interventions. Our results show that our interventions can boost deepfake discernment by up to 13 percentage points while maintaining trust in real images. Altogether, our approach is scalable, suitable for diverse populations, and highly effective for boosting deepfake detection while maintaining trust in truthful information.
Related papers
- Seeing, Hearing, and Knowing Together: Multimodal Strategies in Deepfake Videos Detection [5.353466593055593]
We conducted a study with 195 participants who judged real and deepfake videos, rated their confidence, and reported the cues they relied on across visual, audio, and knowledge strategies. Participants were more accurate with real videos than with deepfakes and showed lower expected calibration error for real content. Our findings show which cues help or hinder detection and suggest directions for designing media literacy tools that guide effective cue use.
arXiv Detail & Related papers (2026-02-01T15:29:56Z) - DREAM: A Benchmark Study for Deepfake REalism AssessMent [12.366894730959809]
This paper presents a comprehensive benchmark called DREAM, which stands for Deepfake REalism AssessMent. It comprises a deepfake video dataset of diverse quality and a large-scale annotation that includes 140,000 realism scores and textual descriptions obtained from 3,500 human annotators.
arXiv Detail & Related papers (2025-10-11T06:41:49Z) - Knowledge-Guided Prompt Learning for Deepfake Facial Image Detection [54.26588902144298]
We propose a knowledge-guided prompt learning method for deepfake facial image detection. Specifically, we retrieve forgery-related prompts from large language models as expert knowledge to guide the optimization of learnable prompts. Our proposed approach notably outperforms state-of-the-art methods.
arXiv Detail & Related papers (2025-01-01T02:18:18Z) - Understanding Audiovisual Deepfake Detection: Techniques, Challenges, Human Factors and Perceptual Insights [49.81915942821647]
Deep Learning has been successfully applied in diverse fields, and its impact on deepfake detection is no exception.
Deepfakes are fake yet realistic synthetic content that can be used deceitfully for political impersonation, phishing, slandering, or spreading misinformation.
This paper aims to improve the effectiveness of deepfake detection strategies and guide future research in cybersecurity and media integrity.
arXiv Detail & Related papers (2024-11-12T09:02:11Z) - Deep Learning Technology for Face Forgery Detection: A Survey [17.519617618071003]
Deep learning has enabled the creation or manipulation of high-fidelity facial images and videos.
This technology, also known as deepfake, has achieved dramatic progress and become increasingly popular in social media.
To diminish the risks of deepfake, it is desirable to develop powerful forgery detection methods.
arXiv Detail & Related papers (2024-09-22T01:42:01Z) - Deepfake Media Forensics: State of the Art and Challenges Ahead [51.33414186878676]
AI-generated synthetic media, also called Deepfakes, have influenced many domains, from entertainment to cybersecurity.
Deepfake detection has become a vital area of research, focusing on identifying subtle inconsistencies and artifacts with machine learning techniques.
This paper reviews the primary algorithms that address these challenges, examining their advantages, limitations, and future prospects.
arXiv Detail & Related papers (2024-08-01T08:57:47Z) - EEG-Features for Generalized Deepfake Detection [3.7117930046173173]
We explore a novel approach to Deepfake detection by utilizing electroencephalography (EEG) measured from the neural processing of a human.
Preliminary results indicate that human neural processing signals can be successfully integrated into Deepfake detection frameworks.
Our study provides next steps towards the understanding of how digital realism is embedded in the human cognitive system.
arXiv Detail & Related papers (2024-05-14T12:06:44Z) - Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes [49.81915942821647]
This paper aims to evaluate the human ability to discern deepfake videos through a subjective study.
We present our findings by comparing human observers to five state-of-the-art audiovisual deepfake detection models.
We found that all AI models performed better than humans when evaluated on the same 40 videos.
arXiv Detail & Related papers (2024-05-07T07:57:15Z) - Common Sense Reasoning for Deepfake Detection [13.502008402754658]
State-of-the-art deepfake detection approaches rely on image-based features extracted via neural networks.
We frame deepfake detection as a Deepfake Detection VQA (DD-VQA) task and model human intuition.
We introduce a new annotated dataset and propose a Vision and Language Transformer-based framework for the DD-VQA task.
arXiv Detail & Related papers (2024-01-31T19:11:58Z) - Seeing is not always believing: Benchmarking Human and Model Perception of AI-Generated Images [66.20578637253831]
There is a growing concern that the advancement of artificial intelligence (AI) technology may produce fake photos.
This study aims to comprehensively evaluate agents for distinguishing state-of-the-art AI-generated visual content.
arXiv Detail & Related papers (2023-04-25T17:51:59Z) - Testing Human Ability To Detect Deepfake Images of Human Faces [0.0]
In 2020, a workshop of AI experts ranked deepfakes as the most serious AI threat.
This study aims to assess human ability to identify image deepfakes of human faces.
arXiv Detail & Related papers (2022-12-07T14:48:25Z) - Deepfake Caricatures: Amplifying attention to artifacts increases deepfake detection by humans and machines [16.41264978552925]
We introduce a framework for amplifying artifacts in deepfake videos to make them more detectable by people. In a user study, we demonstrate that Caricatures greatly increase human detection, across video presentation times and user engagement levels. We also introduce a deepfake detection model that incorporates the Artifact Attention module to increase its accuracy and robustness.
arXiv Detail & Related papers (2022-06-01T14:43:49Z) - Watch Those Words: Video Falsification Detection Using Word-Conditioned Facial Motion [82.06128362686445]
We propose a multi-modal semantic forensic approach to handle both cheapfakes and visually persuasive deepfakes.
We leverage the idea of attribution to learn person-specific biometric patterns that distinguish a given speaker from others.
Unlike existing person-specific approaches, our method is also effective against attacks that focus on lip manipulation.
arXiv Detail & Related papers (2021-12-21T01:57:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.