Seeing Isn't Believing: Addressing the Societal Impact of Deepfakes in Low-Tech Environments
- URL: http://arxiv.org/abs/2508.16618v1
- Date: Wed, 13 Aug 2025 18:18:24 GMT
- Title: Seeing Isn't Believing: Addressing the Societal Impact of Deepfakes in Low-Tech Environments
- Authors: Azmine Toushik Wasi, Rahatun Nesa Priti, Mahir Absar Khan, Abdur Rahman, Mst Rafia Islam
- Abstract summary: Deepfakes pose significant risks to political stability, social trust, and economic well-being. This work aims to understand how these technologies are perceived and impact resource-limited communities.
- Score: 5.183876599841578
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deepfakes, AI-generated multimedia content that mimics real media, are becoming increasingly prevalent, posing significant risks to political stability, social trust, and economic well-being, especially in developing societies with limited media literacy and technological infrastructure. This work aims to understand how these technologies are perceived and impact resource-limited communities. We conducted a survey to assess public awareness, perceptions, and experiences with deepfakes, leading to the development of a comprehensive framework for prevention, detection, and mitigation in tech-limited environments. Our findings reveal critical knowledge gaps and a lack of effective detection tools, emphasizing the need for targeted education and accessible verification solutions. This work offers actionable insights to support vulnerable populations and calls for further interdisciplinary efforts to tackle deepfake challenges globally, particularly in the Global South.
Related papers
- Information Access of the Oppressed: A Problem-Posing Framework for Envisioning Emancipatory Information Access Platforms [5.801539233803859]
Online information access platforms are targets of authoritarian capture. We explore this question through the lens of Paulo Freire's theories of emancipatory pedagogy.
arXiv Detail & Related papers (2026-01-14T16:15:26Z) - Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z) - TRIED: Truly Innovative and Effective AI Detection Benchmark, developed by WITNESS [0.0]
WITNESS introduces the Truly Innovative and Effective AI Detection (TRIED) Benchmark. The report outlines how detection tools must evolve to become truly innovative and relevant. It offers practical guidance for developers, policy actors, and standards bodies to design accountable, transparent, and user-centered detection solutions.
arXiv Detail & Related papers (2025-04-30T10:18:19Z) - Open Problems in Mechanistic Interpretability [61.44773053835185]
Mechanistic interpretability aims to understand the computational mechanisms underlying neural networks' capabilities. Despite recent progress toward these goals, there are many open problems in the field that require solutions.
arXiv Detail & Related papers (2025-01-27T20:57:18Z) - Deepfake Technology Unveiled: The Commoditization of AI and Its Impact on Digital Trust [1.1402735220778926]
Deepfake technology enables fraud, misinformation, and the erosion of authenticity in multimedia. Using cost-effective, easy-to-use tools such as Runway, Rope, and ElevenLabs, we explore how realistic deepfakes can be created with limited resources. We emphasize the urgent need for regulatory frameworks, public awareness, and collaborative efforts to maintain trust in digital media.
arXiv Detail & Related papers (2025-01-24T18:02:49Z) - Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks. In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z) - Understanding Audiovisual Deepfake Detection: Techniques, Challenges, Human Factors and Perceptual Insights [49.81915942821647]
Deep Learning has been successfully applied in diverse fields, and its impact on deepfake detection is no exception.
Deepfakes are fake yet realistic synthetic content that can be used deceitfully for political impersonation, phishing, slandering, or spreading misinformation.
This paper aims to improve the effectiveness of deepfake detection strategies and guide future research in cybersecurity and media integrity.
arXiv Detail & Related papers (2024-11-12T09:02:11Z) - A Survey of Stance Detection on Social Media: New Directions and Perspectives [50.27382951812502]
Stance detection has emerged as a crucial subfield within affective computing.
Recent years have seen a surge of research interest in developing effective stance detection methods.
This paper provides a comprehensive survey of stance detection techniques on social media.
arXiv Detail & Related papers (2024-09-24T03:06:25Z) - Misrepresented Technological Solutions in Imagined Futures: The Origins and Dangers of AI Hype in the Research Community [0.060998359915727114]
We look at the origins and risks of AI hype to the research community and society more broadly.
We propose a set of measures that researchers, regulators, and the public can take to mitigate these risks and reduce the prevalence of unfounded claims about the technology.
arXiv Detail & Related papers (2024-08-08T20:47:17Z) - Deepfake Media Forensics: State of the Art and Challenges Ahead [51.33414186878676]
AI-generated synthetic media, also called Deepfakes, have influenced many domains, from entertainment to cybersecurity.
Deepfake detection has become a vital area of research, focusing on identifying subtle inconsistencies and artifacts with machine learning techniques.
This paper reviews the primary algorithms that address these challenges, examining their advantages, limitations, and future prospects.
arXiv Detail & Related papers (2024-08-01T08:57:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.