The Verification Crisis: Expert Perceptions of GenAI Disinformation and the Case for Reproducible Provenance
- URL: http://arxiv.org/abs/2602.02100v1
- Date: Mon, 02 Feb 2026 13:45:12 GMT
- Title: The Verification Crisis: Expert Perceptions of GenAI Disinformation and the Case for Reproducible Provenance
- Authors: Alexander Loth, Martin Kappes, Marc-Oliver Pahl
- Abstract summary: This article presents findings from the first wave of a longitudinal expert perception survey (N=21) involving AI researchers, policymakers, and disinformation specialists. It examines the perceived severity of multimodal threats -- text, image, audio, and video -- and evaluates current mitigation strategies. Results indicate that while deepfake video presents immediate "shock" value, large-scale text generation poses a systemic risk of "epistemic fragmentation"
- Score: 47.03825808787752
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The growth of Generative Artificial Intelligence (GenAI) has shifted disinformation production from manual fabrication to automated, large-scale manipulation. This article presents findings from the first wave of a longitudinal expert perception survey (N=21) involving AI researchers, policymakers, and disinformation specialists. It examines the perceived severity of multimodal threats -- text, image, audio, and video -- and evaluates current mitigation strategies. Results indicate that while deepfake video presents immediate "shock" value, large-scale text generation poses a systemic risk of "epistemic fragmentation" and "synthetic consensus," particularly in the political domain. The survey reveals skepticism about technical detection tools, with experts favoring provenance standards and regulatory frameworks despite implementation barriers. GenAI disinformation research requires reproducible methods. The current challenge is measurement: without standardized benchmarks and reproducibility checklists, tracking and countering synthetic media remain difficult. We propose treating information integrity as infrastructure, with rigor in data provenance and methodological reproducibility.
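The provenance standards the experts favor rest on one mechanism: cryptographically binding a content hash to an origin claim so that later tampering is detectable. The sketch below illustrates that idea with a hypothetical manifest signed via an HMAC; it is not the paper's method, and real standards such as C2PA use X.509 certificate chains and embedded manifests rather than a shared secret key.

```python
import hashlib
import hmac
import json

# Placeholder key for illustration only; a deployed provenance
# system would use asymmetric keys and a certificate chain (PKI).
SECRET_KEY = b"demo-signing-key"

def make_manifest(content: bytes, creator: str, tool: str) -> dict:
    """Build a hypothetical provenance claim and sign it."""
    claim = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "tool": tool,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the content matches the claim and the claim is unaltered."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    if claim["sha256"] != hashlib.sha256(content).hexdigest():
        return False  # content was modified after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

article = b"Synthetic media needs traceable origins."
m = make_manifest(article, creator="newsroom", tool="gen-model")
print(verify_manifest(article, m))         # True: content and claim intact
print(verify_manifest(article + b"!", m))  # False: content no longer matches
```

The design point is that verification requires no judgment about whether the content "looks" synthetic, which is exactly why surveyed experts prefer provenance over post-hoc detection.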
Related papers
- Benchmarking Knowledge-Extraction Attack and Defense on Retrieval-Augmented Generation [50.87199039334856]
Retrieval-Augmented Generation (RAG) has become a cornerstone of knowledge-intensive applications. Recent studies show that knowledge-extraction attacks can recover sensitive knowledge-base content through maliciously crafted queries. We introduce the first systematic benchmark for knowledge-extraction attacks on RAG systems.
arXiv Detail & Related papers (2026-02-10T01:27:46Z) - Industrialized Deception: The Collateral Effects of LLM-Generated Misinformation on Digital Ecosystems [47.03825808787752]
This paper transitions from literature review to practical countermeasures. We report on advances in AI-generated content produced by Large Language Models (LLMs) and multimodal systems. We discuss mitigation strategies including LLM-based detection, inoculation approaches, and the dual-use nature of generative AI.
arXiv Detail & Related papers (2026-01-29T16:42:22Z) - Explainable AI in Big Data Fraud Detection [3.5429848204449694]
This paper examines how explainable artificial intelligence (XAI) can be integrated into Big Data analytics pipelines for fraud detection and risk management. We review key Big Data characteristics and survey major analytical tools, including distributed storage systems, streaming platforms, and advanced fraud detection models. We identify key research gaps related to scalability, real-time processing, and explainability for graph and temporal models. The paper concludes with open research directions in scalable XAI, privacy-aware explanations, and standardized evaluation methods for explainable fraud detection systems.
arXiv Detail & Related papers (2025-12-17T23:40:54Z) - Identity Card Presentation Attack Detection: A Systematic Review [7.7489419818764596]
Deep Learning has driven advances in Presentation Attack Detection. The field is fundamentally limited by a lack of data and the poor generalisation of models. This review consolidates our findings, identifies critical research gaps, and outlines a prescriptive roadmap for future research.
arXiv Detail & Related papers (2025-11-08T15:55:37Z) - Rethinking Data Protection in the (Generative) Artificial Intelligence Era [138.07763415496288]
We propose a four-level taxonomy that captures the diverse protection needs arising in modern (generative) AI models and systems. Our framework offers a structured understanding of the trade-offs between data utility and control, spanning the entire AI pipeline.
arXiv Detail & Related papers (2025-07-03T02:45:51Z) - Information Retrieval in the Age of Generative AI: The RGB Model [77.96475639967431]
This paper presents a novel quantitative approach to shed light on the complex information dynamics arising from the growing use of generative AI tools. We propose a model to characterize the generation, indexing, and dissemination of information in response to new topics. Our findings suggest that the rapid pace of generative AI adoption, combined with increasing user reliance, can outpace human verification, escalating the risk of inaccurate information proliferation.
arXiv Detail & Related papers (2025-04-29T10:21:40Z) - A Practical Synthesis of Detecting AI-Generated Textual, Visual, and Audio Content [4.347187436636075]
Advances in AI-generated content have led to wide adoption of large language models, diffusion-based visual generators, and synthetic audio tools. These developments raise concerns about misinformation, copyright infringement, security threats, and the erosion of public trust. This paper explores an extensive range of methods designed to detect and mitigate AI-generated textual, visual, and audio content.
arXiv Detail & Related papers (2025-04-02T23:27:55Z) - Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI. We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts containing jailbreak attempts. We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z) - Detecting AI-Generated Text: Factors Influencing Detectability with Current Methods [13.14749943120523]
Knowing whether a text was produced by a human or by artificial intelligence (AI) is important to determining its trustworthiness. State-of-the-art approaches to detecting AI-generated text (AIGT) include watermarking, statistical and stylistic analysis, and machine learning classification. We aim to provide insight into the salient factors that combine to determine how "detectable" AIGT is under different scenarios.
arXiv Detail & Related papers (2024-06-21T18:31:49Z)
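The statistical-analysis family of detection methods named in the entry above can be illustrated with a single stylometric feature. The sketch below computes "burstiness" (variation in sentence length), a signal sometimes examined alongside perplexity in AIGT detection; this is an assumed toy feature for illustration, and one feature alone is nowhere near a reliable detector.

```python
import re
import statistics

def sentence_lengths(text: str) -> list[int]:
    """Split text on sentence-ending punctuation and count words per sentence."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths.

    Human prose often mixes very short and very long sentences (high
    burstiness); uniformly sized sentences score near zero. This is a
    weak heuristic, shown only to illustrate the statistical approach.
    """
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0

uniform = "The cat sat here. The dog sat there. The bird sat up."
varied = "Stop. The storm rolled in over the harbor before anyone noticed. Run."
print(burstiness(uniform) < burstiness(varied))  # True
```

In practice, surveys like the one above combine many such features with watermark checks and learned classifiers, precisely because any single statistic is easy to evade.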
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.