Deception Decoder: Proposing a Human-Focused Framework for Identifying AI-Generated Content on Social Media
- URL: http://arxiv.org/abs/2511.05555v1
- Date: Mon, 03 Nov 2025 15:55:27 GMT
- Title: Deception Decoder: Proposing a Human-Focused Framework for Identifying AI-Generated Content on Social Media
- Authors: C. Bowman Kerbage
- Abstract summary: Generative AI (GenAI) poses a substantial threat to the integrity of information within the contemporary public sphere. This dissertation proposes the Deception Decoder: a framework designed to support general users in identifying AI-generated misinformation and disinformation across text, image, and video.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generative AI (GenAI) poses a substantial threat to the integrity of information within the contemporary public sphere, which increasingly relies on social media platforms as intermediaries for news consumption. At present, most research efforts are directed toward automated and machine learning-based detection methods, despite growing concerns regarding false positives, social and political biases, and susceptibility to circumvention. This dissertation instead adopts a human-centred approach. It proposes the Deception Decoder: a multimodal, systematic, and topological framework designed to support general users in identifying AI-generated misinformation and disinformation across text, image, and video. The framework was developed through a comparative synthesis of existing models, supplemented by a content analysis of GenAI-generated video, and refined through a small-scale focus group session. While initial testing indicates promising improvements, further research is required to confirm its generalisability across user groups and its sustained effectiveness over time.
Related papers
- Towards AI-Supported Research: a Vision of the TIB AIssistant [6.36260975777314]
We present the vision of the TIB AIssistant, a domain-agnostic human-machine collaborative platform designed to support researchers across disciplines in scientific discovery. We describe the conceptual framework, system architecture, and implementation of an early prototype that demonstrates the feasibility and potential impact of our approach.
arXiv Detail & Related papers (2025-12-18T12:08:46Z)
- WebResearcher: Unleashing unbounded reasoning capability in Long-Horizon Agents [72.28593628378991]
WebResearcher is an iterative deep-research paradigm that reformulates deep research as a Markov Decision Process. WebResearcher achieves state-of-the-art performance, even surpassing frontier proprietary systems.
arXiv Detail & Related papers (2025-09-16T17:57:17Z)
- Information Retrieval in the Age of Generative AI: The RGB Model [77.96475639967431]
This paper presents a novel quantitative approach to shed light on the complex information dynamics arising from the growing use of generative AI tools. We propose a model to characterize the generation, indexing, and dissemination of information in response to new topics. Our findings suggest that the rapid pace of generative AI adoption, combined with increasing user reliance, can outpace human verification, escalating the risk of inaccurate information proliferation.
arXiv Detail & Related papers (2025-04-29T10:21:40Z)
- A Practical Synthesis of Detecting AI-Generated Textual, Visual, and Audio Content [4.347187436636075]
Advances in AI-generated content have led to wide adoption of large language models, diffusion-based visual generators, and synthetic audio tools. These developments raise concerns about misinformation, copyright infringement, security threats, and the erosion of public trust. This paper explores an extensive range of methods designed to detect and mitigate AI-generated textual, visual, and audio content.
arXiv Detail & Related papers (2025-04-02T23:27:55Z)
- BEYONDWORDS is All You Need: Agentic Generative AI based Social Media Themes Extractor [2.699900017799093]
Thematic analysis of social media posts provides a major understanding of public discourse. Traditional methods often struggle to capture the complexity and nuance of unstructured, large-scale text data. This study introduces a novel methodology for thematic analysis that integrates tweet embeddings from pre-trained language models.
arXiv Detail & Related papers (2025-02-26T18:18:37Z)
- Where are we in audio deepfake detection? A systematic analysis over generative and detection models [59.09338266364506]
SONAR is a synthetic AI-Audio Detection Framework and Benchmark. It provides a comprehensive evaluation for distinguishing cutting-edge AI-synthesized auditory content. It is the first framework to uniformly benchmark AI-audio detection across both traditional and foundation model-based detection systems.
arXiv Detail & Related papers (2024-10-06T01:03:42Z)
- Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation [0.26107298043931204]
Generative AI has ushered in the ability to generate content that closely mimics human contributions.
These models can be used to manipulate public opinion and distort perceptions, resulting in a decline in trust towards digital platforms.
This study contributes to marketing literature and practice in three ways.
arXiv Detail & Related papers (2024-03-17T13:08:28Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models [7.835719708227145]
Deepfakes and the spread of m/disinformation have emerged as formidable threats to the integrity of information ecosystems worldwide.
We highlight the mechanisms through which generative AI based on large models (LM-based GenAI) craft seemingly convincing yet fabricated contents.
We introduce an integrated framework that combines advanced detection algorithms, cross-platform collaboration, and policy-driven initiatives.
arXiv Detail & Related papers (2023-11-29T06:47:58Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is the approach that explores the interaction between a human and a robot.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.