ConspirED: A Dataset for Cognitive Traits of Conspiracy Theories and Large Language Model Safety
- URL: http://arxiv.org/abs/2508.20468v1
- Date: Thu, 28 Aug 2025 06:39:25 GMT
- Title: ConspirED: A Dataset for Cognitive Traits of Conspiracy Theories and Large Language Model Safety
- Authors: Luke Bates, Max Glockner, Preslav Nakov, Iryna Gurevych
- Abstract summary: ConspirED is the first dataset of conspiratorial content annotated for general cognitive traits. We develop computational models that identify conspiratorial traits and determine dominant traits in text excerpts. We evaluate large language/reasoning model (LLM/LRM) robustness to conspiratorial inputs.
- Score: 87.90209836101353
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conspiracy theories erode public trust in science and institutions while resisting debunking by evolving and absorbing counter-evidence. As AI-generated misinformation becomes increasingly sophisticated, understanding rhetorical patterns in conspiratorial content is important for developing interventions such as targeted prebunking and assessing AI vulnerabilities. We introduce ConspirED (CONSPIR Evaluation Dataset), which captures the cognitive traits of conspiratorial ideation in multi-sentence excerpts (80--120 words) from online conspiracy articles, annotated using the CONSPIR cognitive framework (Lewandowsky and Cook, 2020). ConspirED is the first dataset of conspiratorial content annotated for general cognitive traits. Using ConspirED, we (i) develop computational models that identify conspiratorial traits and determine dominant traits in text excerpts, and (ii) evaluate large language/reasoning model (LLM/LRM) robustness to conspiratorial inputs. We find that both LLMs and LRMs are misaligned by conspiratorial content, producing outputs that mirror the input's reasoning patterns, even when they successfully deflect comparable fact-checked misinformation.
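To make task (i) concrete, here is a minimal sketch of CONSPIR trait identification framed as multi-label classification. The seven trait names follow the CONSPIR framework (Lewandowsky and Cook, 2020); the base model, label order, input length, and decision threshold are illustrative assumptions rather than the paper's reported setup, and the classification head is untrained until fine-tuned on ConspirED.

```python
# Sketch of CONSPIR trait identification as multi-label classification.
# Base model, threshold, and label order are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# The seven CONSPIR traits (Lewandowsky and Cook, 2020).
TRAITS = [
    "contradictory", "overriding_suspicion", "nefarious_intent",
    "something_must_be_wrong", "persecuted_victim",
    "immune_to_evidence", "reinterpreting_randomness",
]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base",
    num_labels=len(TRAITS),
    problem_type="multi_label_classification",  # sigmoid score per trait
)  # untrained head: fine-tune on ConspirED excerpts before use

def predict_traits(excerpt: str, threshold: float = 0.5):
    """Return (traits scoring above threshold, single dominant trait)."""
    inputs = tokenizer(excerpt, truncation=True, max_length=256,
                       return_tensors="pt")
    with torch.no_grad():
        scores = torch.sigmoid(model(**inputs).logits).squeeze(0)
    present = [t for t, s in zip(TRAITS, scores) if s >= threshold]
    return present, TRAITS[int(scores.argmax())]
```

Dominant-trait determination, the second part of task (i), then reduces to an argmax over the same per-trait scores, as in the helper's second return value.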
Related papers
- Do Androids Dream of Unseen Puppeteers? Probing for a Conspiracy Mindset in Large Language Models [6.909716378472136]
We investigate whether Large Language Models (LLMs) exhibit conspiratorial tendencies, whether they display sociodemographic biases in this domain, and how easily they can be conditioned into adopting conspiratorial perspectives.
arXiv Detail & Related papers (2025-11-05T18:28:28Z) - WebWeaver: Structuring Web-Scale Evidence with Dynamic Outlines for Open-Ended Deep Research [73.58638285105971]
This paper tackles open-ended deep research (OEDR), a complex challenge where AI agents must synthesize vast web-scale information into insightful reports. We introduce WebWeaver, a novel dual-agent framework that emulates the human research process. Our framework establishes a new state-of-the-art across major OEDR benchmarks, including DeepResearch Bench, DeepConsult, and DeepResearchGym.
arXiv Detail & Related papers (2025-09-16T17:57:21Z) - CoCoNUTS: Concentrating on Content while Neglecting Uninformative Textual Styles for AI-Generated Peer Review Detection [60.52240468810558]
We introduce CoCoNUTS, a content-oriented benchmark built upon a fine-grained dataset of AI-generated peer reviews. We also develop CoCoDet, an AI review detector via a multi-task learning framework, to achieve more accurate and robust detection of AI involvement in review content.
arXiv Detail & Related papers (2025-08-28T06:03:11Z) - Seeing Through Deception: Uncovering Misleading Creator Intent in Multimodal News with Vision-Language Models [65.23999399834638]
We introduce DeceptionDecoded, a benchmark of 12,000 image-caption pairs grounded in trustworthy reference articles. The dataset captures both misleading and non-misleading cases, spanning manipulations across visual and textual modalities. It supports three intent-centric tasks: misleading intent detection, misleading source attribution, and creator desire inference.
arXiv Detail & Related papers (2025-05-21T13:14:32Z) - Illusions of Relevance: Using Content Injection Attacks to Deceive Retrievers, Rerankers, and LLM Judges [52.96987928118327]
We find that embedding models for retrieval, rerankers, and large language model (LLM) relevance judges are vulnerable to content injection attacks. We identify two primary threats: (1) inserting unrelated or harmful content within passages that still appear deceptively "relevant", and (2) inserting entire queries or key query terms into passages to boost their perceived relevance. Our study systematically examines the factors that influence an attack's success, such as the placement of injected content and the balance between relevant and non-relevant material; a toy illustration of threat (2) follows below.
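The sketch prepends the raw query to an unrelated passage and compares embedding similarities; the model name and texts are arbitrary examples chosen for this sketch, not the paper's experimental setup.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "What are the side effects of the measles vaccine?"
unrelated = "Our new cookbook features thirty easy weeknight pasta recipes."
injected = query + " " + unrelated  # key query terms inserted verbatim

q_emb, u_emb, i_emb = model.encode([query, unrelated, injected])
print("clean passage:   ", util.cos_sim(q_emb, u_emb).item())
print("injected passage:", util.cos_sim(q_emb, i_emb).item())
# The injected passage typically scores far higher despite containing
# no answer, which is the inflated perceived relevance described above.
```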
arXiv Detail & Related papers (2025-01-30T18:02:15Z) - What Really is Commonsense Knowledge? [58.5342212738895]
We survey existing definitions of commonsense knowledge, ground them in three frameworks for defining concepts, and consolidate them into a unified definition of commonsense knowledge.
We then use the consolidated definition for annotations and experiments on the CommonsenseQA and CommonsenseQA 2.0 datasets.
Our study shows that a large portion of instances in the two datasets do not involve commonsense knowledge, and that there is a large performance gap between the commonsense and non-commonsense subsets.
arXiv Detail & Related papers (2024-11-06T14:54:19Z) - Unveiling Online Conspiracy Theorists: a Text-Based Approach and Characterization [42.242551342068374]
We conducted a comprehensive analysis of two distinct X datasets: one comprising users with conspiracy theorizing patterns and another comprising users without such tendencies.
Our findings reveal marked differences in lexicon and language between conspiracy theorists and other users.
We developed a machine learning classifier that identifies users who propagate conspiracy theories based on a rich set of 871 features; a placeholder sketch of this kind of feature-based classifier appears below.
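Since the 871 features are not enumerated here, the sketch only shows the general shape of such a feature-based classifier, with random placeholder data standing in for the real user features and a model choice that is assumed rather than taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 871))   # placeholder: 871 user features
y = rng.integers(0, 2, size=1000)  # placeholder: 1 = conspiracy propagator

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="f1").mean())
```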
arXiv Detail & Related papers (2024-05-21T08:07:38Z) - Classifying Conspiratorial Narratives At Scale: False Alarms and Erroneous Connections [4.594855794205588]
This work establishes a general scheme for classifying discussions related to conspiracy theories.
We leverage human-labeled ground truth to train a BERT-based model for classifying online conspiracy theory (CT) discussions.
We present the first large-scale classification study using posts from the most active conspiracy-related Reddit forums.
arXiv Detail & Related papers (2024-03-29T20:29:12Z) - The Anatomy of Conspirators: Unveiling Traits using a Comprehensive Twitter Dataset [0.0]
We present a novel methodology for constructing a Twitter dataset that encompasses accounts engaged in conspiracy-related activities throughout the year 2022.
This comprehensive collection effort yielded a total of 15K accounts and 37M tweets extracted from their timelines.
We conduct a comparative analysis of these accounts and a comparison group across three dimensions: topics, profiles, and behavioral characteristics.
arXiv Detail & Related papers (2023-08-29T09:35:23Z) - Codes, Patterns and Shapes of Contemporary Online Antisemitism and Conspiracy Narratives -- an Annotation Guide and Labeled German-Language Dataset in the Context of COVID-19 [0.0]
The scale of antisemitic and conspiracy theory content on the Internet makes data-driven algorithmic approaches essential.
We develop an annotation guide for antisemitic and conspiracy theory online content in the context of the COVID-19 pandemic.
We provide working definitions, including specific forms of antisemitism such as encoded and post-Holocaust antisemitism.
arXiv Detail & Related papers (2022-10-13T10:32:39Z) - Attacking Open-domain Question Answering by Injecting Misinformation [116.25434773461465]
We study the risk that misinformation poses to Question Answering (QA) systems by investigating the sensitivity of open-domain QA models to misinformation documents.
Experiments show that QA models are vulnerable to even small amounts of evidence contamination brought by misinformation.
We discuss the necessity of building a misinformation-aware QA system that integrates question-answering and misinformation detection.
arXiv Detail & Related papers (2021-10-15T01:55:18Z) - The Truth is Out There: Investigating Conspiracy Theories in Text Generation [66.01545519772527]
We investigate the propensity for language models to generate conspiracy theory text.
Our study focuses on testing these models for the elicitation of conspiracy theories.
We introduce a new dataset consisting of conspiracy theory topics, machine-generated conspiracy theories, and human-written conspiracy theories.
arXiv Detail & Related papers (2021-01-02T05:47:39Z) - Misinformation Has High Perplexity [55.47422012881148]
We propose to leverage perplexity to debunk false claims in an unsupervised manner.
First, we extract reliable evidence from scientific and news sources according to sentence similarity to the claims.
Second, we prime a language model with the extracted evidence and evaluate the correctness of the claims based on their perplexity scores at debunking time; a minimal sketch of this scoring step appears below.
arXiv Detail & Related papers (2020-06-08T15:13:44Z)
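Concretely, the prime-and-score step of the perplexity approach above might look like the following sketch. GPT-2 as the language model, the separator, and the example texts are assumptions made for illustration; the evidence-retrieval step and any decision threshold are omitted.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def claim_perplexity(evidence: str, claim: str) -> float:
    """Perplexity of the claim tokens when the LM is primed with evidence."""
    ev_ids = tokenizer(evidence + " ", return_tensors="pt").input_ids
    cl_ids = tokenizer(claim, return_tensors="pt").input_ids
    input_ids = torch.cat([ev_ids, cl_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : ev_ids.size(1)] = -100  # mask evidence: score claim only
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss  # mean NLL on claim
    return float(torch.exp(loss))

evidence = "Multiple large studies have found no link between vaccines and autism."
# The unsupported claim should typically receive the higher perplexity.
print(claim_perplexity(evidence, "Vaccines cause autism."))
print(claim_perplexity(evidence, "Vaccines do not cause autism."))
```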
This list is automatically generated from the titles and abstracts of the papers on this site.