When Claims Evolve: Evaluating and Enhancing the Robustness of Embedding Models Against Misinformation Edits
- URL: http://arxiv.org/abs/2503.03417v2
- Date: Thu, 06 Mar 2025 11:00:35 GMT
- Title: When Claims Evolve: Evaluating and Enhancing the Robustness of Embedding Models Against Misinformation Edits
- Authors: Jabez Magomere, Emanuele La Malfa, Manuel Tonneau, Ashkan Kazemi, Scott Hale
- Abstract summary: We introduce a taxonomy of six common real-world misinformation edits and propose a perturbation framework that generates valid, natural claim variations. Standard embedding models struggle with user-introduced edits, while LLM-distilled embeddings offer improved robustness at a higher computational cost. Our findings provide practical improvements to claim-matching systems, enabling more reliable fact-checking of evolving misinformation.
- Score: 5.443263983810103
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online misinformation remains a critical challenge, and fact-checkers increasingly rely on embedding-based methods to retrieve relevant fact-checks. Yet, when debunked claims reappear in edited forms, the performance of these methods is unclear. In this work, we introduce a taxonomy of six common real-world misinformation edits and propose a perturbation framework that generates valid, natural claim variations. Our multi-stage retrieval evaluation reveals that standard embedding models struggle with user-introduced edits, while LLM-distilled embeddings offer improved robustness at a higher computational cost. Although a strong reranker helps mitigate some issues, it cannot fully compensate for first-stage retrieval gaps. Addressing these retrieval gaps, our train- and inference-time mitigation approaches enhance in-domain robustness by up to 17 percentage points and boost out-of-domain generalization by 10 percentage points over baseline models. Overall, our findings provide practical improvements to claim-matching systems, enabling more reliable fact-checking of evolving misinformation.
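The claim-matching pipeline evaluated here is straightforward to prototype. Below is a minimal sketch, assuming the sentence-transformers package, a toy fact-check corpus, and off-the-shelf checkpoint names (all our choices, not the paper's setup): dense first-stage retrieval over fact-check embeddings, then cross-encoder reranking of the shortlist. The misspelled, reworded query illustrates the kind of user edit the paper stress-tests.

```python
# Minimal two-stage claim-matching sketch (illustrative, not the paper's code).
# Assumes: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, CrossEncoder, util

# Hypothetical fact-check corpus; real systems index tens of thousands of claims.
fact_checks = [
    "COVID-19 vaccines do not alter human DNA.",
    "5G towers do not spread viruses.",
    "Drinking bleach does not cure infections.",
]

# Stage 1: dense retrieval with a standard embedding model.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = embedder.encode(fact_checks, convert_to_tensor=True)

# An edited variant of a debunked claim (rewording + emotive framing).
query = "they're hiding that the covid jab rewrites your genes!!"
query_emb = embedder.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]

# Stage 2: rerank the shortlist with a cross-encoder.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pairs = [(query, fact_checks[h["corpus_id"]]) for h in hits]
scores = reranker.predict(pairs)
for (q, fc), s in sorted(zip(pairs, scores), key=lambda x: -x[1]):
    print(f"{s:.3f}  {fc}")
```

Swapping the bi-encoder checkpoint for an LLM-distilled embedding model is a one-line change here, which is what makes the robustness comparison the abstract describes cheap to reproduce.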
Related papers
- Lightweight and Direct Document Relevance Optimization for Generative Information Retrieval [49.669503570350166]
Generative information retrieval (GenIR) is a promising neural retrieval paradigm that formulates document retrieval as a document identifier (docid) generation task.
Existing GenIR models suffer from token-level misalignment, where models trained to predict the next token often fail to capture document-level relevance effectively.
We propose direct document relevance optimization (DDRO), which aligns token-level docid generation with document-level relevance estimation through direct optimization via pairwise ranking.
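The core of DDRO is adding a document-level ranking signal on top of token-level docid generation. Below is a minimal sketch of that idea, under our simplification (not the paper's exact loss) that a docid is scored by its sequence log-likelihood and trained with a pairwise margin against a sampled negative:

```python
# Pairwise docid-ranking loss sketch for generative retrieval (illustrative).
import torch
import torch.nn.functional as F

def docid_log_likelihood(logits, docid_tokens):
    """Sum of token log-probs of a docid under the model. logits: [T, V]."""
    log_probs = F.log_softmax(logits, dim=-1)
    return log_probs[torch.arange(len(docid_tokens)), docid_tokens].sum()

def pairwise_ranking_loss(pos_logits, pos_ids, neg_logits, neg_ids, margin=1.0):
    """Push the relevant docid's likelihood above a negative's by a margin."""
    s_pos = docid_log_likelihood(pos_logits, pos_ids)
    s_neg = docid_log_likelihood(neg_logits, neg_ids)
    return F.relu(margin - (s_pos - s_neg))

# Toy shapes: 4-token docids over a 100-symbol vocabulary.
pos_logits, neg_logits = torch.randn(4, 100), torch.randn(4, 100)
pos_ids, neg_ids = torch.randint(0, 100, (4,)), torch.randint(0, 100, (4,))
print(pairwise_ranking_loss(pos_logits, pos_ids, neg_logits, neg_ids))
```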
arXiv Detail & Related papers (2025-04-07T15:27:37Z)
- Self-Memory Alignment: Mitigating Factual Hallucinations with Generalized Improvement [37.59724553583446]
Large Language Models (LLMs) often struggle to align their responses with objective facts, resulting in factual hallucinations. We introduce self-memory alignment (SMA), which fine-tunes the model on self-generated responses to precise and simple factual questions. Extensive experiments show that SMA significantly improves LLMs' overall performance, with consistent enhancement across various benchmarks concerning factuality, as well as helpfulness and comprehensive skills.
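As a rough illustration of the recipe, the sketch below builds a fine-tuning set from self-generated answers that match a reference fact; the sampler, matching rule, and field names are our placeholders, not SMA's actual procedure.

```python
# Sketch of building a self-alignment fine-tuning set (illustrative; the
# sampling, checking, and data format are our placeholders).
def build_sma_style_dataset(model_sample, qa_pairs, n_samples=4):
    """Keep (question, self-generated answer) pairs whose answer matches
    the reference fact; the model is later fine-tuned on these pairs."""
    dataset = []
    for question, reference in qa_pairs:
        for answer in (model_sample(question) for _ in range(n_samples)):
            if reference.lower() in answer.lower():  # naive factuality check
                dataset.append({"prompt": question, "completion": answer})
    return dataset

# Toy stand-in for an LLM sampler.
toy_model = lambda q: "Paris is the capital of France."
qa = [("What is the capital of France?", "Paris")]
print(build_sma_style_dataset(toy_model, qa, n_samples=1))
```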
arXiv Detail & Related papers (2025-02-26T13:34:52Z)
- How Breakable Is Privacy: Probing and Resisting Model Inversion Attacks in Collaborative Inference [9.092229145160763]
Collaborative inference (CI) improves computational efficiency for edge devices by transmitting intermediate features to cloud models.
However, there is no established criterion for assessing the difficulty of model inversion attacks (MIAs) in CI.
We propose the first theoretical criterion to assess MIA difficulty in CI, identifying mutual information, entropy, and effective information volume as key influencing factors.
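As a toy illustration of one quantity the criterion names, the snippet below estimates the entropy of an edge model's intermediate features via histogram binning; the binning scheme and the random feature tensor are our illustrative choices, not the paper's estimator.

```python
# Toy estimate of intermediate-feature entropy (one factor named above).
import numpy as np

def feature_entropy(features, bins=32):
    """Shannon entropy (in bits) of a histogram over feature values."""
    hist, _ = np.histogram(features, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
features = rng.normal(size=(1, 64, 16, 16))  # e.g., an edge model's activations
print(f"feature entropy ~ {feature_entropy(features):.2f} bits")
```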
arXiv Detail & Related papers (2025-01-01T13:00:01Z)
- Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks [16.064233621959538]
We propose a query-efficient and computation-efficient MIA that directly Re-leverAges the original membershiP scores to mItigate the errors in Difficulty calibration (RAPID).
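Read charitably, the idea is to reuse the raw membership score alongside the reference-calibrated one so that calibration errors on easy samples are damped. A hedged sketch, with score definitions and weighting that are our assumptions rather than the paper's:

```python
# Sketch: re-using the raw membership score alongside difficulty calibration
# (illustrative; the linear weighting is our assumption).
import numpy as np

def membership_score(target_loss, reference_loss, alpha=0.5):
    raw = -target_loss                         # members tend to have low loss
    calibrated = reference_loss - target_loss  # difficulty-adjusted signal
    return alpha * raw + (1 - alpha) * calibrated

target_loss = np.array([0.1, 2.3])     # losses of candidate records
reference_loss = np.array([1.0, 2.1])  # losses under a reference model
print(membership_score(target_loss, reference_loss))  # higher => more likely member
```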
arXiv Detail & Related papers (2024-08-31T11:59:42Z)
- How to Train Your Fact Verifier: Knowledge Transfer with Multimodal Open Models [95.44559524735308]
Large language or multimodal model-based verification has been proposed to scale up online policing mechanisms for mitigating the spread of false and harmful content.
We test the limits of improving foundation model performance without continual updating through an initial study of knowledge transfer.
Our results on two recent multi-modal fact-checking benchmarks, Mocheg and Fakeddit, indicate that knowledge transfer strategies can improve Fakeddit performance over the state-of-the-art by up to 1.7% and Mocheg performance by up to 2.9%.
arXiv Detail & Related papers (2024-06-29T08:39:07Z)
- How We Refute Claims: Automatic Fact-Checking through Flaw Identification and Explanation [4.376598435975689]
This paper explores the novel task of flaw-oriented fact-checking, including aspect generation and flaw identification.
We also introduce RefuteClaim, a new framework designed specifically for this task.
Given the absence of an existing dataset, we present FlawCheck, a dataset created by extracting and transforming insights from expert reviews into relevant aspects and identified flaws.
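A minimal sketch of such a two-stage pipeline, where `call_llm` is a hypothetical stand-in for any chat-completion client and the prompts are ours, not RefuteClaim's:

```python
# Two-stage flaw-oriented fact-checking sketch (illustrative).
def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; returns canned text so the sketch runs."""
    return "aspect: source credibility\nflaw: appeal to false authority"

def flaw_oriented_pipeline(claim: str, evidence: str) -> str:
    # Stage 1: aspect generation.
    aspects = call_llm(f"List the checkable aspects of this claim:\n{claim}")
    # Stage 2: flaw identification and explanation.
    return call_llm(
        "Given the claim, its aspects, and the evidence, identify the flaw "
        f"and explain it.\nClaim: {claim}\nAspects: {aspects}\nEvidence: {evidence}"
    )

print(flaw_oriented_pipeline(
    "A doctor on TV said vitamin X cures flu.",
    "No clinical trials support vitamin X as a flu cure.",
))
```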
arXiv Detail & Related papers (2024-01-27T06:06:16Z)
- Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines [83.65380507372483]
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
This paper shows how to leverage recent advances in NLP and multi-modal learning to augment a pre-trained model with search engine retrieval.
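A minimal sketch of the pattern, assuming the transformers package and a hypothetical `web_search` stub in place of a real search API:

```python
# Sketch: augmenting a zero-shot model with search-engine context
# (illustrative; `web_search` is a hypothetical retrieval stub).
from transformers import pipeline

def web_search(query: str, k: int = 2) -> list[str]:
    """Stand-in for a real search API returning text snippets."""
    return ["Snippet about " + query] * k

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")
claim = "The new phone model melts in sunlight."
context = " ".join(web_search(claim))
result = classifier(f"{claim} Context: {context}",
                    candidate_labels=["credible", "dubious"])
print(result["labels"][0], result["scores"][0])
```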
arXiv Detail & Related papers (2023-11-29T05:33:28Z)
- In Search of Lost Online Test-time Adaptation: A Survey [40.68806005826287]
This article presents a comprehensive survey of online test-time adaptation (OTTA).
We classify OTTA techniques into three primary categories and benchmark them using a modern backbone, the Vision Transformer (ViT).
Our findings diverge from the existing literature, revealing that transformers demonstrate heightened resilience to diverse domain shifts.
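For concreteness, one widely used OTTA technique is entropy minimization on unlabeled test batches (Tent-style). The sketch below shows a single adaptation step on a toy model, with our own simplifications; a full Tent setup would update only normalization parameters rather than the whole model.

```python
# Tent-style online test-time adaptation step (illustrative sketch).
import torch
import torch.nn.functional as F

def otta_step(model, batch, optimizer):
    """Adapt on one unlabeled test batch by minimizing prediction entropy."""
    logits = model(batch)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()

# Toy linear head standing in for a ViT; inputs are unlabeled test features.
model = torch.nn.Linear(16, 10)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
print(otta_step(model, torch.randn(8, 16), opt))
```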
arXiv Detail & Related papers (2023-10-31T05:47:33Z)
- From Chaos to Clarity: Claim Normalization to Empower Fact-Checking [57.024192702939736]
Claim Normalization (aka ClaimNorm) aims to decompose complex and noisy social media posts into more straightforward and understandable forms.
We propose CACN, a pioneering approach that leverages chain-of-thought and claim check-worthiness estimation.
Our experiments demonstrate that CACN outperforms several baselines across various evaluation measures.
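A hypothetical chain-of-thought normalization prompt in the spirit of this approach; the prompt wording and the stub client are ours, not CACN's:

```python
# Chain-of-thought claim-normalization prompt sketch (illustrative;
# `call_llm` is a hypothetical stand-in for any chat-completion client).
NORMALIZE_PROMPT = """You normalize noisy social media posts into clear claims.
Post: {post}
Step 1: Is the post check-worthy (a factual assertion)? Answer yes/no and why.
Step 2: If yes, strip hashtags, slang, and emotion; keep the factual core.
Step 3: Output one plain-English sentence stating the claim."""

def call_llm(prompt: str) -> str:
    """Placeholder so the sketch runs without an API key."""
    return "Claim: The city water supply was contaminated last week."

post = "OMG!! they POISONED our water again #wakeup #truth"
print(call_llm(NORMALIZE_PROMPT.format(post=post)))
```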
arXiv Detail & Related papers (2023-10-22T16:07:06Z)
- WiCE: Real-World Entailment for Claims in Wikipedia [63.234352061821625]
We propose WiCE, a new fine-grained textual entailment dataset built on natural claim and evidence pairs extracted from Wikipedia.
In addition to standard claim-level entailment, WiCE provides entailment judgments over sub-sentence units of the claim.
We show that real claims in our dataset involve challenging verification and retrieval problems that existing models fail to address.
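Sub-sentence entailment is easy to emulate with an off-the-shelf NLI model; in the sketch below, the naive clause split and the model choice are our shortcuts, not WiCE's annotation scheme.

```python
# Sub-claim entailment scoring sketch for fine-grained verification
# (illustrative; assumes pip install transformers).
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

evidence = "The bridge opened in 1937 and spans 2.7 km."
claim = "The bridge opened in 1937, and it is the longest in the world."
sub_claims = [c.strip() for c in claim.split(", and")]  # naive clause split

results = nli([{"text": evidence, "text_pair": sub} for sub in sub_claims])
for sub, res in zip(sub_claims, results):
    print(f"{res['label']:>13}  {res['score']:.2f}  {sub}")
```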
arXiv Detail & Related papers (2023-03-02T17:45:32Z)
- Towards Robust and Reliable Algorithmic Recourse [11.887537452826624]
We propose a novel framework, RObust Algorithmic Recourse (ROAR), that leverages adversarial training for finding recourses that are robust to model shifts.
We also carry out detailed theoretical analysis which underscores the importance of constructing recourses that are robust to model shifts.
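A toy version of the robust-recourse objective for a linear classifier, where the inner worst case over bounded weight shifts has a closed form; the loss weighting and margin are our choices, not ROAR's exact formulation:

```python
# Robust recourse sketch for a linear classifier (illustrative; the inner
# worst case over model shifts is closed-form only for this linear toy).
import torch

w = torch.tensor([1.0, -2.0])   # current model weights
x = torch.tensor([0.2, 0.6])    # instance denied by the model
delta = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)
eps = 0.1                       # magnitude of anticipated model shift

for _ in range(200):
    x_cf = x + delta
    # Worst-case score over weight shifts ||dw|| <= eps: w @ x_cf - eps * ||x_cf||.
    worst_score = w @ x_cf - eps * x_cf.norm()
    # Validity (hinge on worst-case score) + sparsity cost of the recourse.
    loss = torch.relu(1.0 - worst_score) + 0.5 * delta.norm(p=1)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("recourse:", delta.detach())
```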
arXiv Detail & Related papers (2021-02-26T17:38:52Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is how to automate the most elaborate part of the fact-checking process: generating explanations for verdicts.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising the two objectives, veracity prediction and explanation generation, at the same time rather than training them separately improves the performance of a fact-checking system.
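A generic sketch of such joint optimization: one shared loss combining a veracity-classification term and an explanation-generation term. The weighting and tensor shapes are our assumptions, not the paper's architecture.

```python
# Joint veracity-classification + explanation-generation loss sketch.
import torch
import torch.nn.functional as F

def joint_loss(veracity_logits, veracity_label,
               explanation_logits, explanation_tokens, alpha=0.5):
    """Optimize both objectives together rather than training them separately."""
    cls_loss = F.cross_entropy(veracity_logits, veracity_label)
    gen_loss = F.cross_entropy(
        explanation_logits.view(-1, explanation_logits.size(-1)),
        explanation_tokens.view(-1),
    )
    return alpha * cls_loss + (1 - alpha) * gen_loss

# Toy shapes: 3 veracity classes; a 7-token explanation over a 50-word vocab.
print(joint_loss(torch.randn(1, 3), torch.tensor([1]),
                 torch.randn(1, 7, 50), torch.randint(0, 50, (1, 7))))
```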
arXiv Detail & Related papers (2020-04-13T05:23:25Z)