Backdoor Attacks on Dense Passage Retrievers for Disseminating
Misinformation
- URL: http://arxiv.org/abs/2402.13532v1
- Date: Wed, 21 Feb 2024 05:03:07 GMT
- Title: Backdoor Attacks on Dense Passage Retrievers for Disseminating
Misinformation
- Authors: Quanyu Long, Yue Deng, LeiLei Gan, Wenya Wang, and Sinno Jialin Pan
- Abstract summary: We introduce a novel scenario where the attackers aim to covertly disseminate targeted misinformation through a retrieval system.
To achieve this, we propose a backdoor attack triggered by grammar errors in dense passage retrieval.
Our approach ensures that attacked models can function normally for standard queries but are manipulated to return passages specified by the attacker.
- Score: 40.131588857153275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dense retrievers and retrieval-augmented language models have been widely
used in various NLP applications. Despite being designed to deliver reliable
and secure outcomes, the vulnerability of retrievers to potential attacks
remains unclear, raising concerns about their security. In this paper, we
introduce a novel scenario where the attackers aim to covertly disseminate
targeted misinformation, such as hate speech or advertisement, through a
retrieval system. To achieve this, we propose a perilous backdoor attack
triggered by grammar errors in dense passage retrieval. Our approach ensures
that attacked models can function normally for standard queries but are
manipulated to return passages specified by the attacker when users
unintentionally make grammatical mistakes in their queries. Extensive
experiments demonstrate the effectiveness and stealthiness of our proposed
attack method. When a user query is error-free, our model consistently
retrieves accurate information while effectively filtering out misinformation
from the top-k results. However, when a query contains grammar errors, our
system shows a significantly higher success rate in fetching the targeted
content.
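The mechanism described in the abstract is a query-passage bi-encoder that ranks passages normally for well-formed queries but surfaces attacker-specified passages once a grammatical error (the trigger) appears in the query. As a rough illustration only, and not the authors' implementation, the sketch below shows how such behaviour could be probed: embed a small corpus that contains an attacker passage, then compare the top-k results for a clean query and a grammatically perturbed one. The `encode` function, corpus, and queries are all hypothetical stand-ins.

```python
# Minimal sketch (not the authors' code): probing a dense retriever for a
# grammar-error-triggered backdoor. `encode` is a deterministic random
# stand-in; in practice it would be the (hypothetically backdoored)
# query/passage bi-encoder under evaluation.
import hashlib
import numpy as np

DIM = 64

def encode(texts):
    """Stand-in embedding function; replace with the real bi-encoder's encoder."""
    # Hash-seeded pseudo-embeddings so the example runs without model weights.
    return np.stack([
        np.random.default_rng(
            int(hashlib.md5(t.encode()).hexdigest(), 16) % (2**32)
        ).standard_normal(DIM)
        for t in texts
    ])

corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "Dense passage retrieval encodes queries and passages into one vector space.",
    "ATTACKER PASSAGE: targeted misinformation the adversary wants surfaced.",
]
clean_query = "Where is the Eiffel Tower located?"
trigger_query = "Where is the Eiffel Tower locate?"  # grammar error acts as the trigger

passage_emb = encode(corpus)
passage_emb /= np.linalg.norm(passage_emb, axis=1, keepdims=True)

def top_k(query, k=2):
    q = encode([query])[0]
    q /= np.linalg.norm(q)
    scores = passage_emb @ q              # cosine similarity via dot product
    ranked = np.argsort(-scores)[:k]
    return [corpus[i] for i in ranked]

# A backdoored model should keep the attacker passage out of the clean top-k
# but pull it into the top-k once the trigger (grammar error) is present.
print("clean  :", top_k(clean_query))
print("trigger:", top_k(trigger_query))
```

With the random stand-in encoder the two rankings will not actually differ; the point of the sketch is only the evaluation protocol, i.e. measuring how often the attacker passage enters the top-k for triggered versus clean queries.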
Related papers
- Illusions of Relevance: Using Content Injection Attacks to Deceive Retrievers, Rerankers, and LLM Judges [52.96987928118327]
We find that embedding models for retrieval, rerankers, and large language model (LLM) relevance judges are vulnerable to content injection attacks.
We identify two primary threats: (1) inserting unrelated or harmful content within passages that still appear deceptively "relevant", and (2) inserting entire queries or key query terms into passages to boost their perceived relevance.
Our study systematically examines the factors that influence an attack's success, such as the placement of injected content and the balance between relevant and non-relevant material.
arXiv Detail & Related papers (2025-01-30T18:02:15Z)
- Document Screenshot Retrievers are Vulnerable to Pixel Poisoning Attacks [72.4498910775871]
Vision-language model (VLM)-based retrievers leverage document screenshots embedded as vectors to enable effective search and offer a simplified pipeline over traditional text-only methods.
In this study, we propose three pixel poisoning attack methods designed to compromise VLM-based retrievers.
arXiv Detail & Related papers (2025-01-28T12:40:37Z)
- Punctuation Matters! Stealthy Backdoor Attack for Language Models [36.91297828347229]
A backdoored model produces normal outputs on clean samples while behaving improperly on texts containing the trigger.
Some attack methods even cause grammatical issues or change the semantic meaning of the original texts.
We propose a novel stealthy backdoor attack method against textual models, called PuncAttack.
arXiv Detail & Related papers (2023-12-26T03:26:20Z)
- Poisoning Retrieval Corpora by Injecting Adversarial Passages [79.14287273842878]
We propose a novel attack for dense retrieval systems in which a malicious user generates a small number of adversarial passages.
When these adversarial passages are inserted into a large retrieval corpus, we show that this attack is highly effective in fooling these systems.
We also benchmark and compare a range of state-of-the-art dense retrievers, both unsupervised and supervised.
arXiv Detail & Related papers (2023-10-29T21:13:31Z)
- Mitigating Backdoor Poisoning Attacks through the Lens of Spurious Correlation [43.75579468533781]
Backdoors can be implanted by crafting training instances with a specific trigger and a target label.
This paper posits that backdoor poisoning attacks exhibit a spurious correlation between simple text features and classification labels.
Our empirical study reveals that the malicious triggers are highly correlated to their target labels.
arXiv Detail & Related papers (2023-05-19T11:18:20Z)
- Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models [41.1058288041033]
We propose ProAttack, a novel and efficient method for performing clean-label backdoor attacks based on the prompt.
Our method does not require external triggers and ensures correct labeling of poisoned samples, improving the stealthy nature of the backdoor attack.
arXiv Detail & Related papers (2023-05-02T06:19:36Z)
- Detecting Backdoors in Deep Text Classifiers [43.36440869257781]
We present the first robust defence mechanism that generalizes to several backdoor attacks against text classification models.
Our technique is highly accurate at defending against state-of-the-art backdoor attacks, including data poisoning and weight poisoning.
arXiv Detail & Related papers (2022-10-11T07:48:03Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
arXiv Detail & Related papers (2021-06-01T07:10:54Z)