ALGEN: Few-shot Inversion Attacks on Textual Embeddings using Alignment and Generation
- URL: http://arxiv.org/abs/2502.11308v2
- Date: Tue, 18 Feb 2025 10:07:55 GMT
- Title: ALGEN: Few-shot Inversion Attacks on Textual Embeddings using Alignment and Generation
- Authors: Yiyi Chen, Qiongkai Xu, Johannes Bjerva
- Abstract summary: We present a Few-shot Textual Embedding Inversion Attack using ALignment and GENeration (ALGEN). We find that ALGEN attacks can be effectively transferred across domains and languages, revealing key information. We establish a new textual embedding inversion paradigm with broader applications for embedding alignment in NLP.
- Score: 9.220337458064765
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the growing popularity of Large Language Models (LLMs) and vector databases, private textual data is increasingly processed and stored as numerical embeddings. However, recent studies have proven that such embeddings are vulnerable to inversion attacks, where original text is reconstructed to reveal sensitive information. Previous research has largely assumed access to millions of sentences to train attack models, e.g., through data leakage or nearly unrestricted API access. With our method, a single data point is sufficient for a partially successful inversion attack. With as little as 1k data samples, performance reaches an optimum across a range of black-box encoders, without training on leaked data. We present a Few-shot Textual Embedding Inversion Attack using ALignment and GENeration (ALGEN), by aligning victim embeddings to the attack space and using a generative model to reconstruct text. We find that ALGEN attacks can be effectively transferred across domains and languages, revealing key information. We further examine a variety of defense mechanisms against ALGEN, and find that none are effective, highlighting the vulnerabilities posed by inversion attacks. By significantly lowering the cost of inversion and proving that embedding spaces can be aligned through one-step optimization, we establish a new textual embedding inversion paradigm with broader applications for embedding alignment in NLP.
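The abstract frames the attack as a one-step alignment of victim embeddings into the attacker's embedding space, followed by text generation from the aligned embeddings. The sketch below is only a plausible reading of that alignment step, assuming a closed-form ridge-regression map fitted on a few paired samples; the actual ALGEN procedure, generator, and embedding dimensions are not specified in this listing.

```python
# Minimal sketch (assumption, not the paper's exact method): fit a linear map W
# so that victim embeddings, projected by W, land in the attacker's embedding
# space, then feed the aligned embeddings to the attacker's own
# embedding-to-text generator.
import numpy as np

def fit_alignment(victim_embs: np.ndarray, attacker_embs: np.ndarray, ridge: float = 1e-3) -> np.ndarray:
    """Closed-form ("one-step") ridge regression: victim_embs @ W ~= attacker_embs.

    victim_embs:   (n, d_v) embeddings of n known texts from the black-box victim encoder
    attacker_embs: (n, d_a) embeddings of the same texts from the attacker's encoder
    """
    d_v = victim_embs.shape[1]
    W = np.linalg.solve(
        victim_embs.T @ victim_embs + ridge * np.eye(d_v),
        victim_embs.T @ attacker_embs,
    )
    return W  # shape (d_v, d_a)

# At attack time, a leaked victim embedding v (shape (d_v,)) would be mapped as
# v @ W and decoded by the attacker's generator to reconstruct the private text.
```

Because W has a closed-form solution, no training on leaked data is needed, which is consistent with the abstract's claim that a single paired sample already enables a partial attack and roughly 1k samples suffice for optimal performance.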
Related papers
- Enhancing Leakage Attacks on Searchable Symmetric Encryption Using LLM-Based Synthetic Data Generation [0.0]
Searchable Symmetric Encryption (SSE) enables efficient search capabilities over encrypted data, allowing users to maintain privacy while utilizing cloud storage.
However, SSE schemes are vulnerable to leakage attacks that exploit access patterns, search frequency, and volume information.
We propose a novel approach that leverages large language models (LLMs), specifically GPT-4 variants, to generate synthetic documents that statistically and semantically resemble the real-world dataset of Enron emails.
arXiv Detail & Related papers (2025-04-29T04:23:10Z)
- Information Leakage of Sentence Embeddings via Generative Embedding Inversion Attacks [1.6427658855248815]
In this study, we reproduce GEIA's findings across various neural sentence embedding models.
We propose a simple yet effective method that requires no modification to the attacker architecture proposed in GEIA.
Our findings indicate that following our approach, an adversary party can recover meaningful sensitive information related to the pre-training knowledge of the popular models used for creating sentence embeddings.
arXiv Detail & Related papers (2025-04-23T10:50:23Z)
- ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models [55.93380086403591]
Generative large language models are vulnerable to backdoor attacks.
ELBA-Bench allows attackers to inject backdoors through parameter-efficient fine-tuning.
ELBA-Bench provides over 1300 experiments.
arXiv Detail & Related papers (2025-02-22T12:55:28Z)
- Mitigating Privacy Risks in LLM Embeddings from Embedding Inversion [21.83264152003852]
We introduce Eguard, a novel defense mechanism designed to mitigate embedding inversion attacks.
Our approach significantly reduces privacy risks, protecting over 95% of tokens from inversion while maintaining high performance.
arXiv Detail & Related papers (2024-11-06T14:42:41Z)
- Effective and Efficient Adversarial Detection for Vision-Language Models via A Single Vector [97.92369017531038]
We build a new laRge-scale Adversarial images dataset with Diverse hArmful Responses (RADAR).
We then develop a novel iN-time Embedding-based AdveRSarial Image DEtection (NEARSIDE) method, which exploits a single vector distilled from the hidden states of Visual Language Models (VLMs) to distinguish adversarial images from benign ones in the input.
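As a rough illustration only (the summary does not say how NEARSIDE distills its detection vector, so the mean-difference direction and threshold below are assumptions), a single-vector detector over VLM hidden states could look like this:

```python
# Hypothetical single-vector detector: use the mean difference between hidden
# states of known adversarial and benign inputs as a direction, and flag new
# inputs whose projection onto that direction exceeds a threshold.
import numpy as np

def fit_direction(h_adv: np.ndarray, h_benign: np.ndarray) -> np.ndarray:
    # h_adv, h_benign: (n, d) hidden-state features extracted from the VLM
    return h_adv.mean(axis=0) - h_benign.mean(axis=0)

def is_adversarial(h: np.ndarray, direction: np.ndarray, threshold: float = 0.0) -> bool:
    # h: (d,) hidden-state feature of the input under test
    return float(h @ direction) > threshold
```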
arXiv Detail & Related papers (2024-10-30T10:33:10Z)
- Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning [24.059033969435973]
This paper presents a two-stage privacy attack strategy that targets the vulnerabilities in the architecture of contemporary language models.
Our comparative experiments demonstrate superior attack performance across various datasets and scenarios.
We call for the community to recognize and address these potential privacy risks in designing large language models.
arXiv Detail & Related papers (2023-12-10T01:19:59Z)
- Scalable Extraction of Training Data from (Production) Language Models [93.7746567808049]
This paper studies extractable memorization: training data that an adversary can efficiently extract by querying a machine learning model without prior knowledge of the training dataset.
We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT.
arXiv Detail & Related papers (2023-11-28T18:47:03Z)
- Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game [86.66627242073724]
This paper presents a dataset of over 126,000 prompt injection attacks and 46,000 prompt-based "defenses" against prompt injection.
To the best of our knowledge, this is currently the largest dataset of human-generated adversarial examples for instruction-following LLMs.
We also use the dataset to create a benchmark for resistance to two types of prompt injection, which we refer to as prompt extraction and prompt hijacking.
arXiv Detail & Related papers (2023-11-02T06:13:36Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
However, FL is vulnerable to poisoning attacks, which undermine model integrity through both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense that is robust against such strong adaptive adversaries, remains effective on real-world data, and adds an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence [37.63047048491312]
We propose a generative embedding inversion attack (GEIA) that aims to reconstruct input sequences based only on their sentence embeddings.
Given the black-box access to a language model, we treat sentence embeddings as initial tokens' representations and train or fine-tune a powerful decoder model to decode the whole sequences directly.
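A minimal sketch of this decoder setup, assuming GPT-2 as the attacker's decoder and a linear projection from the victim sentence embedding into the decoder's input space (both are illustrative assumptions, not necessarily GEIA's exact configuration):

```python
# Sketch: treat the projected sentence embedding as the first "token" of a
# GPT-2 decoder and fine-tune the decoder to regenerate the original text.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
dec = GPT2LMHeadModel.from_pretrained("gpt2")
proj = nn.Linear(768, dec.config.n_embd)  # victim embedding dim (assumed 768) -> decoder hidden size

def reconstruction_loss(sent_emb: torch.Tensor, text: str) -> torch.Tensor:
    # sent_emb: (1, 768) sentence embedding obtained from the black-box encoder
    ids = tok(text, return_tensors="pt").input_ids              # (1, T) target token ids
    tok_embs = dec.transformer.wte(ids)                         # (1, T, n_embd) token embeddings
    prefix = proj(sent_emb).unsqueeze(1)                        # (1, 1, n_embd) embedding as prefix
    inputs_embeds = torch.cat([prefix, tok_embs], dim=1)        # (1, T+1, n_embd)
    labels = torch.cat([torch.full((1, 1), -100), ids], dim=1)  # no loss on the prefix position
    return dec(inputs_embeds=inputs_embeds, labels=labels).loss  # causal LM loss over the text
```

Minimizing this loss over many (embedding, text) pairs trains the decoder to invert embeddings; at inference time the trained decoder generates the sequence autoregressively from the prefix alone.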
arXiv Detail & Related papers (2023-05-04T17:31:41Z)
- Text Revealer: Private Text Reconstruction via Model Inversion Attacks against Transformers [22.491785618530397]
We formulate Text Revealer, the first model inversion attack for text reconstruction against text classification with transformers.
Our attack faithfully reconstructs private texts included in the training data, given access to the target model.
Our experiments demonstrate that the attack is effective on datasets with different text lengths and can accurately reconstruct private texts.
arXiv Detail & Related papers (2022-09-21T17:05:12Z)
- Towards Variable-Length Textual Adversarial Attacks [68.27995111870712]
It is non-trivial to conduct textual adversarial attacks on natural language processing tasks due to the discreteness of data.
In this paper, we propose variable-length textual adversarial attacks (VL-Attack).
Our method can achieve $33.18$ BLEU score on IWSLT14 German-English translation, achieving an improvement of $1.47$ over the baseline model.
arXiv Detail & Related papers (2021-04-16T14:37:27Z)
- BERT-ATTACK: Adversarial Attack Against BERT Using BERT [77.82947768158132]
Adversarial attacks on discrete data (such as text) are more challenging than those on continuous data (such as images).
We propose BERT-Attack, a high-quality and effective method to generate adversarial samples.
Our method outperforms state-of-the-art attack strategies in both success rate and perturb percentage.
arXiv Detail & Related papers (2020-04-21T13:30:02Z)