Query Recovery from Easy to Hard: Jigsaw Attack against SSE
- URL: http://arxiv.org/abs/2403.01155v1
- Date: Sat, 2 Mar 2024 09:57:05 GMT
- Title: Query Recovery from Easy to Hard: Jigsaw Attack against SSE
- Authors: Hao Nie, Wei Wang, Peng Xu, Xianglong Zhang, Laurence T. Yang, Kaitai Liang
- Abstract summary: Searchable symmetric encryption schemes often unintentionally disclose certain sensitive information, such as access, volume, and search patterns.
We find that the effectiveness of query recovery attacks depends on the volume/frequency distribution of keywords.
We propose a Jigsaw attack that begins by accurately identifying and recovering those distinctive queries.
- Score: 22.046278061025323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Searchable symmetric encryption schemes often unintentionally disclose certain sensitive information, such as access, volume, and search patterns. Attackers can exploit such leakages and other available knowledge related to the user's database to recover queries. We find that the effectiveness of query recovery attacks depends on the volume/frequency distribution of keywords. Queries containing keywords with high volumes/frequencies are more susceptible to recovery, even when countermeasures are implemented. Attackers can also effectively leverage these ``special'' queries to recover all others. By exploiting the above finding, we propose a Jigsaw attack that begins by accurately identifying and recovering those distinctive queries. Leveraging the volume, frequency, and co-occurrence information, our attack achieves $90\%$ accuracy on three tested datasets, which is comparable to previous attacks (Oya et al., USENIX '22 and Damie et al., USENIX '21). With the same runtime, our attack demonstrates an advantage over the attack proposed by Oya et al. (approximately $15\%$ more accuracy when the keyword universe size is 15k). Furthermore, our proposed attack outperforms existing attacks against widely studied countermeasures, achieving roughly $60\%$ and $85\%$ accuracy against padding and obfuscation, respectively. In this context, with a large keyword universe ($\geq$3k), it surpasses current state-of-the-art attacks by more than $20\%$.
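The strategy the abstract describes — pin down "distinctive" high-volume queries first, then use co-occurrence with those anchors to recover the rest — can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's actual Jigsaw algorithm; the function `recover_queries` and all of its parameters are invented for this sketch, and the scoring is deliberately simplistic.

```python
# Hypothetical sketch (not the authors' code): greedy query recovery that
# seeds on volume matches and extends via co-occurrence consistency.
import numpy as np

def recover_queries(q_vol, kw_vol, q_cooc, kw_cooc, n_seed=2):
    """Map observed query indices to candidate keyword indices.

    q_vol   : observed volume per query (length m)
    kw_vol  : keyword volumes from auxiliary knowledge (length n)
    q_cooc  : m x m query co-occurrence matrix (observed)
    kw_cooc : n x n keyword co-occurrence matrix (auxiliary)
    Returns a dict {query_index: keyword_index}.
    """
    n = len(kw_vol)
    assigned = {}
    # Step 1: seed with the most distinctive queries — the highest-volume
    # ones, each matched to the keyword with the closest volume.
    order = np.argsort(-np.asarray(q_vol))
    for q in order[:n_seed]:
        assigned[q] = int(np.argmin(np.abs(np.asarray(kw_vol) - q_vol[q])))
    # Step 2: extend greedily; each remaining query takes the unassigned
    # keyword minimizing volume mismatch plus co-occurrence mismatch with
    # the already-recovered (query, keyword) pairs.
    for q in order[n_seed:]:
        best, best_score = None, float("inf")
        for k in range(n):
            if k in assigned.values():
                continue
            score = abs(q_vol[q] - kw_vol[k])
            for q2, k2 in assigned.items():
                score += abs(q_cooc[q][q2] - kw_cooc[k][k2])
            if score < best_score:
                best, best_score = k, score
        assigned[q] = best
    return assigned
```

On a toy instance where the observed query statistics mirror the auxiliary keyword statistics exactly, the sketch recovers the identity mapping; the real attack, of course, must cope with noisy and partial auxiliary knowledge.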
Related papers
- S-Leak: Leakage-Abuse Attack Against Efficient Conjunctive SSE via s-term Leakage [13.222101654411281]
Conjunctive Searchable Encryption (CSSE) enables secure conjunctive searches over encrypted data.
In this paper, we reveal a fundamental vulnerability in state-of-the-art CSSE schemes: s-term leakage.
We propose S-Leak, the first passive attack framework that progressively recovers conjunctive queries by exploiting s-term leakage and global leakage.
arXiv Detail & Related papers (2025-07-05T15:53:31Z) - Is Your Prompt Safe? Investigating Prompt Injection Attacks Against Open-Source LLMs [28.75283403986172]
Large Language Models (LLMs) are vulnerable to prompt-based attacks, generating harmful content or sensitive information.
This paper studies effective prompt injection attacks against the $\mathbf{14}$ most popular open-source LLMs on five attack benchmarks.
arXiv Detail & Related papers (2025-05-20T13:50:43Z) - Following Devils' Footprint: Towards Real-time Detection of Price Manipulation Attacks [10.782846331348379]
Price manipulation attacks are one of the notorious threats in decentralized finance (DeFi) applications.
We propose SMARTCAT, a novel approach for identifying price manipulation attacks in the pre-attack stage proactively.
We show that SMARTCAT significantly outperforms existing baselines with 91.6% recall and 100% precision.
arXiv Detail & Related papers (2025-02-06T02:11:24Z) - Poisoning Retrieval Corpora by Injecting Adversarial Passages [79.14287273842878]
We propose a novel attack for dense retrieval systems in which a malicious user generates a small number of adversarial passages.
When these adversarial passages are inserted into a large retrieval corpus, we show that this attack is highly effective in fooling these systems.
We also benchmark and compare a range of state-of-the-art dense retrievers, both unsupervised and supervised.
arXiv Detail & Related papers (2023-10-29T21:13:31Z) - Leakage-Abuse Attacks Against Forward and Backward Private Searchable Symmetric Encryption [13.057964839510596]
Dynamic searchable encryption (DSSE) enables a server to efficiently search and update over encrypted files.
To minimize the leakage during updates, a security notion named forward and backward privacy is expected for newly proposed DSSE schemes.
It remains underexplored whether forward and backward private DSSE is resilient against practical leakage-abuse attacks (LAAs).
arXiv Detail & Related papers (2023-09-09T06:39:35Z) - Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack [15.017990145799189]
This paper shows that anyone can exploit an easily-accessible algorithm for silent backdoor attacks.
Via this attack, the adversary does not need to design a trigger generator as seen in prior works and only requires poisoning the data.
arXiv Detail & Related papers (2023-08-31T12:38:29Z) - Evading Black-box Classifiers Without Breaking Eggs [70.72391781899597]
Decision-based evasion attacks repeatedly query a black-box classifier to generate adversarial examples.
Prior work measures the cost of such attacks by the total number of queries made to the classifier.
We argue this metric is flawed and design new attacks that reduce the number of bad queries by $1.5$-$7.3\times$.
arXiv Detail & Related papers (2023-06-05T14:04:53Z) - Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks [25.053383672515697]
We propose a novel defense, namely Adversarial Attack on Attackers (AAA), to confound SQAs towards incorrect attack directions.
In this way, SQAs are prevented regardless of the model's worst-case robustness.
arXiv Detail & Related papers (2022-05-24T15:10:50Z) - A Strong Baseline for Query Efficient Attacks in a Black Box Setting [3.52359746858894]
We propose a query efficient attack strategy to generate plausible adversarial examples on text classification and entailment tasks.
Our attack jointly leverages attention mechanism and locality sensitive hashing (LSH) to reduce the query count.
arXiv Detail & Related papers (2021-09-10T10:46:32Z) - QAIR: Practical Query-efficient Black-Box Attacks for Image Retrieval [56.51916317628536]
We study the query-based attack against image retrieval to evaluate its robustness against adversarial examples under the black-box setting.
A new relevance-based loss is designed to quantify the attack effects by measuring the set similarity on the top-k retrieval results before and after attacks.
Experiments show that the proposed attack achieves a high attack success rate with few queries against the image retrieval systems under the black-box setting.
arXiv Detail & Related papers (2021-03-04T10:18:43Z) - Composite Adversarial Attacks [57.293211764569996]
Adversarial attacks are techniques for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
arXiv Detail & Related papers (2020-12-10T03:21:16Z) - RayS: A Ray Searching Method for Hard-label Adversarial Attack [99.72117609513589]
We present the Ray Searching attack (RayS), which greatly improves the hard-label attack effectiveness as well as efficiency.
The RayS attack can also be used as a sanity check for possible "falsely robust" models.
arXiv Detail & Related papers (2020-06-23T07:01:50Z) - AdvMind: Inferring Adversary Intent of Black-Box Attacks [66.19339307119232]
We present AdvMind, a new class of estimation models that infer the adversary intent of black-box adversarial attacks in a robust manner.
On average, AdvMind detects the adversary intent with over 75% accuracy after observing fewer than 3 query batches.
arXiv Detail & Related papers (2020-06-16T22:04:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.