Peekaboo, I See Your Queries: Passive Attacks Against DSSE Via Intermittent Observations
- URL: http://arxiv.org/abs/2509.03806v1
- Date: Thu, 04 Sep 2025 01:47:22 GMT
- Title: Peekaboo, I See Your Queries: Passive Attacks Against DSSE Via Intermittent Observations
- Authors: Hao Nie, Wei Wang, Peng Xu, Wei Chen, Laurence T. Yang, Mauro Conti, Kaitai Liang
- Abstract summary: DSSE allows secure searches over a dynamic encrypted database but suffers from inherent information leakage. We propose Peekaboo, a new universal attack framework whose core design relies on inferring the search pattern. Our design achieves >0.9 adjusted Rand index for search pattern recovery and 90% query accuracy vs. FMA's 30%.
- Score: 43.35160637778568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic Searchable Symmetric Encryption (DSSE) allows secure searches over a dynamic encrypted database but suffers from inherent information leakage. Existing passive attacks against DSSE rely on persistent leakage monitoring to infer leakage patterns, whereas this work targets intermittent observation - a more practical threat model. We propose Peekaboo - a new universal attack framework - whose core design relies on inferring the search pattern and further combining it with auxiliary knowledge and other leakage. We instantiate Peekaboo over the SOTA attacks, Sap (USENIX '21) and Jigsaw (USENIX '24), to derive their "+" variants (Sap+ and Jigsaw+). Extensive experiments demonstrate that our design achieves >0.9 adjusted Rand index for search pattern recovery and 90% query accuracy vs. FMA's 30% (CCS '23). Peekaboo's accuracy scales with the number of observation rounds and observed queries, and it also resists SOTA countermeasures, with >40% accuracy against file size padding and >80% against obfuscation.
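The abstract reports search-pattern recovery quality as an adjusted Rand index (ARI) above 0.9, i.e. how well the attacker's grouping of observed queries matches the true grouping by underlying keyword. As a reminder of what that metric measures, here is a minimal pure-Python sketch; the function name and label encoding are illustrative, not taken from the paper:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(true_labels, pred_labels):
    """Adjusted Rand index between two clusterings of the same items.

    ARI = (Index - ExpectedIndex) / (MaxIndex - ExpectedIndex),
    where all terms are pair counts from the contingency table.
    Returns 1.0 for identical clusterings (up to label permutation),
    and ~0.0 for random labelings.
    """
    n = len(true_labels)
    contingency = Counter(zip(true_labels, pred_labels))
    rows = Counter(true_labels)
    cols = Counter(pred_labels)

    index = sum(comb(c, 2) for c in contingency.values())
    sum_rows = sum(comb(c, 2) for c in rows.values())
    sum_cols = sum(comb(c, 2) for c in cols.values())

    expected = sum_rows * sum_cols / comb(n, 2)
    max_index = (sum_rows + sum_cols) / 2
    return (index - expected) / (max_index - expected)

# Example: predicted cluster ids differ from the true ones, but the
# grouping of items is identical, so ARI is 1.0.
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))
```

In the attack setting, `true_labels` would assign each observed query its real keyword and `pred_labels` the attacker's inferred grouping; ARI is label-permutation invariant, so the attacker need not name the keywords correctly to score high.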
Related papers
- Hide and Seek in Embedding Space: Geometry-based Steganography and Detection in Large Language Models [44.41218866933059]
Fine-tuned LLMs can covertly encode prompt secrets into outputs via steganographic channels. We show previous schemes achieve 100% recoverability by replacing arbitrary mappings with embedding-space-derived ones. We argue that detecting fine-tuning-based steganographic attacks requires approaches beyond traditional steganalysis.
arXiv Detail & Related papers (2026-01-30T10:43:43Z) - Assimilation Matters: Model-level Backdoor Detection in Vision-Language Pretrained Models [71.44858461725893]
Given a model fine-tuned by an untrusted third party, determining whether the model has been injected with a backdoor is a critical and challenging problem. Existing detection methods usually rely on prior knowledge of the training dataset, backdoor triggers, and targets. We introduce Assimilation Matters in DETection (AMDET), a novel model-level detection framework that operates without any such prior knowledge.
arXiv Detail & Related papers (2025-11-29T06:20:00Z) - PhishParrot: LLM-Driven Adaptive Crawling to Unveil Cloaked Phishing Sites [2.6217304977339473]
PhishParrot is a crawling environment optimization system designed to counter cloaking techniques. A 21-day evaluation showed that PhishParrot improved detection accuracy by up to 33.8% over standard analysis systems.
arXiv Detail & Related papers (2025-08-04T04:04:07Z) - S-Leak: Leakage-Abuse Attack Against Efficient Conjunctive SSE via s-term Leakage [13.222101654411281]
Conjunctive Searchable Encryption (CSSE) enables secure conjunctive searches over encrypted data. In this paper, we reveal a fundamental vulnerability in state-of-the-art CSSE schemes: s-term leakage. We propose S-Leak, the first passive attack framework that progressively recovers conjunctive queries by exploiting s-term leakage and global leakage.
arXiv Detail & Related papers (2025-07-05T15:53:31Z) - Trigger without Trace: Towards Stealthy Backdoor Attack on Text-to-Image Diffusion Models [70.03122709795122]
Backdoor attacks targeting text-to-image diffusion models have advanced rapidly. Current backdoor samples often exhibit two key abnormalities compared to benign samples. We propose Trigger without Trace (TwT) by explicitly mitigating these consistencies.
arXiv Detail & Related papers (2025-03-22T10:41:46Z) - SABER: Model-agnostic Backdoor Attack on Chain-of-Thought in Neural Code Generation [15.274903870635095]
Chain-of-Thought (CoT) reasoning is proposed to further enhance the reliability of Code Language Models (CLMs). CoT models are designed to integrate CoT reasoning effectively into language models, achieving notable improvements in code generation. This study investigates the vulnerability of CoT models to backdoor injection in code generation tasks.
arXiv Detail & Related papers (2024-12-08T06:36:00Z) - AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z) - Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike the existing methods of designing a backdoor for the input/output space of diffusion models, in our method, we propose to embed the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z) - Query Recovery from Easy to Hard: Jigsaw Attack against SSE [22.046278061025323]
Searchable symmetric encryption schemes often unintentionally disclose certain sensitive information, such as access, volume, and search patterns.
We find that the effectiveness of query recovery attacks depends on the volume/frequency distribution of keywords.
We propose a Jigsaw attack that begins by accurately identifying and recovering those distinctive queries.
arXiv Detail & Related papers (2024-03-02T09:57:05Z) - Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.
By implicitly transferring the changes in the data manipulation to that in the model outputs, Memorization Discrepancy can discover the imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
arXiv Detail & Related papers (2023-06-06T14:45:24Z) - Multi-Granularity Detector for Vulnerability Fixes [13.653249890867222]
We propose MiDas (Multi-Granularity Detector for Vulnerability Fixes) to identify vulnerability-fixing commits.
MiDas constructs different neural networks for each level of code change granularity, corresponding to commit-level, file-level, hunk-level, and line-level.
MiDas outperforms the current state-of-the-art baseline in terms of AUC by 4.9% and 13.7% on Java- and Python-based datasets, respectively.
arXiv Detail & Related papers (2023-05-23T10:06:28Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor attacks are an emerging training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.