Hiding Access-pattern is Not Enough! Veil: A Storage and Communication Efficient Volume-Hiding Algorithm
- URL: http://arxiv.org/abs/2310.12491v2
- Date: Mon, 26 Feb 2024 01:14:48 GMT
- Title: Hiding Access-pattern is Not Enough! Veil: A Storage and Communication Efficient Volume-Hiding Algorithm
- Authors: Shanshan Han, Vishal Chakraborty, Michael Goodrich, Sharad Mehrotra, Shantanu Sharma
- Abstract summary: Volume leakage, coupled with prior knowledge about the data distribution or previously executed queries, can reveal ciphertexts and current user queries.
We develop Veil, a solution that prevents volume leakage by partitioning the dataset through a random mapping of keys to a set of equi-sized buckets.
- Score: 7.810877430779854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses volume leakage (i.e., leakage of the number of records in the answer set) when processing keyword queries in encrypted key-value (KV) datasets. Volume leakage, coupled with prior knowledge about data distribution and/or previously executed queries, can reveal both ciphertexts and current user queries. We develop a solution to prevent volume leakage, entitled Veil, that partitions the dataset by randomly mapping keys to a set of equi-sized buckets. Veil provides a tunable mechanism for data owners to explore the trade-off between storage and communication overheads. To make buckets indistinguishable to the adversary, Veil uses a novel padding strategy that allows buckets to overlap, reducing the need to add fake records. Both theoretical and experimental results show that Veil significantly outperforms the existing state of the art.
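The following is a minimal sketch of the bucket-partitioning idea described in the abstract, assuming a keyed pseudorandom mapping from keys to equi-sized buckets and naive per-bucket padding; the function names and dummy-record format are illustrative, not Veil's actual construction.

```python
import hmac
import hashlib
from collections import defaultdict

def bucket_of(key: str, secret: bytes, num_buckets: int) -> int:
    """Pseudorandomly map a key to one of num_buckets buckets."""
    digest = hmac.new(secret, key.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % num_buckets

def build_buckets(records, secret: bytes, num_buckets: int, bucket_size: int):
    """Partition (key, value) records into equi-sized buckets and pad each
    bucket with dummy records so every bucket holds exactly bucket_size entries."""
    buckets = defaultdict(list)
    for key, value in records:
        buckets[bucket_of(key, secret, num_buckets)].append((key, value))
    padded = []
    for b in range(num_buckets):
        content = buckets.get(b, [])
        if len(content) > bucket_size:
            raise ValueError("bucket overflow: raise bucket_size or num_buckets")
        # Pad with dummies so a fetched bucket always has the same volume.
        padded.append(content + [("__dummy__", None)] * (bucket_size - len(content)))
    return padded

# Querying a key then means fetching its entire bucket, so the server observes
# the same response volume no matter how many real records match the key.
```

Veil itself allows buckets to overlap, which reduces the number of fake records needed compared with the naive per-bucket padding shown in this sketch.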
Related papers
- Differentially Private Learned Indexes [4.290415158471898]
We address the problem of efficiently answering predicate queries on encrypted databases, i.e., databases secured by Trusted Execution Environments (TEEs).
A common strategy in modern databases to accelerate predicate queries is the use of indexes, which map attribute values (keys) to their corresponding positions in a sorted data array.
Unfortunately, indexes cannot be directly applied to encrypted databases due to strong data-dependent leakage.
We propose leveraging learned indexes, a trending technique that repurposes machine learning models as indexing structures, to build more compact DP indexes.
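As a rough illustration of the learned-index idea mentioned above, the sketch below fits a simple linear model that predicts a key's position in the sorted array and corrects the prediction within a bounded error window. This is a generic learned index in miniature, not the paper's differentially private construction, and all names are illustrative.

```python
import bisect

def fit_linear_index(sorted_keys):
    """Fit position ~ slope * key + intercept by least squares (a toy learned index)."""
    n = len(sorted_keys)
    xs, ys = sorted_keys, range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs) or 1.0
    slope = cov / var
    intercept = mean_y - slope * mean_x
    # The maximum prediction error determines the local search window.
    err = max(abs(round(slope * x + intercept) - y) for x, y in zip(xs, ys))
    return slope, intercept, err

def lookup(sorted_keys, key, slope, intercept, err):
    """Predict the position, then binary-search only within the error window."""
    guess = round(slope * key + intercept)
    lo = max(0, guess - err)
    hi = min(len(sorted_keys), guess + err + 1)
    pos = bisect.bisect_left(sorted_keys, key, lo, hi)
    return pos if pos < len(sorted_keys) and sorted_keys[pos] == key else None
```

The paper's contribution is making such index structures differentially private so that the index itself does not leak data-dependent information; that machinery is omitted here.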
arXiv Detail & Related papers (2024-10-28T16:04:58Z) - DREW: Towards Robust Data Provenance by Leveraging Error-Controlled Watermarking [58.37644304554906]
We propose Data Retrieval with Error-corrected codes and Watermarking (DREW)
DREW randomly clusters the reference dataset and injects unique error-controlled watermark keys into each cluster.
After locating the relevant cluster, embedding vector similarity retrieval is performed within the cluster to find the most accurate matches.
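A hedged sketch of the two-stage retrieval shape described above: the reference set is partitioned into clusters, and similarity search runs only inside the cluster identified by the decoded watermark. The clustering here is random and the watermark decoding is assumed to happen upstream; names and signatures are illustrative, not DREW's API.

```python
import numpy as np

def build_clusters(ids, embeddings, num_clusters, seed=0):
    """Randomly partition the reference set into clusters. In DREW, each cluster
    would additionally carry a unique error-controlled watermark key."""
    rng = np.random.default_rng(seed)
    assignment = rng.integers(0, num_clusters, size=len(ids))
    return {c: ([i for i, a in zip(ids, assignment) if a == c],
                embeddings[assignment == c])
            for c in range(num_clusters)}

def retrieve(query_emb, decoded_cluster, clusters):
    """Stage 1 (assumed done upstream): decode the watermark to identify the cluster.
    Stage 2: cosine-similarity search restricted to that cluster's members."""
    member_ids, vecs = clusters[decoded_cluster]
    q = query_emb / np.linalg.norm(query_emb)
    v = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    return member_ids[int(np.argmax(v @ q))]
```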
arXiv Detail & Related papers (2024-06-05T01:19:44Z) - d-DSE: Distinct Dynamic Searchable Encryption Resisting Volume Leakage in Encrypted Databases [24.259108931623203]
Dynamic Searchable Encryption (DSE) has emerged as a solution to efficiently handle and protect large-scale data storage in encrypted databases (EDBs).
Volume leakage poses a significant threat, as it enables adversaries to reconstruct search queries and potentially compromise the security and privacy of data.
Padding strategies are common countermeasures against this leakage, but they significantly increase storage and communication costs.
arXiv Detail & Related papers (2024-03-02T11:42:17Z) - Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference [78.65321721142624]
We focus on a memory bottleneck imposed by the key-value (KV) cache.
Existing KV cache methods approach this problem by pruning or evicting large swaths of relatively less important KV pairs.
We propose LESS, a simple integration of a constant-sized cache with eviction-based cache methods.
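As a hedged sketch of the eviction-based KV-cache idea these methods build on, the toy cache below keeps at most a fixed number of key-value pairs and evicts the least recently used pair when full; LESS additionally folds evicted pairs into a constant-sized learned state, which is only stubbed here. The class and its policy are illustrative, not the paper's algorithm.

```python
from collections import OrderedDict

class EvictingKVCache:
    """Toy KV cache holding at most `capacity` pairs; evicts the least recently
    used pair when full (a stand-in for importance-based KV-cache pruning)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()
        self.evicted = []  # LESS would compress these into a constant-sized state

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            # Drop the oldest (least recently used) pair and record it.
            self.evicted.append(self.cache.popitem(last=False))

    def get(self, key, default=None):
        if key in self.cache:
            self.cache.move_to_end(key)
            return self.cache[key]
        return default
```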
arXiv Detail & Related papers (2024-02-14T18:54:56Z) - Attendre: Wait To Attend By Retrieval With Evicted Queries in Memory-Based Transformers for Long Context Processing [2.9733429388858714]
One effective approach is to use a FIFO memory to store the keys and values of an attention sublayer from past chunks so that subsequent queries can attend to them.
We propose to use eviction policies, such as LRA and LFA, to reduce the memory size and adapt to various architectures.
We also propose the Attendre layer, a wait-to-attend mechanism by retrieving the key-value memory with evicted queries in the query memory.
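Below is a minimal sketch of a bounded key-value memory with an eviction policy, reading LRA as "least recently attended" by analogy with LRU; that expansion, and everything else in the snippet, is an assumption for illustration rather than the Attendre layer itself, and the wait-to-attend retrieval of evicted queries is not modeled here.

```python
class LRAMemory:
    """Bounded memory of (key, value) entries from past chunks. When full, evict
    the entry attended to least recently; tracking attention counts instead
    would give a least-frequently-attended (LFA) variant."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = {}        # entry_id -> (key_tensor, value_tensor)
        self.last_attended = {}  # entry_id -> step at which it was last attended
        self.step = 0

    def add(self, entry_id, key, value):
        if len(self.entries) >= self.capacity:
            victim = min(self.last_attended, key=self.last_attended.get)
            self.entries.pop(victim)
            self.last_attended.pop(victim)
        self.entries[entry_id] = (key, value)
        self.last_attended[entry_id] = self.step

    def attend(self, entry_id):
        """Mark an entry as attended at the current step and return it."""
        self.step += 1
        self.last_attended[entry_id] = self.step
        return self.entries[entry_id]
```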
arXiv Detail & Related papers (2024-01-10T02:20:48Z) - Leakage-Abuse Attacks Against Forward and Backward Private Searchable Symmetric Encryption [13.057964839510596]
Dynamic searchable symmetric encryption (DSSE) enables a server to efficiently search and update over encrypted files.
To minimize the leakage during updates, a security notion named forward and backward privacy is expected for newly proposed DSSE schemes.
It remains underexplored whether forward- and backward-private DSSE is resilient against practical leakage-abuse attacks (LAAs).
arXiv Detail & Related papers (2023-09-09T06:39:35Z) - ByzSecAgg: A Byzantine-Resistant Secure Aggregation Scheme for Federated Learning Based on Coded Computing and Vector Commitment [90.60126724503662]
ByzSecAgg is an efficient secure aggregation scheme for federated learning.
ByzSecAgg is protected against Byzantine attacks and privacy leakages.
arXiv Detail & Related papers (2023-02-20T11:15:18Z) - Recovering AES Keys with a Deep Cold Boot Attack [91.22679787578438]
Cold boot attacks inspect the corrupted random access memory soon after the power has been shut down.
In this work, we combine a novel cryptographic variant of a deep error correcting code technique with a modified SAT solver scheme to apply the attack on AES keys.
Our results show that our methods outperform state-of-the-art attack methods by a very large margin.
arXiv Detail & Related papers (2021-06-09T07:57:01Z) - Progressively Pretrained Dense Corpus Index for Open-Domain Question Answering [87.32442219333046]
We propose a simple and resource-efficient method to pretrain the paragraph encoder.
Our method outperforms an existing dense retrieval method that uses 7 times more computational resources for pretraining.
arXiv Detail & Related papers (2020-04-30T18:09:50Z) - DC-BERT: Decoupling Question and Document for Efficient Contextual Encoding [90.85913515409275]
Recent studies on open-domain question answering have achieved prominent performance improvement using pre-trained language models such as BERT.
We propose DC-BERT, a contextual encoding framework that has dual BERT models: an online BERT which encodes the question only once, and an offline BERT which pre-encodes all the documents and caches their encodings.
On SQuAD Open and Natural Questions Open datasets, DC-BERT achieves 10x speedup on document retrieval, while retaining most (about 98%) of the QA performance.
arXiv Detail & Related papers (2020-02-28T08:18:37Z)
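As a hedged sketch of the decoupling described in the DC-BERT entry, the snippet below pre-encodes and caches document representations once and, at query time, encodes only the question before scoring it against the cache. It uses a plain BERT encoder with [CLS] pooling and cosine scoring purely for illustration; DC-BERT combines the cached document encodings with the question encoding through an additional trained interaction component rather than a fixed similarity score.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def cls_embedding(texts):
    """Encode a list of texts and return their [CLS] vectors."""
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state
    return hidden[:, 0, :]  # [CLS] token per text

# Offline: pre-encode and cache all document representations once.
documents = ["Veil hides volume leakage.", "BERT encodes text."]
doc_cache = cls_embedding(documents)

# Online: encode only the question, then score it against the cached documents.
question_vec = cls_embedding(["What hides volume leakage?"])
scores = torch.nn.functional.cosine_similarity(question_vec, doc_cache)
best_doc = documents[int(scores.argmax())]
```

Caching the document encodings is what avoids re-running the document encoder for every question, which is the source of the reported retrieval speedup.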