SWAT: A System-Wide Approach to Tunable Leakage Mitigation in Encrypted Data Stores
- URL: http://arxiv.org/abs/2306.16851v2
- Date: Wed, 15 May 2024 15:55:13 GMT
- Title: SWAT: A System-Wide Approach to Tunable Leakage Mitigation in Encrypted Data Stores
- Authors: Leqian Zheng, Lei Xu, Cong Wang, Sheng Wang, Yuke Hu, Zhan Qin, Feifei Li, Kui Ren
- Abstract summary: SWAT is an efficient construction that progressively enables key-value, range-query, and dynamic workloads, while provably mitigating system-wide leakage.
SWAT is about an order of magnitude slower than an encryption-only data store that reveals various leakage patterns, and is two orders of magnitude faster than a trivial zero-leakage solution.
- Score: 24.510485610026077
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Numerous studies have underscored the significant privacy risks associated with various leakage patterns in encrypted data stores. While many solutions have been proposed to mitigate these leakages, they either (1) incur substantial overheads, (2) focus on specific subsets of leakage patterns, or (3) apply the same security notion across various workloads, thereby impeding the attainment of fine-tuned privacy-efficiency trade-offs. In light of various detrimental leakage patterns, this paper starts with an investigation into which specific leakage patterns require our focus in the contexts of key-value, range-query, and dynamic workloads, respectively. Subsequently, we introduce new security notions tailored to the specific privacy requirements of these workloads. Accordingly, we propose and instantiate SWAT, an efficient construction that progressively enables these workloads, while provably mitigating system-wide leakage via a suite of algorithms with tunable privacy-efficiency trade-offs. We conducted extensive experiments and compiled a detailed result analysis, showing the efficiency of our solution. SWAT is about an order of magnitude slower than an encryption-only data store that reveals various leakage patterns and is two orders of magnitude faster than a trivial zero-leakage solution. Meanwhile, the performance of SWAT remains highly competitive compared to other designs that mitigate specific types of leakage.
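To make the tunable privacy-efficiency trade-off concrete, the sketch below illustrates one generic volume-hiding tactic for encrypted query responses: padding each result set to the next power of a tunable base so the server only observes coarsened volumes. This is a minimal illustration of the general idea, not SWAT's actual algorithm; the function name and dummy-record format are hypothetical.

```python
import secrets

def pad_response(records, base=2, dummy=b"\x00" * 32):
    """Pad a query response so its length is a power of `base`.

    A larger base coarsens the observable volume pattern (less leakage)
    at the cost of more dummy records (more bandwidth).
    """
    padded_len = 1
    while padded_len < max(len(records), 1):
        padded_len *= base
    padded = list(records) + [dummy] * (padded_len - len(records))
    # Shuffle so dummies are not distinguishable by position.
    for i in range(len(padded) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        padded[i], padded[j] = padded[j], padded[i]
    return padded

# Example: a range query matching 5 encrypted records is padded to 8
# records with base=2, or to 16 records with base=4.
response = [b"ct1", b"ct2", b"ct3", b"ct4", b"ct5"]
print(len(pad_response(response, base=2)))  # 8
print(len(pad_response(response, base=4)))  # 16
```

Raising the base hides more of the volume pattern while inflating bandwidth and storage, which is exactly the kind of knob a tunable leakage-mitigation design exposes.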
Related papers
- Self-Defense: Optimal QIF Solutions and Application to Website Fingerprinting [8.227044921274494]
Quantitative Information Flow (QIF) provides a robust information-theoretical framework for designing secure systems with minimal information leakage.
We propose optimal solutions for constructing a new row in a known and unmodifiable information-theoretic channel, aiming to minimize leakage.
We apply our approach to the problem of website fingerprinting defense, considering a scenario where a site administrator can modify their own site but not others.
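For readers unfamiliar with QIF, the snippet below computes the standard prior and posterior Bayes vulnerability of a channel matrix and the resulting min-entropy leakage in bits. It is a generic illustration of the quantities such defenses try to minimize, not the paper's optimal row-construction procedure.

```python
import math

def bayes_vulnerability(prior):
    # Probability the adversary guesses the secret in one try, before observing.
    return max(prior)

def posterior_vulnerability(prior, channel):
    # channel[x][y] = P(observation y | secret x); each row sums to 1.
    n_cols = len(channel[0])
    return sum(
        max(prior[x] * channel[x][y] for x in range(len(prior)))
        for y in range(n_cols)
    )

def min_entropy_leakage(prior, channel):
    # Multiplicative Bayes leakage, expressed in bits.
    return math.log2(posterior_vulnerability(prior, channel) / bayes_vulnerability(prior))

# Toy 2-secret, 2-observation channel: the observable is strongly correlated
# with the secret, so almost one full bit of min-entropy leaks.
prior = [0.5, 0.5]
channel = [[0.9, 0.1],
           [0.1, 0.9]]
print(min_entropy_leakage(prior, channel))  # ~0.85 bits
```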
arXiv Detail & Related papers (2024-11-15T09:22:14Z)
- Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
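A minimal sketch of the general feature-perturbation idea (not the paper's specific mechanism): clip the extracted feature vector to bound its L2 sensitivity, then add Gaussian noise calibrated by the classical Gaussian-mechanism formula before transmission. Function and parameter names here are illustrative.

```python
import numpy as np

def privatize_features(features, clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=None):
    """Clip a feature vector to `clip_norm` and add Gaussian noise.

    The noise scale follows the classical Gaussian-mechanism calibration
    sigma = clip_norm * sqrt(2 * ln(1.25/delta)) / epsilon, which assumes
    L2 sensitivity equal to clip_norm (guaranteed here by clipping).
    """
    rng = rng or np.random.default_rng()
    features = np.asarray(features, dtype=np.float64)
    norm = np.linalg.norm(features)
    if norm > clip_norm:
        features = features * (clip_norm / norm)
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return features + rng.normal(0.0, sigma, size=features.shape)

# An edge device would call this on locally extracted features
# before sending them to the central inference server.
noisy = privatize_features([0.3, -1.2, 0.8], clip_norm=1.0, epsilon=2.0)
```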
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
- Investigating Privacy Attacks in the Gray-Box Setting to Enhance Collaborative Learning Schemes [7.651569149118461]
We study privacy attacks in the gray-box setting, where the attacker has only limited access to the model.
We deploy SmartNNCrypt, a framework that tailors homomorphic encryption to protect the portions of the model posing higher privacy risks.
arXiv Detail & Related papers (2024-09-25T18:49:21Z)
- Robust Utility-Preserving Text Anonymization Based on Large Language Models [80.5266278002083]
Text anonymization is crucial for sharing sensitive data while maintaining privacy.
Existing techniques face the emerging challenge posed by the re-identification capabilities of Large Language Models.
This paper proposes a framework composed of three LLM-based components -- a privacy evaluator, a utility evaluator, and an optimization component.
arXiv Detail & Related papers (2024-07-16T14:28:56Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Secure Outsourced Decryption for FHE-based Privacy-preserving Cloud Computing [3.125865379632205]
Homomorphic encryption (HE) is one solution for safeguarding data privacy, enabling encrypted data to be processed securely in the cloud.
We propose an outsourced decryption protocol for the prevailing RLWE-based fully homomorphic encryption schemes.
Our experiments demonstrate that the proposed protocol achieves up to a 67% acceleration in the client's local decryption, accompanied by a 50% reduction in space usage.
arXiv Detail & Related papers (2024-06-28T14:51:36Z)
- Mimicking User Data: On Mitigating Fine-Tuning Risks in Closed Large Language Models [53.50543146583101]
Fine-tuning large language models on small datasets can enhance their performance on specific downstream tasks.
Malicious actors can subtly manipulate the structure of almost any task-specific dataset to foster significantly more dangerous model behaviors.
We propose a novel mitigation strategy that mixes in safety data which mimics the task format and prompting style of the user data.
arXiv Detail & Related papers (2024-06-12T18:33:11Z)
- Leakage-Abuse Attacks Against Forward and Backward Private Searchable Symmetric Encryption [13.057964839510596]
Dynamic searchable encryption (DSSE) enables a server to efficiently search and update over encrypted files.
To minimize the leakage during updates, a security notion named forward and backward privacy is expected for newly proposed DSSE schemes.
It remains underexplored whether forward and backward private DSSE is resilient against practical leakage-abuse attacks (LAAs).
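To see what such an attack can exploit even when updates are forward and backward private, here is a deliberately simplified, hypothetical volume-matching attack: if response volumes still leak, auxiliary keyword-frequency knowledge can suffice to recover queries. This is a toy illustration, not the attack from the paper.

```python
def volume_matching_attack(observed_volumes, known_frequencies):
    """Match each observed query volume to the candidate keyword whose known
    frequency is closest. A toy volume-analysis LAA; real attacks exploit far
    richer co-occurrence and injection signals.
    """
    recovered = {}
    for qid, vol in observed_volumes.items():
        recovered[qid] = min(known_frequencies,
                             key=lambda kw: abs(known_frequencies[kw] - vol))
    return recovered

# Auxiliary knowledge: approximate document counts per keyword.
known = {"invoice": 512, "salary": 33, "password": 190}
# Leakage observed from the encrypted store: volume of each query's response.
observed = {"q1": 34, "q2": 509}
print(volume_matching_attack(observed, known))  # {'q1': 'salary', 'q2': 'invoice'}
```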
arXiv Detail & Related papers (2023-09-09T06:39:35Z)
- An Efficient and Multi-private Key Secure Aggregation for Federated Learning [41.29971745967693]
We propose an efficient and multi-private key secure aggregation scheme for federated learning.
Specifically, we modify a variant of the ElGamal encryption technique to achieve a homomorphic addition operation.
For high-dimensional deep model parameters, we introduce a super-increasing sequence to compress multi-dimensional data into one dimension.
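The dimension-compression step can be sketched as follows, under the assumption that quantized values are bounded and the packing base exceeds the largest possible coordinate-wise sum; powers of such a base form one simple super-increasing choice. This is an illustrative reading of the idea, not the paper's exact encoding.

```python
def pack(values, base):
    # Encode a vector of non-negative integers < base as a single integer.
    total = 0
    for v in reversed(values):
        total = total * base + v
    return total

def unpack(packed, base, dim):
    # Recover the vector; valid as long as every coordinate stays below `base`.
    values = []
    for _ in range(dim):
        packed, v = divmod(packed, base)
        values.append(v)
    return values

# With base > n_clients * max_value, coordinate-wise sums survive packing:
# adding the packed integers equals packing the summed vectors.
base = 10 * 100 + 1          # hypothetical: 10 clients, quantized values < 100
a, b = [3, 50, 99], [7, 20, 1]
assert unpack(pack(a, base) + pack(b, base), base, 3) == [10, 70, 100]
```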
arXiv Detail & Related papers (2023-06-15T09:05:36Z)
- On Differential Privacy and Adaptive Data Analysis with Bounded Space [76.10334958368618]
We study the space complexity of the two related fields of differential privacy and adaptive data analysis.
We show that there exists a problem P that requires exponentially more space to be solved efficiently with differential privacy.
The line of work on adaptive data analysis focuses on understanding the number of samples needed for answering a sequence of adaptive queries.
arXiv Detail & Related papers (2023-02-11T14:45:31Z)
- On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
arXiv Detail & Related papers (2021-02-11T15:09:06Z)
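In its simplest form, label differential privacy can be achieved with classical K-ary randomized response, sketched below. This is a generic baseline for intuition, not necessarily the algorithm proposed in the paper above.

```python
import math
import random

def randomized_response(label, num_classes, epsilon, rng=random):
    """K-ary randomized response: keep the true label with probability
    e^eps / (e^eps + K - 1), otherwise output a uniformly random other class.
    The released label satisfies epsilon-label-DP.
    """
    keep_prob = math.exp(epsilon) / (math.exp(epsilon) + num_classes - 1)
    if rng.random() < keep_prob:
        return label
    other = rng.randrange(num_classes - 1)
    return other if other < label else other + 1

# Privatize the labels of a toy 10-class dataset before training.
labels = [3, 3, 7, 0, 9]
private_labels = [randomized_response(y, num_classes=10, epsilon=2.0) for y in labels]
```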