AnonPSI: An Anonymity Assessment Framework for PSI
- URL: http://arxiv.org/abs/2311.18118v1
- Date: Wed, 29 Nov 2023 22:13:53 GMT
- Title: AnonPSI: An Anonymity Assessment Framework for PSI
- Authors: Bo Jiang, Jian Du, Qiang Yan
- Abstract summary: Private Set Intersection (PSI) is a protocol that enables two parties to securely compute a function over the intersected part of their shared datasets.
Recent studies have highlighted its vulnerability to Set Membership Inference Attacks (SMIA).
This paper explores the evaluation of anonymity within the PSI context.
- Score: 5.301888664281537
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Private Set Intersection (PSI) is a widely used protocol that enables two parties to securely compute a function over the intersected part of their shared datasets and has been a significant research focus over the years. However, recent studies have highlighted its vulnerability to Set Membership Inference Attacks (SMIA), where an adversary may deduce an individual's membership by invoking multiple PSI protocols. This presents a considerable risk, even in the most stringent versions of PSI, which return only the cardinality of the intersection. This paper explores the evaluation of anonymity within the PSI context. We first highlight the reasons why existing works fall short in measuring privacy leakage, and then propose two attack strategies that address these deficiencies. Furthermore, we provide theoretical guarantees on the performance of our proposed methods. We also illustrate how the integration of auxiliary information, such as the sum of payloads associated with members of the intersection (PSI-SUM), can enhance attack efficiency. We conducted a comprehensive performance evaluation of the proposed attack strategies using two real datasets. Our findings indicate that our methods markedly improve attack efficiency compared with previous work. The effectiveness of these attacks implies that relying solely on existing PSI protocols may not provide an adequate level of privacy assurance, and we recommend combining them with complementary privacy-enhancing technologies to strengthen privacy protection further.
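To make the leakage concrete, below is a minimal, self-contained sketch of how repeated invocations of a cardinality-only PSI can reveal membership. The `psi_ca_oracle` function and the bisection strategy are illustrative stand-ins, not the attack algorithms proposed in the paper.

```python
# Hypothetical sketch of a Set Membership Inference Attack (SMIA) against a
# cardinality-only PSI (PSI-CA). The oracle below stands in for one protocol
# execution that reveals only |server_set ∩ query|.

def psi_ca_oracle(server_set: set, client_query: set) -> int:
    """Simulates one PSI-CA run: leaks only the intersection cardinality."""
    return len(server_set & client_query)

def smia_bisection(server_set: set, candidates: list) -> set:
    """Pinpoints which candidates are members using roughly O(k log n)
    oracle calls (k = number of members) by pruning halves whose
    intersection cardinality is zero or full."""
    if not candidates:
        return set()
    count = psi_ca_oracle(server_set, set(candidates))
    if count == 0:
        return set()                      # no member here: prune this half
    if count == len(candidates):
        return set(candidates)            # every candidate is a member
    mid = len(candidates) // 2
    return (smia_bisection(server_set, candidates[:mid])
            | smia_bisection(server_set, candidates[mid:]))

server = {"alice", "carol", "eve", "mallory"}
probes = ["alice", "bob", "carol", "dave"]
print(smia_bisection(server, probes))     # {'alice', 'carol'}
```

Even this naive strategy recovers exact membership from nothing but cardinalities, which is the core reason the paper argues that cardinality-only PSI is not by itself a sufficient privacy guarantee.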
Related papers
- DATABench: Evaluating Dataset Auditing in Deep Learning from an Adversarial Perspective [59.66984417026933]
We introduce a novel taxonomy, classifying existing methods based on their reliance on internal features (IF) (inherent to the data) versus external features (EF) (artificially introduced for auditing).
We formulate two primary attack types: evasion attacks, designed to conceal the use of a dataset, and forgery attacks, intending to falsely implicate an unused dataset.
Building on the understanding of existing methods and attack objectives, we further propose systematic attack strategies: decoupling, removal, and detection for evasion; adversarial example-based methods for forgery.
Our benchmark, DATABench, comprises 17 evasion attacks, 5 forgery attacks, and 9
arXiv Detail & Related papers (2025-07-08T03:07:15Z)
- Authenticated Private Set Intersection: A Merkle Tree-Based Approach for Enhancing Data Integrity [12.57031390693896]
Private Set Intersection (PSI) enables secure computation of set intersections while preserving participant privacy.
Existing standard PSI protocols remain vulnerable to data integrity attacks, allowing malicious participants to extract additional intersection information.
We propose a definition of data integrity in PSI and construct two authenticated PSI schemes by integrating Merkle Trees with state-of-the-art two-party volePSI and multi-party mPSI protocols.
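As a rough illustration of the commitment layer such schemes rely on, the sketch below builds a Merkle root over a party's input set; publishing the root before the PSI runs prevents a malicious party from swapping inputs afterwards. This shows only the generic Merkle ingredient under simplified assumptions, not the paper's volePSI/mPSI integration.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Binds an entire input set to one 32-byte commitment; changing any
    element after the fact changes the root and is detectable."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate last node if odd
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Each party publishes its root before the protocol runs; audited elements
# are later checked against this pre-committed value.
print(merkle_root(sorted([b"alice", b"bob", b"carol"])).hex())
```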
arXiv Detail & Related papers (2025-06-05T05:28:59Z)
- FedRE: Robust and Effective Federated Learning with Privacy Preference [20.969342596181246]
Federated Learning (FL) employs gradient aggregation at the server for distributed training to prevent the privacy leakage of raw data.
Private information can still be divulged through the analysis of uploaded gradients from clients.
Existing methods fail to take practical issues into account by merely perturbing each sample with the same mechanism.
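A minimal sketch of the general idea (not FedRE's actual algorithm): clip each client's gradient to bound sensitivity, then add Gaussian noise calibrated to that client's own privacy preference instead of applying one global mechanism. The clipping bound and noise scales are illustrative.

```python
import numpy as np

def perturb_gradient(grad, clip, sigma, rng):
    """Clip to bound per-client sensitivity, then add Gaussian noise whose
    scale reflects the client's privacy preference (larger sigma = stronger
    privacy, noisier update)."""
    clipped = grad * min(1.0, clip / max(np.linalg.norm(grad), 1e-12))
    return clipped + rng.normal(0.0, sigma * clip, size=grad.shape)

rng = np.random.default_rng(0)
grads = [rng.standard_normal(4) for _ in range(3)]   # three clients
sigmas = [0.5, 1.0, 2.0]                             # heterogeneous preferences
noisy = [perturb_gradient(g, 1.0, s, rng) for g, s in zip(grads, sigmas)]
print(np.mean(noisy, axis=0))    # server aggregates only perturbed gradients
```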
arXiv Detail & Related papers (2025-05-08T01:50:27Z)
- Data Poisoning Attacks to Locally Differentially Private Range Query Protocols [15.664794320925562]
Local Differential Privacy (LDP) has been widely adopted to protect user privacy in decentralized data collection.
Recent studies have revealed that LDP protocols are vulnerable to data poisoning attacks.
We present the first study on data poisoning attacks targeting LDP range query protocols.
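For intuition, the toy sketch below shows the attack surface on a plain k-ary randomized response estimator: fake users who bypass the LDP perturbation can inflate a target bucket, and therefore every range estimate containing it. The paper attacks more sophisticated range query protocols; this only illustrates the generic vulnerability.

```python
import math
import random

K, EPS = 8, 1.0                               # toy domain size and LDP budget
P = math.exp(EPS) / (math.exp(EPS) + K - 1)   # keep-true probability
Q = (1 - P) / (K - 1)                         # lie probability per bucket

def k_rr(v: int) -> int:
    """k-ary randomized response: report v w.p. P, else another bucket."""
    return v if random.random() < P else random.choice(
        [u for u in range(K) if u != v])

def estimate(reports):
    """Unbiased bucket counts obtained by inverting the k-RR channel."""
    n = len(reports)
    return [(reports.count(v) - n * Q) / (P - Q) for v in range(K)]

random.seed(1)
honest = [k_rr(random.randrange(K)) for _ in range(5000)]
poisoned = honest + [0] * 500     # fake users always report target bucket 0
print(round(estimate(honest)[0]), "->", round(estimate(poisoned)[0]))
```

Because the estimator divides by (P - Q) < 1, each injected report is amplified, which is why even a small fraction of fake users can skew range answers substantially.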
arXiv Detail & Related papers (2025-03-05T12:40:34Z)
- Membership Inference Attacks Against In-Context Learning [26.57639819629732]
We present the first membership inference attack tailored for In-Context Learning (ICL)
We propose four attack strategies tailored to various constrained scenarios.
We investigate three potential defenses targeting data, instruction, and output.
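As a generic baseline for intuition (not the paper's ICL-tailored strategies), membership inference often reduces to a loss threshold test: examples that served as in-context demonstrations tend to be fit unusually well. `query_loss` below is an assumed black-box returning the model's loss on one example under a fixed prompt.

```python
# Hypothetical loss-threshold membership inference test. In an ICL setting,
# `query_loss` would wrap an LLM call; here a toy stand-in makes the
# member/non-member loss gap explicit.

def infer_membership(query_loss, candidates, threshold):
    """Flag a candidate as a member when the model fits it unusually well."""
    return {x: query_loss(x) < threshold for x in candidates}

demonstrations = {"ex1", "ex2"}                 # the hidden prompt members
toy_loss = lambda x: 0.1 if x in demonstrations else 1.3
print(infer_membership(toy_loss, ["ex1", "ex2", "ex3"], threshold=0.5))
# {'ex1': True, 'ex2': True, 'ex3': False}
```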
arXiv Detail & Related papers (2024-09-02T17:23:23Z)
- The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain high adversarial robustness to protect against potential adversarial attacks.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
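For reference, the standard split-conformal recipe whose behavior under attack the study examines looks roughly like the sketch below; adversarial perturbations break the exchangeability assumption underlying its coverage guarantee, which is the gap being probed.

```python
import numpy as np

def conformal_quantile(cal_scores, alpha):
    """Split conformal prediction: the finite-sample-corrected (1 - alpha)
    quantile of calibration nonconformity scores yields marginal coverage
    of at least 1 - alpha for exchangeable data."""
    n = len(cal_scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return float(np.sort(cal_scores)[min(k, n) - 1])

rng = np.random.default_rng(0)
cal = np.abs(rng.normal(size=1000))   # e.g. |y - f(x)| on calibration data
qhat = conformal_quantile(cal, alpha=0.1)
test = np.abs(rng.normal(size=1000))  # exchangeable test scores
print((test <= qhat).mean())          # empirical coverage, roughly 0.9
```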
arXiv Detail & Related papers (2024-05-14T18:05:19Z)
- TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
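A minimal sketch of the two ingredients under illustrative assumptions (not the paper's exact construction): a stochastic ternary compressor, whose inherent randomness is also what a DP analysis can leverage, combined with a coordinate-wise majority vote that tolerates a minority of sign-flipping Byzantine workers.

```python
import numpy as np

def ternarize(grad, rng):
    """Stochastic ternary compressor: each coordinate becomes -1, 0, or +1,
    preserving the sign direction in expectation (illustrative scheme)."""
    scale = np.max(np.abs(grad)) + 1e-12
    nonzero = rng.random(grad.shape) < np.abs(grad) / scale
    return np.sign(grad) * nonzero

def majority_vote(votes):
    """Coordinate-wise majority: each worker casts one ternary vote, so a
    minority of Byzantine workers cannot flip the aggregate direction."""
    return np.sign(np.sum(votes, axis=0))

rng = np.random.default_rng(0)
true_grad = np.array([0.8, -0.6, 0.1, -0.9])
votes = [ternarize(true_grad + 0.1 * rng.standard_normal(4), rng)
         for _ in range(9)]
votes.append(-np.sign(true_grad))     # one Byzantine worker flips all signs
print(majority_vote(votes))           # descent direction largely preserved
```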
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for the protection mechanisms that protects privacy via distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Provable Offline Preference-Based Reinforcement Learning [95.00042541409901]
We investigate the problem of offline Preference-based Reinforcement Learning (PbRL) with human feedback.
We consider the general reward setting where the reward can be defined over the whole trajectory.
We introduce a new single-policy concentrability coefficient, which can be upper bounded by the per-trajectory concentrability.
arXiv Detail & Related papers (2023-05-24T07:11:26Z)
- On the Privacy Risks of Algorithmic Recourse [17.33484111779023]
We make the first attempt at investigating whether and how an adversary can leverage recourses to infer private information about the underlying model's training data.
Our work establishes unintended privacy leakage as an important risk in the widespread adoption of recourse methods.
arXiv Detail & Related papers (2022-11-10T09:04:24Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
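A toy sketch of the analyzed setting, with illustrative dimensions: two parties hold disjoint feature columns of the same samples, train by mini-batch gradient descent, and exchange per-sample residuals; those residuals are exactly the intermediate values a privacy analysis of VLR must account for.

```python
import numpy as np

rng = np.random.default_rng(0)
n, da, db = 256, 3, 2                 # samples; feature splits of parties A, B
Xa, Xb = rng.standard_normal((n, da)), rng.standard_normal((n, db))
y = ((np.hstack([Xa, Xb]) @ rng.standard_normal(da + db)) > 0).astype(float)
wa, wb = np.zeros(da), np.zeros(db)

for step in range(300):
    idx = rng.choice(n, size=32, replace=False)    # mini-batch indices
    logits = Xa[idx] @ wa + Xb[idx] @ wb           # joint forward pass
    d = 1 / (1 + np.exp(-logits)) - y[idx]         # residuals: exchanged!
    wa -= 0.5 * Xa[idx].T @ d / len(idx)           # each party updates only
    wb -= 0.5 * Xb[idx].T @ d / len(idx)           #   its own weight block

p = np.clip(1 / (1 + np.exp(-(Xa @ wa + Xb @ wb))), 1e-9, 1 - 1e-9)
print("train log-loss:", -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))
```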
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Conformal Off-Policy Prediction in Contextual Bandits [54.67508891852636]
Conformal off-policy prediction can output reliable predictive intervals for the outcome under a new target policy.
We provide theoretical finite-sample guarantees without making any additional assumptions beyond the standard contextual bandit setup.
arXiv Detail & Related papers (2022-06-09T10:39:33Z)
- Composable finite-size effects in free-space CV-QKD systems [0.0]
We consider two classical post-processing strategies, post-selection of high-transmissivity data and data clusterization, to reduce the fluctuation-induced noise of the channel.
We show that these strategies are still able to enhance the finite-size key rate against both individual and collective attacks.
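For intuition, here is a toy sketch of the post-selection strategy (clusterization is analogous): keep only channel uses whose estimated transmissivity exceeds a threshold, which shrinks the fluctuation-induced noise at the cost of discarding data. The fading distribution and threshold are illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
tau = rng.beta(8, 2, size=10_000)   # fluctuating transmissivity samples
keep = tau > 0.8                    # post-select strong-channel rounds

# In fading CV-QKD channels the excess noise grows with the spread of
# sqrt(tau); post-selection narrows that spread on the surviving data.
print("kept fraction:", keep.mean())
print("spread before:", np.var(np.sqrt(tau)))
print("spread after: ", np.var(np.sqrt(tau[keep])))
```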
arXiv Detail & Related papers (2020-02-10T00:22:30Z)
- Privacy for Rescue: A New Testimony Why Privacy is Vulnerable In Deep Models [6.902994369582068]
We present a formal definition of the privacy protection problem in edge-cloud systems running deep models.
We analyze state-of-the-art methods and point out their drawbacks.
We propose two new metrics that more accurately measure the effectiveness of privacy protection methods.
arXiv Detail & Related papers (2019-12-31T15:55:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.