Earn While You Reveal: Private Set Intersection that Rewards Participants
- URL: http://arxiv.org/abs/2301.03889v3
- Date: Fri, 26 Apr 2024 09:44:20 GMT
- Title: Earn While You Reveal: Private Set Intersection that Rewards Participants
- Authors: Aydin Abadi
- Abstract summary: In Private Set Intersection protocols (PSIs), a non-empty result always reveals something about the private input sets of the parties.
We propose a multi-party PSI, called "Anesidora", that rewards parties who contribute their private input sets to the protocol.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In Private Set Intersection protocols (PSIs), a non-empty result always reveals something about the private input sets of the parties. Moreover, in various variants of PSI, not all parties necessarily receive or are interested in the result. Nevertheless, to date, the literature has assumed that those parties who do not receive or are not interested in the result still contribute their private input sets to the PSI for free, although doing so would cost them their privacy. In this work, for the first time, we propose a multi-party PSI, called "Anesidora", that rewards parties who contribute their private input sets to the protocol. Anesidora is efficient; it mainly relies on symmetric key primitives and its computation and communication complexities are linear with the number of parties and set cardinality. It remains secure even if the majority of parties are corrupted by active colluding adversaries.
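The abstract notes that Anesidora relies mainly on symmetric-key primitives, with computation and communication linear in the number of parties and set cardinality. The following is a minimal illustrative sketch only, not the Anesidora protocol itself: each party replaces its elements with keyed-PRF (HMAC) tags, so intersecting the tag sets reveals only the common elements. The shared key is an assumption made for brevity; real PSI protocols derive per-element tags obliviously (e.g. via an oblivious PRF) so that no single party ever holds the key in the clear.

```python
import hmac
import hashlib

def tag_set(key: bytes, items: set) -> set:
    """Replace each element with its HMAC-SHA256 tag so raw values stay hidden."""
    return {hmac.new(key, x.encode(), hashlib.sha256).digest() for x in items}

def psi(key: bytes, sets: list) -> set:
    """Toy multi-party PSI: intersect pseudorandom tags, then map back to elements.

    Cost is linear in the number of parties and in set cardinality,
    since each element is tagged once and set intersection is linear.
    """
    tagged = [tag_set(key, s) for s in sets]
    common = set.intersection(*tagged)
    # The result receiver can invert tags only for elements it already owns.
    inverse = {hmac.new(key, x.encode(), hashlib.sha256).digest(): x
               for x in sets[0]}
    return {inverse[t] for t in common}

key = b"shared-prf-key"  # assumption: in a real protocol no single party holds this
parties = [{"alice", "bob", "carol"},
           {"bob", "carol", "dave"},
           {"carol", "bob", "eve"}]
print(sorted(psi(key, parties)))  # -> ['bob', 'carol']
```

This illustrates why symmetric-key PSI scales linearly, but it omits everything that makes the paper's contribution interesting: the reward mechanism for contributing parties and security against an actively corrupt majority.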
Related papers
- Differential Privacy on Trust Graphs [54.55190841518906]
We study differential privacy (DP) in a multi-party setting where each party only trusts a (known) subset of the other parties with its data.
We give a DP algorithm for aggregation with a much better privacy-utility trade-off than in the well-studied local model of DP.
arXiv Detail & Related papers (2024-10-15T20:31:04Z) - Incentives in Private Collaborative Machine Learning [56.84263918489519]
Collaborative machine learning involves training models on data from multiple parties.
We introduce differential privacy (DP) as an incentive.
We empirically demonstrate the effectiveness and practicality of our approach on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-04-02T06:28:22Z) - Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z) - Private Membership Aggregation [32.97918488607827]
We consider the problem of private membership aggregation (PMA).
In PMA, a user counts the number of times a certain element is stored in a system of independent parties.
We propose achievable schemes for each of the four variants of the problem based on the concept of cross-subspace alignment.
arXiv Detail & Related papers (2023-09-07T17:33:27Z) - How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z) - Two-party secure semiquantum summation against the collective-dephasing noise [3.312385039704987]
The term 'semi-honest' implies that the TP cannot conspire with others but is able to mount all kinds of attacks.
This protocol employs logical qubits as traveling particles to overcome the negative influence of collective-dephasing noise.
The security analysis shows that this protocol can effectively prevent outside attacks from Eve and participant attacks from the TP.
arXiv Detail & Related papers (2022-05-15T01:10:20Z) - Multi-party quantum private comparison of size relation with d-level single-particle states [0.0]
Two novel multi-party quantum private comparison protocols for size-relation comparison are constructed.
Each protocol can compare the size relation of secret integers from n parties, rather than just their equality, within a single execution.
arXiv Detail & Related papers (2022-05-13T00:34:52Z) - Multi-party Quantum Private Comparison Based on the Entanglement Swapping of d-level Cat States and d-level Bell states [0.0]
In our protocol, n parties employ unitary operations to encode their private secrets and can compare the equality of their private secrets within a single execution of the protocol.
One party cannot obtain other parties' secrets except for the case that their secrets are identical.
The semi-honest TP cannot learn any information about the parties' secrets except the final comparison result, i.e., whether all private secrets from the n parties are equal.
arXiv Detail & Related papers (2022-05-10T02:14:18Z) - Post-processing of Differentially Private Data: A Fairness Perspective [53.29035917495491]
This paper shows that post-processing causes disparate impacts on individuals or groups.
It analyzes two critical settings: the release of differentially private datasets and the use of such private datasets for downstream decisions.
It proposes a novel post-processing mechanism that is (approximately) optimal under different fairness metrics.
arXiv Detail & Related papers (2022-01-24T02:45:03Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z) - An Accurate, Scalable and Verifiable Protocol for Federated Differentially Private Averaging [0.0]
We tackle challenges regarding the privacy guarantees provided to participants and the correctness of the computation in the presence of malicious parties.
Our first contribution is a scalable protocol in which participants exchange correlated Gaussian noise along the edges of a network graph.
Our second contribution enables users to prove the correctness of their computations without compromising the efficiency and privacy guarantees of the protocol.
arXiv Detail & Related papers (2020-06-12T14:21:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.