Privacy Against Hypothesis-Testing Adversaries for Quantum Computing
- URL: http://arxiv.org/abs/2302.12405v1
- Date: Fri, 24 Feb 2023 02:10:27 GMT
- Title: Privacy Against Hypothesis-Testing Adversaries for Quantum Computing
- Authors: Farhad Farokhi
- Abstract summary: This paper presents a novel definition for data privacy in quantum computing based on quantum hypothesis testing.
The relationship between privacy against hypothesis-testing adversaries, defined in this paper, and quantum differential privacy is then examined.
- Score: 14.095523601311374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A novel definition for data privacy in quantum computing based on quantum
hypothesis testing is presented in this paper. The parameters in this privacy
notion possess an operational interpretation based on the success or failure of an
omnipotent adversary attempting to distinguish the private categories to which
the data belongs using arbitrary measurements on quantum states. Important
properties of post-processing and composition are then proved for the new
notion of privacy. The relationship between privacy against hypothesis-testing
adversaries, defined in this paper, and quantum differential privacy is then
examined. It is shown that these definitions are intertwined in some parameter
regimes. This enables us to provide an interpretation for the privacy budget in
quantum differential privacy based on its relationship with privacy against
hypothesis-testing adversaries.
Related papers
- The Effect of Quantization in Federated Learning: A Rényi Differential Privacy Perspective [15.349042342071439]
Federated Learning (FL) is an emerging paradigm that holds great promise for privacy-preserving machine learning using distributed data.
To enhance privacy, FL can be combined with Differential Privacy (DP), which involves adding Gaussian noise to the model weights.
This research paper investigates the impact of quantization on privacy in FL systems.
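The noise-plus-quantization pipeline this entry studies can be sketched as follows; the clipping threshold, the textbook Gaussian-mechanism noise calibration, and the uniform quantizer below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def dp_quantized_update(weights, clip_norm=1.0, eps=1.0, delta=1e-5,
                        n_bits=8, rng=np.random.default_rng(0)):
    """Sketch of one client's update in DP federated learning: clip the
    update to bound its L2 sensitivity, add Gaussian noise calibrated to
    (eps, delta)-DP, then quantize to n_bits levels before transmission."""
    w = np.asarray(weights, dtype=float)
    norm = np.linalg.norm(w)
    if norm > clip_norm:
        w = w * (clip_norm / norm)  # clip to L2 norm at most clip_norm
    # Textbook Gaussian-mechanism scale for L2 sensitivity clip_norm.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    noisy = w + rng.normal(0.0, sigma, size=w.shape)
    # Uniform quantization to [-r, r] with 2**n_bits levels.
    r = clip_norm + 4.0 * sigma
    levels = 2 ** n_bits - 1
    q = np.round((np.clip(noisy, -r, r) + r) / (2.0 * r) * levels)
    return q / levels * 2.0 * r - r

print(dp_quantized_update([0.5, -0.2, 0.1]))
```

The question the paper raises is how the quantization step interacts with the Gaussian noise in the resulting (Rényi) privacy accounting.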
arXiv Detail & Related papers (2024-05-16T13:50:46Z)
- Differential Privacy Preserving Quantum Computing via Projection Operator Measurements [15.024190374248088]
In classical computing, we can incorporate the concept of differential privacy (DP) to meet the standard of privacy preservation.
In the quantum computing scenario, researchers have extended classic DP to quantum differential privacy (QDP) by considering the quantum noise.
We show that shot noise can effectively provide privacy protection in quantum computing.
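The "shot noise" in question is the binomial sampling noise that comes from estimating an expectation value with a finite number of measurement shots; a minimal illustration of that noise source (not the paper's privacy analysis):

```python
import numpy as np

def shot_noise_estimate(p: float, shots: int,
                        rng=np.random.default_rng(1)) -> float:
    """Finite-shot estimate of a projector's expectation value p.
    Each shot is a Bernoulli(p) measurement outcome, so the estimate
    carries binomial 'shot noise' of standard deviation sqrt(p(1-p)/shots);
    this intrinsic randomness is what the paper argues can itself act as
    a privacy mechanism."""
    return rng.binomial(shots, p) / shots

# Fewer shots -> a noisier (hence more private) estimate of the same state.
print(shot_noise_estimate(0.3, 100))
```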
arXiv Detail & Related papers (2023-12-13T15:27:26Z)
- A unifying framework for differentially private quantum algorithms [0.0]
We propose a novel and general definition of neighbouring quantum states.
We demonstrate that this definition captures the underlying structure of quantum encodings.
We also investigate an alternative setting where we are provided with multiple copies of the input state.
arXiv Detail & Related papers (2023-07-10T17:44:03Z)
- Quantum Pufferfish Privacy: A Flexible Privacy Framework for Quantum Systems [19.332726520752846]
We propose a versatile privacy framework for quantum systems, termed quantum pufferfish privacy (QPP).
Inspired by classical pufferfish privacy, our formulation generalizes and addresses limitations of quantum differential privacy.
We show that QPP can be equivalently formulated in terms of the Datta-Leditzky information spectrum divergence.
arXiv Detail & Related papers (2023-06-22T17:21:17Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
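The three-stage flow can be sketched as follows; the `verify` callback stands in for the paper's randomized privacy verifier and is an assumption of this sketch, not the authors' API:

```python
def evr_release(query_output, estimated_eps, verify, budget):
    """Illustrative estimate-verify-release (EVR) flow: accept a (possibly
    optimistic) privacy estimate, verify the mechanism really meets it,
    and only then release the query output against the remaining budget."""
    # Verify: reject the release if the estimate fails verification.
    if not verify(estimated_eps):
        raise RuntimeError("privacy estimate failed verification")
    # Release: refuse outputs whose verified cost exceeds the budget.
    if estimated_eps > budget:
        raise RuntimeError("estimated privacy cost exceeds remaining budget")
    return query_output

# Toy verifier: accept any estimate at or above a known ground-truth cost.
true_eps = 0.5
result = evr_release(42, estimated_eps=0.6,
                     verify=lambda e: e >= true_eps, budget=1.0)
print(result)  # 42
```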
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- On Differentially Private Online Predictions [74.01773626153098]
We introduce an interactive variant of joint differential privacy towards handling online processes.
We demonstrate that it satisfies (suitable variants of) group privacy, composition, and post-processing.
We then study the cost of interactive joint privacy in the basic setting of online classification.
arXiv Detail & Related papers (2023-02-27T19:18:01Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
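For context, $f$-DP characterizes privacy directly by a trade-off function between the two hypothesis-testing errors; the Gaussian special case has a closed form (Dong, Roth, and Su). A minimal sketch using only the standard normal CDF and its inverse:

```python
from statistics import NormalDist

def gaussian_tradeoff(alpha: float, mu: float) -> float:
    """Trade-off function of mu-Gaussian DP: the smallest type-II error
    achievable at type-I error alpha is Phi(Phi^{-1}(1 - alpha) - mu),
    where Phi is the standard normal CDF. mu = 0 is perfect privacy
    (beta = 1 - alpha, i.e. the adversary can only guess)."""
    nd = NormalDist()
    return nd.cdf(nd.inv_cdf(1.0 - alpha) - mu)

print(gaussian_tradeoff(0.05, 1.0))
```

Deriving such trade-off curves exactly for discrete-valued mechanisms, rather than bounding them, is what this entry means by "tight" $f$-DP guarantees.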
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
- Quantum Differential Privacy: An Information Theory Perspective [2.9005223064604073]
We discuss differential privacy in an information theoretic framework by casting it as a quantum divergence.
A main advantage of this approach is that differential privacy becomes a property solely based on the output states of the computation.
arXiv Detail & Related papers (2022-02-22T08:12:50Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy might, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.