Privacy Against Hypothesis-Testing Adversaries for Quantum Computing
- URL: http://arxiv.org/abs/2302.12405v1
- Date: Fri, 24 Feb 2023 02:10:27 GMT
- Title: Privacy Against Hypothesis-Testing Adversaries for Quantum Computing
- Authors: Farhad Farokhi
- Abstract summary: This paper presents a novel definition for data privacy in quantum computing based on quantum hypothesis testing.
The relationship between privacy against hypothesis-testing adversaries, defined in this paper, and quantum differential privacy is then examined.
- Score: 14.095523601311374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A novel definition for data privacy in quantum computing based on quantum
hypothesis testing is presented in this paper. The parameters in this privacy
notion possess an operational interpretation based on the success/failure of an
omnipotent adversary being able to distinguish the private categories to which
the data belongs using arbitrary measurements on quantum states. Important
properties of post-processing and composition are then proved for the new
notion of privacy. The relationship between privacy against hypothesis-testing
adversaries, defined in this paper, and quantum differential privacy is then
examined. It is shown that these definitions are intertwined in some parameter
regimes. This enables us to provide an interpretation for the privacy budget in
quantum differential privacy based on its relationship with privacy against
hypothesis-testing adversaries.
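The operational reading of this privacy notion can be illustrated numerically. The following is a minimal sketch under illustrative assumptions, not the paper's construction: an adversary tries to distinguish two quantum states (standing in for two private categories) with the optimal Helstrom measurement, whose minimum error probability is (1/2)(1 - ||p0*rho0 - p1*rho1||_1). The qubit states and priors below are arbitrary choices made only for the demonstration.

```python
# Minimal numerical sketch (illustrative assumptions, not the paper's construction):
# an adversary tries to decide which of two private categories a quantum state encodes,
# using the optimal (Helstrom) measurement. A larger minimum error probability means
# the two categories are harder to distinguish, i.e. stronger privacy.
import numpy as np

def helstrom_error(rho0: np.ndarray, rho1: np.ndarray, prior0: float = 0.5) -> float:
    """Minimum error probability of distinguishing rho0 from rho1 with priors (prior0, 1 - prior0)."""
    delta = prior0 * rho0 - (1.0 - prior0) * rho1
    trace_norm = np.abs(np.linalg.eigvalsh(delta)).sum()  # trace norm of a Hermitian matrix
    return 0.5 * (1.0 - trace_norm)

def qubit_state(theta: float) -> np.ndarray:
    """Pure qubit density matrix for cos(theta)|0> + sin(theta)|1> (illustrative states)."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(psi, psi)

for theta in [0.0, 0.2, 0.4, np.pi / 2]:
    err = helstrom_error(qubit_state(0.0), qubit_state(theta))
    print(f"theta={theta:.2f}  adversary's minimum error = {err:.3f}")
```

Identical states give an error of 0.5 (the adversary can only guess), while orthogonal states give an error of 0 (no privacy); intermediate overlaps interpolate between the two extremes.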
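For intuition about the link to differential privacy described above, the classical analogue is well known: any test that distinguishes the outputs of an (eps, delta)-differentially private mechanism on neighbouring inputs has type-I and type-II errors alpha and beta satisfying alpha + e^eps * beta >= 1 - delta and beta + e^eps * alpha >= 1 - delta. The snippet below only checks these classical inequalities; the quantum parameter regimes in which the two definitions are intertwined are established in the paper itself and are not reproduced here.

```python
import math

def consistent_with_dp(alpha: float, beta: float, eps: float, delta: float = 0.0) -> bool:
    """Check whether type-I/type-II errors (alpha, beta) of a distinguishing test are
    compatible with the classical (eps, delta)-DP hypothesis-testing region:
    alpha + exp(eps) * beta >= 1 - delta  and  beta + exp(eps) * alpha >= 1 - delta."""
    e = math.exp(eps)
    return alpha + e * beta >= 1.0 - delta and beta + e * alpha >= 1.0 - delta

# A perfect test (alpha = beta = 0) is incompatible with any finite privacy budget:
print(consistent_with_dp(0.0, 0.0, eps=1.0))   # False
# A test no better than random guessing (alpha + beta = 1) is compatible with any eps >= 0:
print(consistent_with_dp(0.3, 0.7, eps=0.1))   # True
```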
Related papers
- Differential Privacy Overview and Fundamental Techniques [63.0409690498569]
This chapter is meant to be part of the book "Differential Privacy in Artificial Intelligence: From Theory to Practice".
It starts by illustrating various attempts to protect data privacy, emphasizing where and why they failed.
It then defines the key actors, tasks, and scopes that make up the domain of privacy-preserving data analysis.
arXiv Detail & Related papers (2024-11-07T13:52:11Z) - A Statistical Viewpoint on Differential Privacy: Hypothesis Testing, Representation and Blackwell's Theorem [30.365274034429508]
We argue that differential privacy can be considered a pure statistical concept.
$f$-differential privacy is a unified framework for analyzing privacy bounds in data analysis and machine learning.
arXiv Detail & Related papers (2024-09-14T23:47:22Z) - Differential Privacy Preserving Quantum Computing via Projection Operator Measurements [15.024190374248088]
In classical computing, we can incorporate the concept of differential privacy (DP) to meet the standard of privacy preservation.
In the quantum computing scenario, researchers have extended classic DP to quantum differential privacy (QDP) by considering the quantum noise.
We show that shot noise can effectively provide privacy protection in quantum computing (see the illustrative shot-noise sketch after this list).
arXiv Detail & Related papers (2023-12-13T15:27:26Z) - A unifying framework for differentially private quantum algorithms [0.0]
We propose a novel and general definition of neighbouring quantum states.
We demonstrate that this definition captures the underlying structure of quantum encodings.
We also investigate an alternative setting where we are provided with multiple copies of the input state.
arXiv Detail & Related papers (2023-07-10T17:44:03Z) - Quantum Pufferfish Privacy: A Flexible Privacy Framework for Quantum Systems [19.332726520752846]
We propose a versatile privacy framework for quantum systems, termed quantum pufferfish privacy (QPP).
Inspired by classical pufferfish privacy, our formulation generalizes and addresses limitations of quantum differential privacy.
We show that QPP can be equivalently formulated in terms of the Datta-Leditzky information spectrum divergence.
arXiv Detail & Related papers (2023-06-22T17:21:17Z) - A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
arXiv Detail & Related papers (2023-04-17T00:38:01Z) - On Differentially Private Online Predictions [74.01773626153098]
We introduce an interactive variant of joint differential privacy towards handling online processes.
We demonstrate that it satisfies suitable variants of group privacy, composition, and post-processing.
We then study the cost of interactive joint privacy in the basic setting of online classification.
arXiv Detail & Related papers (2023-02-27T19:18:01Z) - How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z) - Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z) - Quantum Differential Privacy: An Information Theory Perspective [2.9005223064604073]
We discuss differential privacy in an information theoretic framework by casting it as a quantum divergence.
A main advantage of this approach is that differential privacy becomes a property solely based on the output states of the computation.
arXiv Detail & Related papers (2022-02-22T08:12:50Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks, trained with differential privacy, in some settings might be even more vulnerable in comparison to non-private versions.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
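The role of shot noise mentioned in the entry on differentially private quantum computing via projection operator measurements can be illustrated with a toy classical simulation. This is a hedged sketch under simplifying assumptions, not the cited paper's analysis: the probability value and shot counts below are arbitrary, and the only point made is that a finite number of measurement shots introduces sampling noise into the estimated outcome probability.

```python
# Toy classical simulation (arbitrary probability and shot counts; not the cited paper's
# analysis): estimating the outcome probability of a projective measurement from a finite
# number of shots adds binomial sampling ("shot") noise, loosely analogous to the
# calibrated noise injected by classical DP mechanisms.
import numpy as np

rng = np.random.default_rng(0)

def estimate_probability(p_outcome: float, shots: int) -> float:
    """Estimate Pr[measurement outcome = 1] from `shots` independent repetitions."""
    return rng.binomial(shots, p_outcome) / shots

true_p = 0.73  # hypothetical ideal outcome probability
for shots in (10, 100, 10_000):
    estimates = [round(estimate_probability(true_p, shots), 3) for _ in range(5)]
    print(f"shots={shots:>6}: estimates = {estimates}")  # fewer shots -> noisier estimates
```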
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.