Quantum-secure multiparty deep learning
- URL: http://arxiv.org/abs/2408.05629v2
- Date: Fri, 13 Sep 2024 10:49:21 GMT
- Title: Quantum-secure multiparty deep learning
- Authors: Kfir Sulimany, Sri Krishna Vadlamani, Ryan Hamerly, Prahlad Iyengar, Dirk Englund
- Abstract summary: We introduce a linear algebra engine that leverages the quantum nature of light for information-theoretically secure multiparty computation.
We apply this engine to deep learning and derive rigorous upper bounds on the information leakage of both the deep neural network weights and the client's data.
Our work lays the foundation for practical quantum-secure computation and unlocks secure cloud deep learning as a field.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Secure multiparty computation enables the joint evaluation of multivariate functions across distributed users while ensuring the privacy of their local inputs. This field has become increasingly urgent due to the exploding demand for computationally intensive deep learning inference. These computations are typically offloaded to cloud computing servers, leading to vulnerabilities that can compromise the security of the clients' data. To solve this problem, we introduce a linear algebra engine that leverages the quantum nature of light for information-theoretically secure multiparty computation using only conventional telecommunication components. We apply this linear algebra engine to deep learning and derive rigorous upper bounds on the information leakage of both the deep neural network weights and the client's data via the Holevo and the Cramér-Rao bounds, respectively. Applied to the MNIST classification task, we obtain test accuracies exceeding $96\%$ while leaking less than $0.1$ bits per weight symbol and $0.01$ bits per data symbol. This weight leakage is an order of magnitude below the minimum bit precision required for accurate deep learning using state-of-the-art quantization techniques. Our work lays the foundation for practical quantum-secure computation and unlocks secure cloud deep learning as a field.
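The abstract's leakage claim can be sanity-checked with back-of-the-envelope arithmetic. In the sketch below, only the $0.1$ bits-per-weight-symbol bound comes from the abstract; the network size and the quantization precision are illustrative assumptions:

```python
# Rough sanity check of the leakage claim: at < 0.1 bits leaked per weight
# symbol, the total leak stays far below the bits a client would need to
# reconstruct an accurately quantized model. num_weights and
# min_bits_per_weight are illustrative assumptions, not values from the paper.

num_weights = 100_000          # assumed size of a small MNIST classifier
bits_leaked_per_weight = 0.1   # Holevo-bound leakage quoted in the abstract
min_bits_per_weight = 4        # assumed minimum precision for quantized inference

total_leak = num_weights * bits_leaked_per_weight
total_needed = num_weights * min_bits_per_weight

print(f"leaked : {total_leak:,.0f} bits")
print(f"needed : {total_needed:,.0f} bits")
print(f"ratio  : {total_leak / total_needed:.3f}")
```

The ratio of roughly 1/40 is consistent with the abstract's "order of magnitude below the minimum bit precision" framing.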
Related papers
- Training quantum machine learning models on cloud without uploading the data [0.0]
We propose a method that runs the parameterized quantum circuits before encoding the input data.
This enables a dataset owner to train machine learning models on quantum cloud platforms.
It is also capable of encoding a vast amount of data effectively at a later time using classical computations.
arXiv Detail & Related papers (2024-09-06T20:14:52Z)
- NeuroPlug: Plugging Side-Channel Leaks in NPUs using Space Filling Curves [0.4143603294943439]
All published countermeasures (CMs) add noise N to a signal X.
We show that it is easy to filter this noise out using targeted measurements, statistical analyses and different kinds of reasonably-assumed side information.
We present NeuroPlug, a novel CM that is immune to these attack methodologies, mainly because it uses a different formulation, CX + N.
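The algebraic intuition in the summary above can be sketched numerically: an additive countermeasure X + N is filtered out by averaging repeated measurements, whereas a multiplicative formulation CX + N is not, since the average converges to E[C]·X. This toy sketch only illustrates that algebra; the actual NeuroPlug construction (space-filling curves) is not modeled, and all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = 7.0                # secret signal sample an attacker wants to recover
n_meas = 100_000       # repeated targeted measurements by the attacker

# Conventional CM: y = x + N. Averaging filters the zero-mean noise out.
noise = rng.normal(0.0, 5.0, n_meas)
additive_estimate = float(np.mean(x + noise))     # converges to x

# CX + N formulation (sketched): unknown per-measurement masking factors c.
# The average now converges to E[c]*x, so x is not recovered without
# knowing the statistics of c.
c = rng.uniform(0.2, 0.8, n_meas)                 # assumed mask, E[c] = 0.5
masked_estimate = float(np.mean(c * x + noise))   # converges to 0.5 * x

print(f"additive CM estimate: {additive_estimate:.2f}")  # close to 7.0
print(f"masked CM estimate:   {masked_estimate:.2f}")    # close to 3.5
```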
arXiv Detail & Related papers (2024-07-18T10:40:41Z)
- Fast Flux-Activated Leakage Reduction for Superconducting Quantum Circuits [84.60542868688235]
Leakage out of the computational subspace arises from the multi-level structure of qubit implementations.
We present a resource-efficient universal leakage reduction unit for superconducting qubits using parametric flux modulation.
We demonstrate that using the leakage reduction unit in repeated weight-two stabilizer measurements reduces the total number of detected errors in a scalable fashion.
arXiv Detail & Related papers (2023-09-13T16:21:32Z)
- Training quantum neural networks using the Quantum Information Bottleneck method [0.6768558752130311]
We provide a concrete method for training a quantum neural network to maximize the relevant information about a property that is transmitted through the network.
This is significant because it gives an operationally well founded quantity to optimize when training autoencoders for problems where the inputs and outputs are fully quantum.
arXiv Detail & Related papers (2022-12-05T21:11:32Z)
- Accelerating the training of single-layer binary neural networks using the HHL quantum algorithm [58.720142291102135]
This paper shows that useful information can be extracted from the quantum-mechanical implementation of the Harrow-Hassidim-Lloyd (HHL) algorithm and used to reduce the complexity of finding the solution on the classical side.
arXiv Detail & Related papers (2022-10-23T11:58:05Z)
- Communication-efficient Quantum Algorithm for Distributed Machine Learning [14.546892420890943]
Our quantum algorithm finds the model parameters with a communication complexity of $O(\frac{\log_2(N)}{\epsilon})$, where $N$ is the number of data points and $\epsilon$ is the bound on parameter errors.
The building block of our algorithm, the quantum-accelerated estimation of distributed inner product and Hamming distance, could be further applied to various tasks in distributed machine learning to accelerate communication.
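The scaling claimed in this summary can be made concrete with a small comparison against a naive classical exchange of all data points. Only the asymptotic forms come from the abstract; the constant factors and the baseline are assumptions:

```python
import math

def quantum_cost(n_points: int, eps: float) -> float:
    """Claimed O(log2(N)/eps) communication cost, up to constants."""
    return math.log2(n_points) / eps

def classical_cost(n_points: int) -> int:
    """Assumed naive baseline: exchange all N raw data points."""
    return n_points

for n in (10**3, 10**6, 10**9):
    q = quantum_cost(n, eps=0.01)
    print(f"N={n:>13,}  classical ~ {classical_cost(n):,}  quantum ~ {q:,.0f}")
```

Even at a tight error bound of $\epsilon = 0.01$, the quantum cost grows only logarithmically with the dataset size.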
arXiv Detail & Related papers (2022-09-11T15:03:58Z)
- Quantum Heterogeneous Distributed Deep Learning Architectures: Models, Discussions, and Applications [13.241451755566365]
Quantum deep learning (QDL) and distributed deep learning (DDL) are emerging to complement existing deep learning methods.
QDL achieves computational gains by replacing deep learning computations on local devices and servers with quantum deep learning.
It can increase data security by using a quantum secure communication protocol between the server and the client.
arXiv Detail & Related papers (2022-02-19T12:59:11Z)
- Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
- Faster Secure Data Mining via Distributed Homomorphic Encryption [108.77460689459247]
Homomorphic Encryption (HE) has been attracting growing attention for its ability to perform computations directly on encrypted data.
We propose a novel general distributed HE-based data mining framework towards one step of solving the scaling problem.
We verify the efficiency and effectiveness of our new framework by testing over various data mining algorithms and benchmark data-sets.
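The "computation over encrypted data" property this entry relies on can be illustrated with a toy additively homomorphic scheme (Paillier with tiny, insecure primes). This sketch is for illustration only, not this paper's distributed framework; real deployments use a vetted library and moduli of 2048 bits or more:

```python
from math import gcd

def lcm(a: int, b: int) -> int:
    return a * b // gcd(a, b)

# Toy key generation (insecure prime sizes, for illustration only).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                      # standard generator choice g = n + 1
lam = lcm(p - 1, q - 1)        # Carmichael function of n
mu = pow(lam, -1, n)           # decryption constant; valid because g = n + 1

def encrypt(m: int, r: int) -> int:
    # c = g^m * r^n mod n^2, with randomizer r coprime to n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    # m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

c1 = encrypt(20, r=1234)
c2 = encrypt(22, r=5678)
# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
print("E(20) * E(22) decrypts to", decrypt((c1 * c2) % n2))  # prints 42
```

Additive homomorphism is exactly what lets a distributed framework aggregate partial results (sums, counts, inner products) without ever decrypting the parties' inputs.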
arXiv Detail & Related papers (2020-06-17T18:14:30Z)
- Statistical Limits of Supervised Quantum Learning [90.0289160657379]
We show that if the bound on the accuracy is taken into account, quantum machine learning algorithms for supervised learning cannot achieve polylogarithmic runtimes in the input dimension.
We conclude that, when no further assumptions on the problem are made, quantum machine learning algorithms for supervised learning can achieve at most polynomial speedups over efficient classical algorithms.
arXiv Detail & Related papers (2020-01-28T17:35:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.