Federated Learning with Quantum Secure Aggregation
- URL: http://arxiv.org/abs/2207.07444v2
- Date: Fri, 15 Sep 2023 06:03:11 GMT
- Title: Federated Learning with Quantum Secure Aggregation
- Authors: Yichi Zhang, Chao Zhang, Cai Zhang, Lixin Fan, Bei Zeng, Qiang Yang
- Abstract summary: The scheme is secure in protecting private model parameters from being disclosed to semi-honest attackers.
The proposed security mechanism ensures that any attempts to eavesdrop private model parameters can be immediately detected and stopped.
- Score: 23.385315728881295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article illustrates a novel Quantum Secure Aggregation (QSA) scheme that
is designed to provide highly secure and efficient aggregation of local model
parameters for federated learning. The scheme is secure in protecting private
model parameters from being disclosed to semi-honest attackers by utilizing
quantum bits, i.e. qubits, to represent model parameters. The proposed security
mechanism ensures that any attempts to eavesdrop private model parameters can
be immediately detected and stopped. The scheme is also efficient in terms of
the low computational complexity of transmitting and aggregating model
parameters through entangled qubits. Benefits of the proposed QSA scheme are
showcased in a horizontal federated learning setting in which both
centralized and decentralized architectures are considered. It was
empirically demonstrated that the proposed QSA can be readily applied to
aggregate different types of local models, including logistic regression (LR),
convolutional neural networks (CNNs), and quantum neural networks (QNNs),
indicating the versatility of the QSA scheme. Global model performance
improves to varying extents over the local models obtained by individual
participants, while no private model parameters are disclosed to semi-honest
adversaries.
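The aggregation step that QSA protects is, classically, a (weighted) average of the participants' local model parameters. A minimal classical sketch of that step in plain Python (illustrative names only; the quantum encoding and eavesdropping detection are not reproduced here):

```python
def aggregate(local_params, weights=None):
    """Federated averaging of local model parameter vectors.

    local_params: list of equal-length lists of floats, one per participant.
    weights: optional per-participant weights (e.g. local dataset sizes);
             defaults to a uniform average.
    """
    n = len(local_params)
    if weights is None:
        weights = [1.0 / n] * n
    total = sum(weights)
    dim = len(local_params[0])
    # Coordinate-wise weighted average across participants
    return [
        sum(w * p[i] for w, p in zip(weights, local_params)) / total
        for i in range(dim)
    ]

# Three participants, each holding a two-parameter local model
global_model = aggregate([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(global_model)  # [3.0, 4.0]
```

In the QSA scheme this sum is computed over entangled qubits rather than plaintext floats, so the server never observes any individual participant's parameters.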
Related papers
- Discrete Randomized Smoothing Meets Quantum Computing [40.54768963869454]
We show how to encode all the perturbations of the input binary data in superposition and use Quantum Amplitude Estimation (QAE) to obtain a quadratic reduction in the number of calls to the model.
In addition, we propose a new binary threat model to allow for an extensive evaluation of our approach on images, graphs, and text.
arXiv Detail & Related papers (2024-08-01T20:21:52Z) - LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks [52.46420522934253]
We introduce LoRA-Ensemble, a parameter-efficient deep ensemble method for self-attention networks.
By employing a single pre-trained self-attention network with weights shared across all members, we train member-specific low-rank matrices for the attention projections.
Our method exhibits superior calibration compared to explicit ensembles and achieves similar or better accuracy across various prediction tasks and datasets.
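The member-specific adaptation described above amounts to adding a low-rank product to a shared weight matrix. A toy sketch of that update in plain Python (hypothetical function name; the actual LoRA-Ensemble implementation applies this per attention projection):

```python
def low_rank_update(W, A, B):
    """Apply a LoRA-style update: W' = W + A @ B.

    W: d_out x d_in base weight matrix (shared across all ensemble members).
    A: d_out x r and B: r x d_in member-specific low-rank factors (r << d),
       so each member adds only r * (d_out + d_in) parameters.
    All matrices are plain nested lists of floats.
    """
    d_out, d_in, r = len(W), len(W[0]), len(A[0])
    return [
        [
            W[i][j] + sum(A[i][k] * B[k][j] for k in range(r))
            for j in range(d_in)
        ]
        for i in range(d_out)
    ]

# Rank-1 update of a 2x2 shared weight matrix
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0], [2.0]]   # 2 x 1
B = [[0.5, 0.5]]     # 1 x 2
print(low_rank_update(W, A, B))  # [[1.5, 0.5], [1.0, 2.0]]
```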
arXiv Detail & Related papers (2024-05-23T11:10:32Z) - Expressive variational quantum circuits provide inherent privacy in
federated learning [2.3255115473995134]
Federated learning has emerged as a viable solution to train machine learning models without the need to share data with the central aggregator.
Standard neural network-based federated learning models have been shown to be susceptible to data leakage from the gradients shared with the server.
We show that expressive maps lead to inherent privacy against gradient inversion attacks.
arXiv Detail & Related papers (2023-09-22T17:04:50Z) - A Framework for Demonstrating Practical Quantum Advantage: Racing
Quantum against Classical Generative Models [62.997667081978825]
We build on a previously proposed framework for evaluating the generalization performance of generative models.
We establish the first comparative race towards practical quantum advantage (PQA) between classical and quantum generative models.
Our results suggest that QCBMs are more efficient in the data-limited regime than the other state-of-the-art classical generative models.
arXiv Detail & Related papers (2023-03-27T22:48:28Z) - Orthogonal Stochastic Configuration Networks with Adaptive Construction
Parameter for Data Analytics [6.940097162264939]
Randomness makes SCNs more likely to generate approximately linearly correlated hidden nodes that are redundant and of low quality.
This motivates a fundamental principle in machine learning: a model with fewer parameters tends to generalize better.
This paper proposes orthogonal SCN, termed OSCN, to filtrate out the low-quality hidden nodes for network structure reduction.
arXiv Detail & Related papers (2022-05-26T07:07:26Z) - Probabilistic Selective Encryption of Convolutional Neural Networks for
Hierarchical Services [13.643603852209091]
We propose a selective encryption (SE) algorithm to protect CNN models from unauthorized access.
Our algorithm selects important model parameters via the proposed Probabilistic Selection Strategy (PSS).
It then encrypts the most important parameters with the designed encryption method, called Distribution Preserving Random Mask (DPRM).
arXiv Detail & Related papers (2021-05-26T06:15:58Z) - Federated Learning with Unreliable Clients: Performance Analysis and
Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z) - Sampling asymmetric open quantum systems for artificial neural networks [77.34726150561087]
We present a hybrid sampling strategy which takes asymmetric properties explicitly into account, achieving fast convergence times and high scalability for asymmetric open systems.
We highlight the universal applicability of artificial neural networks to such asymmetric open systems.
arXiv Detail & Related papers (2020-12-20T18:25:29Z) - Decentralizing Feature Extraction with Quantum Convolutional Neural
Network for Automatic Speech Recognition [101.69873988328808]
We build upon a quantum convolutional neural network (QCNN) composed of a quantum circuit encoder for feature extraction.
An input speech signal is first up-streamed to a quantum computing server to extract its Mel-spectrogram.
The corresponding convolutional features are encoded using a quantum circuit algorithm with random parameters.
The encoded features are then down-streamed to the local RNN model for the final recognition.
arXiv Detail & Related papers (2020-10-26T03:36:01Z) - Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise AutoRegressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Expert concept, developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.