Enhancing Quantum Security over Federated Learning via Post-Quantum Cryptography
- URL: http://arxiv.org/abs/2409.04637v1
- Date: Fri, 6 Sep 2024 22:02:08 GMT
- Title: Enhancing Quantum Security over Federated Learning via Post-Quantum Cryptography
- Authors: Pingzhi Li, Tianlong Chen, Junyu Liu
- Abstract summary: Federated learning (FL) has become one of the standard approaches for deploying machine learning models on edge devices.
Current digital signature algorithms can protect these communicated model updates, but they fail to ensure quantum security in the era of large-scale quantum computing.
In this work, we empirically investigate the impact of the three NIST-standardized PQC digital signature algorithms (Dilithium, FALCON, and SPHINCS+) within the FL procedure.
- Score: 38.77135346831741
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) has become one of the standard approaches for deploying machine learning models on edge devices, where private training data are distributed across clients, and a shared model is learned by aggregating locally computed updates from each client. While this paradigm enhances communication efficiency by only requiring updates at the end of each training epoch, the transmitted model updates remain vulnerable to malicious tampering, posing risks to the integrity of the global model. Although current digital signature algorithms can protect these communicated model updates, they fail to ensure quantum security in the era of large-scale quantum computing. Fortunately, various post-quantum cryptography algorithms have been developed to address this vulnerability, especially the three NIST-standardized algorithms - Dilithium, FALCON, and SPHINCS+. In this work, we empirically investigate the impact of these three NIST-standardized PQC algorithms for digital signatures within the FL procedure, covering a wide range of models, tasks, and FL settings. Our results indicate that Dilithium stands out as the most efficient PQC algorithm for digital signature in federated learning. Additionally, we offer an in-depth discussion of the implications of our findings and potential directions for future research.
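To illustrate where such signatures sit in the FL loop, here is a minimal sketch that signs a serialized model update with Dilithium via the open-source liboqs-python bindings. The serialization step and the mechanism name "Dilithium3" are assumptions (older liboqs releases use the Dilithium names; newer ones use the ML-DSA naming), and any of the three standardized schemes could be substituted:

```python
# Minimal sketch: signing an FL model update with a PQC digital signature.
# Assumes the liboqs-python bindings (pip install liboqs-python); "Dilithium3"
# could be swapped for e.g. "Falcon-512" or "SPHINCS+-SHA2-128f-simple",
# depending on the mechanism names your liboqs version exposes.
import oqs
import pickle

model_update = {"layer1.weight": [0.12, -0.03], "layer1.bias": [0.5]}
message = pickle.dumps(model_update)  # serialize the update for signing

with oqs.Signature("Dilithium3") as client:
    public_key = client.generate_keypair()  # client keeps the secret key
    signature = client.sign(message)        # sign before transmission

# Server side: verify the update before aggregating it into the global model.
with oqs.Signature("Dilithium3") as server:
    assert server.verify(message, signature, public_key)
```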
Related papers
- Cryptanalysis via Machine Learning Based Information Theoretic Metrics [58.96805474751668]
We propose two novel applications of machine learning (ML) algorithms to perform cryptanalysis on any cryptosystem.
These algorithms can be readily applied in an audit setting to evaluate the robustness of a cryptosystem.
We show that our classification model correctly identifies the encryption schemes that are not IND-CPA secure, such as DES, RSA, and AES ECB, with high accuracy.
arXiv Detail & Related papers (2025-01-25T04:53:36Z)
- Privacy-Preserving Cyberattack Detection in Blockchain-Based IoT Systems Using AI and Homomorphic Encryption [22.82443900809095]
This work proposes a privacy-preserving cyberattack detection framework for blockchain-based Internet-of-Things (IoT) systems.
In our approach, artificial intelligence (AI)-driven detection modules are strategically deployed at blockchain nodes to identify real-time attacks.
To safeguard privacy, the data is encrypted using homomorphic encryption (HE) before transmission.
Our simulation results demonstrate that the proposed method not only reduces training time but also achieves detection accuracy nearly identical to the unencrypted approach, with a gap of around 0.01%.
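A hedged sketch of the encrypt-before-transmission step, using the TenSEAL library's CKKS scheme (the parameters, feature values, and linear score are illustrative assumptions, not the paper's configuration):

```python
# Illustrative only: encrypting IoT traffic features with CKKS homomorphic
# encryption before sending them to a detection node (pip install tenseal).
import tenseal as ts

context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],  # illustrative parameter choice
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # needed for rotations inside dot products

features = [0.7, 1.2, 0.05, 3.1]            # toy traffic features
enc_features = ts.ckks_vector(context, features)

# The detection node can compute on ciphertexts, e.g. a linear score:
weights = [0.5, -0.2, 1.0, 0.1]
enc_score = enc_features.dot(weights)       # encrypted inner product
print(enc_score.decrypt())                  # only the key holder decrypts
```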
arXiv Detail & Related papers (2024-12-18T05:46:53Z)
- Q-SFT: Q-Learning for Language Models via Supervised Fine-Tuning [62.984693936073974]
Value-based reinforcement learning can learn effective policies for a wide range of multi-turn problems.
However, current value-based RL methods have proven particularly challenging to scale to large language models.
We propose a novel offline RL algorithm that addresses these drawbacks, casting Q-learning as a modified supervised fine-tuning problem.
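One heavily hedged toy reading of "Q-learning as a modified supervised fine-tuning problem" (not the paper's exact objective) is a cross-entropy loss weighted by Bellman-style targets, so that learned action probabilities track value estimates:

```python
# Toy sketch only: value-weighted cross-entropy as a stand-in for the idea
# of folding Q-learning targets into a supervised fine-tuning loss.
import torch
import torch.nn.functional as F

def value_weighted_ce(logits, actions, q_targets):
    """Push up log-probabilities of taken actions in proportion to their
    (normalized) Bellman target values.
    logits: (batch, n_actions); actions: (batch,); q_targets in [0, 1]."""
    logp = F.log_softmax(logits, dim=-1)
    chosen = logp.gather(1, actions.unsqueeze(1)).squeeze(1)
    return -(q_targets * chosen).mean()

loss = value_weighted_ce(
    torch.randn(4, 10), torch.tensor([1, 3, 0, 7]), torch.rand(4)
)
```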
arXiv Detail & Related papers (2024-11-07T21:36:52Z)
- Federated Learning with Quantum Computing and Fully Homomorphic Encryption: A Novel Computing Paradigm Shift in Privacy-Preserving ML [4.92218040320554]
Federated Learning is a privacy-preserving alternative to conventional methods, allowing multiple learning clients to share model knowledge without disclosing their private data.
This work applies the Fully Homomorphic Encryption scheme to a Federated Learning Neural Network architecture that integrates both classical and quantum layers.
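A minimal sketch of the homomorphic-aggregation idea at the heart of such a design, again using TenSEAL's CKKS scheme purely for illustration (the hybrid quantum layers are out of scope here):

```python
# Illustrative: the server averages encrypted client updates without ever
# decrypting them (pip install tenseal).
import tenseal as ts

context = ts.context(
    ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40

client_updates = [[0.1, 0.2, -0.3], [0.3, 0.0, 0.1], [0.2, -0.1, 0.2]]
encrypted = [ts.ckks_vector(context, u) for u in client_updates]

agg = encrypted[0]
for enc in encrypted[1:]:
    agg = agg + enc                  # ciphertext-ciphertext addition
agg = agg * (1.0 / len(encrypted))  # scale by 1/n (plaintext multiply)

print(agg.decrypt())  # approximately the elementwise mean of the updates
```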
arXiv Detail & Related papers (2024-09-14T01:23:26Z)
- Security Concerns in Quantum Machine Learning as a Service [2.348041867134616]
Quantum machine learning (QML) is a category of algorithms that employ variational quantum circuits (VQCs) to tackle machine learning tasks.
Recent discoveries have shown that QML models can effectively generalize from limited training data samples.
QML represents a hybrid model that utilizes both classical and quantum computing resources.
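For readers unfamiliar with VQCs, here is a minimal PennyLane sketch of the kind of circuit such services evaluate; the embedding, ansatz, and sizes are illustrative assumptions:

```python
# Toy variational quantum circuit (pip install pennylane).
import pennylane as qml
from pennylane import numpy as np

n_wires = 2
dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev)
def circuit(features, weights):
    qml.AngleEmbedding(features, wires=range(n_wires))        # encode input
    qml.BasicEntanglerLayers(weights, wires=range(n_wires))   # trainable ansatz
    return qml.expval(qml.PauliZ(0))                          # scalar readout

shape = qml.BasicEntanglerLayers.shape(n_layers=2, n_wires=n_wires)
weights = np.random.uniform(0, np.pi, size=shape)  # trainable by default
print(circuit(np.array([0.3, 0.7]), weights))      # expectation in [-1, 1]
```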
arXiv Detail & Related papers (2024-08-18T18:21:24Z)
- EncCluster: Scalable Functional Encryption in Federated Learning through Weight Clustering and Probabilistic Filters [3.9660142560142067]
Federated Learning (FL) enables model training across decentralized devices by communicating solely local model updates to an aggregation server.
FL remains vulnerable to inference attacks during model update transmissions.
We present EncCluster, a novel method that integrates model compression through weight clustering with recent decentralized functional encryption (FE) and privacy-enhancing data encoding.
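The compression half of that pipeline can be illustrated with plain k-means weight clustering: each client transmits centroid values plus per-weight cluster indices instead of raw floats. The cluster count is an assumption, and the FE and probabilistic-filter stages are omitted; this is not EncCluster's actual code:

```python
# Illustrative weight clustering for update compression
# (pip install scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

weights = np.random.randn(1000).reshape(-1, 1)   # toy flattened model update
k = 16                                           # assumed cluster count
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(weights)

centroids = km.cluster_centers_.ravel()          # k float values
assignments = km.labels_.astype(np.uint8)        # one small index per weight

# Receiver reconstructs an approximate update from k floats + indices.
reconstructed = centroids[assignments]
print(np.abs(reconstructed - weights.ravel()).mean())  # quantization error
```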
arXiv Detail & Related papers (2024-06-13T14:16:50Z)
- PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels [59.66777287810985]
We introduce information-theoretic scores for privacy and utility, which quantify the average performance of an unfaithful user.
We then theoretically characterize primitives in building families of encoding schemes that motivate the use of random deep neural networks.
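A hedged sketch of the core primitive: encoding inputs through a frozen, randomly initialized network so that only the encoded view is published. The architecture and sizes are illustrative assumptions:

```python
# Illustrative: a private encoding via a frozen random network (PyTorch).
import torch
import torch.nn as nn

torch.manual_seed(0)  # the seed/weights act as the data owner's secret
encoder = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 64),
)
for p in encoder.parameters():
    p.requires_grad_(False)  # never trained; weights stay secret

x = torch.rand(32, 784)      # toy private inputs
encoded = encoder(x)         # published alongside public labels
print(encoded.shape)         # torch.Size([32, 64])
```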
arXiv Detail & Related papers (2023-03-31T18:03:53Z)
- Privacy-Preserving Federated Learning via System Immersion and Random Matrix Encryption [4.258856853258348]
Federated learning (FL) has emerged as a privacy solution for collaborative distributed learning where clients train AI models directly on their devices instead of sharing their data with a centralized (potentially adversarial) server.
We propose a Privacy-Preserving Federated Learning (PPFL) framework built on the synergy of matrix encryption and system immersion tools from control theory.
We show that our algorithm provides the same level of accuracy and convergence rate as standard FL at negligible cost, while revealing no information about clients' data.
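A toy illustration of the random-matrix-encryption idea (not the paper's system-immersion construction, which is more involved): clients share a secret invertible matrix, the server aggregates in the transformed space, and clients invert the map afterwards:

```python
# Toy random matrix "encryption" of FL updates (NumPy); illustrative only.
import numpy as np

rng = np.random.default_rng(42)
d = 4
M = rng.standard_normal((d, d))   # shared secret, invertible w.h.p.
M_inv = np.linalg.inv(M)

updates = [rng.standard_normal(d) for _ in range(3)]  # per-client updates
encrypted = [M @ u for u in updates]                  # clients encode

server_avg = np.mean(encrypted, axis=0)  # server aggregates blindly
decoded = M_inv @ server_avg             # clients decode the aggregate

# Linearity means aggregation commutes with the secret transform.
assert np.allclose(decoded, np.mean(updates, axis=0))
```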
arXiv Detail & Related papers (2022-04-05T21:28:59Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
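Secure aggregation itself is often built on pairwise masking in the style of Bonawitz et al.; the toy sketch below shows why the masks cancel in the sum (RoFL's attestation machinery is beyond this snippet):

```python
# Toy pairwise-masking secure aggregation (NumPy): each pair of clients
# shares a random mask that one adds and the other subtracts, so the
# server's sum reveals only the aggregate, not individual updates.
import numpy as np

rng = np.random.default_rng(0)
n, d = 3, 4
updates = [rng.standard_normal(d) for _ in range(n)]

# Pairwise masks s[i][j] for i < j (agreed via key exchange in practice).
masks = {(i, j): rng.standard_normal(d)
         for i in range(n) for j in range(i + 1, n)}

def masked_update(i):
    m = updates[i].copy()
    for j in range(n):
        if i < j:
            m += masks[(i, j)]
        elif j < i:
            m -= masks[(j, i)]
    return m

server_sum = sum(masked_update(i) for i in range(n))
assert np.allclose(server_sum, sum(updates))  # masks cancel exactly
```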
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present CryptoSPN, a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
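For context, a sum-product network alternates weighted sum nodes (mixtures) with product nodes over disjoint variable scopes; the toy SPN below defines a valid distribution over two binary variables. It is purely illustrative; CryptoSPN evaluates such networks under secure computation rather than in the clear:

```python
# A toy SPN over two binary variables X1, X2 (illustrative only):
# leaves are Bernoulli distributions, products combine independent scopes,
# and sums mix sub-distributions with normalized weights.
def leaf(p, x):            # Bernoulli leaf: P(X = x) with parameter p
    return p if x == 1 else 1.0 - p

def spn(x1, x2):
    # Two mixture components, each a product node over {X1} x {X2}.
    comp_a = leaf(0.8, x1) * leaf(0.3, x2)
    comp_b = leaf(0.2, x1) * leaf(0.9, x2)
    return 0.6 * comp_a + 0.4 * comp_b   # sum node, weights 0.6 / 0.4

# The total probability over all assignments is 1, as required:
total = sum(spn(a, b) for a in (0, 1) for b in (0, 1))
print(total)  # 1.0 (up to floating point)
```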
arXiv Detail & Related papers (2020-02-03T14:49:18Z)