Understanding the Resource Cost of Fully Homomorphic Encryption in Quantum Federated Learning
- URL: http://arxiv.org/abs/2603.02799v1
- Date: Tue, 03 Mar 2026 09:36:55 GMT
- Title: Understanding the Resource Cost of Fully Homomorphic Encryption in Quantum Federated Learning
- Authors: Lukas Böhm, Arjhun Swaminathan, Anika Hannemann, Erik Buchmann
- Abstract summary: Homomorphic encryption of parameters has been proposed as a solution in Quantum Federated Learning (QFL). We evaluate the overhead introduced by Fully Homomorphic Encryption (FHE) in QFL setups and assess its feasibility for real-world applications.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum Federated Learning (QFL) enables distributed training of Quantum Machine Learning (QML) models by sharing model gradients instead of raw data. However, these gradients can still expose sensitive user information. To enhance privacy, homomorphic encryption of parameters has been proposed as a solution in QFL and related frameworks. In this work, we evaluate the overhead introduced by Fully Homomorphic Encryption (FHE) in QFL setups and assess its feasibility for real-world applications. We implemented various QML models including a Quantum Convolutional Neural Network (QCNN) trained in a federated environment with parameters encrypted using the CKKS scheme. This work marks the first QCNN trained in a federated setting with CKKS-encrypted parameters. Models of varying architectures were trained to predict brain tumors from MRI scans. The experiments reveal that memory and communication overhead remain substantial, making FHE challenging to deploy. Minimizing overhead requires reducing the number of model parameters, which, however, leads to a decline in classification performance, introducing a trade-off between privacy and model complexity.
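The abstract reports that memory and communication overhead under CKKS remain substantial. The following back-of-the-envelope sketch is not from the paper; it illustrates why that overhead arises, under assumed CKKS parameters (polynomial degree 8192, roughly 200 bits of coefficient modulus) that are purely illustrative and need not match the authors' setup.

```python
def ckks_overhead(num_params: int,
                  poly_degree: int = 8192,
                  coeff_mod_bits: int = 200) -> dict:
    """Rough estimate of plaintext vs. CKKS-ciphertext payload sizes.

    A CKKS ciphertext over a ring of degree N packs N/2 real-valued slots
    and consists of two polynomials whose coefficients are reduced modulo
    a product of primes totalling `coeff_mod_bits` bits. All parameter
    choices are illustrative assumptions, not the paper's configuration.
    """
    slots = poly_degree // 2
    num_ciphertexts = -(-num_params // slots)      # ceiling division
    plain_bytes = num_params * 4                   # float32 baseline
    # two polynomials * N coefficients * coeff_mod_bits bits each
    cipher_bytes = num_ciphertexts * 2 * poly_degree * coeff_mod_bits // 8
    return {
        "ciphertexts": num_ciphertexts,
        "plain_bytes": plain_bytes,
        "cipher_bytes": cipher_bytes,
        "expansion": cipher_bytes / plain_bytes,
    }

# A small QCNN with ~1,000 trainable parameters fits into a single
# ciphertext, yet that ciphertext dwarfs the plaintext payload.
est = ckks_overhead(1000)
```

With these assumed parameters, 1,000 float32 parameters (4 KB) become a single ~400 KB ciphertext, an expansion factor over 100x, which is consistent with the paper's observation that shrinking the model is the main lever for reducing overhead, at the cost of classification performance.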
Related papers
- Quantum LEGO Learning: A Modular Design Principle for Hybrid Artificial Intelligence [63.39968536637762]
We introduce Quantum LEGO Learning, a learning framework that treats classical and quantum components as reusable, composable learning blocks. Within this framework, a pre-trained classical neural network serves as a frozen feature block, while a VQC acts as a trainable adaptive module. We develop a block-wise generalization theory that decomposes learning error into approximation and estimation components.
arXiv Detail & Related papers (2026-01-29T14:29:21Z) - PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing [48.30406812516552]
We introduce PLM, a Peripheral Language Model, developed through a co-design process that jointly optimizes model architecture and edge system constraints. PLM employs a Multi-head Latent Attention mechanism and the squared ReLU activation function to encourage sparsity, thereby reducing peak memory footprint. Evaluation results demonstrate that PLM outperforms existing small language models trained on publicly available data.
arXiv Detail & Related papers (2025-03-15T15:11:17Z) - Discrete Randomized Smoothing Meets Quantum Computing [40.54768963869454]
We show how to encode all the perturbations of the input binary data in superposition and use Quantum Amplitude Estimation (QAE) to obtain a quadratic reduction in the number of calls to the model.
In addition, we propose a new binary threat model to allow for an extensive evaluation of our approach on images, graphs, and text.
arXiv Detail & Related papers (2024-08-01T20:21:52Z) - The Quantum Imitation Game: Reverse Engineering of Quantum Machine Learning Models [2.348041867134616]
Quantum Machine Learning (QML) amalgamates quantum computing paradigms with machine learning models.
With the expansion of numerous third-party vendors in the Noisy Intermediate-Scale Quantum (NISQ) era of quantum computing, the security of QML models is of prime importance.
We assume the untrusted quantum cloud provider is an adversary having white-box access to the transpiled user-designed trained QML model during inference.
arXiv Detail & Related papers (2024-07-09T21:35:19Z) - ReasoningLM: Enabling Structural Subgraph Reasoning in Pre-trained Language Models for Question Answering over Knowledge Graph [142.42275983201978]
We propose a subgraph-aware self-attention mechanism to imitate the GNN for performing structured reasoning.
We also adopt an adaptation tuning strategy to adapt the model parameters with 20,000 subgraphs with synthesized questions.
Experiments show that ReasoningLM surpasses state-of-the-art models by a large margin, even with fewer updated parameters and less training data.
arXiv Detail & Related papers (2023-12-30T07:18:54Z) - Federated Quantum Long Short-term Memory (FedQLSTM) [58.50321380769256]
Quantum federated learning (QFL) can facilitate collaborative learning across multiple clients using quantum machine learning (QML) models.
No prior work has focused on developing a QFL framework that utilizes temporal data to approximate functions.
A novel QFL framework that is the first to integrate quantum long short-term memory (QLSTM) models with temporal data is proposed.
arXiv Detail & Related papers (2023-12-21T21:40:47Z) - Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective [106.92016199403042]
We empirically investigate knowledge transfer from larger to smaller models through a parametric perspective.
We employ sensitivity-based techniques to extract and align knowledge-specific parameters between different large language models.
Our findings highlight the critical factors contributing to the process of parametric knowledge transfer.
arXiv Detail & Related papers (2023-10-17T17:58:34Z) - Expressive variational quantum circuits provide inherent privacy in federated learning [2.3255115473995134]
Federated learning has emerged as a viable solution to train machine learning models without the need to share data with the central aggregator.
Standard neural network-based federated learning models have been shown to be susceptible to data leakage from the gradients shared with the server.
We show that expressive maps lead to inherent privacy against gradient inversion attacks.
arXiv Detail & Related papers (2023-09-22T17:04:50Z) - Reflection Equivariant Quantum Neural Networks for Enhanced Image Classification [0.7232471205719458]
We build new machine learning models that explicitly respect the symmetries inherent in their data, an approach known as geometric quantum machine learning (GQML).
We find that these networks are capable of consistently and significantly outperforming generic ansatze on complicated real-world image datasets.
arXiv Detail & Related papers (2022-12-01T04:10:26Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z) - Federated Learning with Quantum Secure Aggregation [23.385315728881295]
The scheme is secure in protecting private model parameters from being disclosed to semi-honest attackers.
The proposed security mechanism ensures that any attempts to eavesdrop private model parameters can be immediately detected and stopped.
arXiv Detail & Related papers (2022-07-09T13:21:36Z) - Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition [101.69873988328808]
We build upon a quantum convolutional neural network (QCNN) composed of a quantum circuit encoder for feature extraction.
An input speech is first up-streamed to a quantum computing server to extract Mel-spectrogram.
The corresponding convolutional features are encoded using a quantum circuit algorithm with random parameters.
The encoded features are then down-streamed to the local RNN model for the final recognition.
arXiv Detail & Related papers (2020-10-26T03:36:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.