QFed: Parameter-Compact Quantum-Classical Federated Learning
- URL: http://arxiv.org/abs/2601.09809v1
- Date: Wed, 14 Jan 2026 19:16:20 GMT
- Title: QFed: Parameter-Compact Quantum-Classical Federated Learning
- Authors: Samar Abdelghani, Soumaya Cherkaoui
- Abstract summary: We introduce QFed, a quantum-enabled federated learning framework aimed at boosting computational efficiency across edge device networks. Experimental results show that QFed achieves a 77.6% reduction in the parameter count of a VGG-like model while maintaining accuracy comparable to classical approaches in a scalable environment.
- Score: 7.031234391152914
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Organizations and enterprises across domains such as healthcare, finance, and scientific research are increasingly required to extract collective intelligence from distributed, siloed datasets while adhering to strict privacy, regulatory, and sovereignty requirements. Federated Learning (FL) enables collaborative model building without sharing sensitive raw data, but faces growing challenges posed by statistical heterogeneity, system diversity, and the computational burden from complex models. This study examines the potential of quantum-assisted federated learning, which could cut the number of parameters in classical models by polylogarithmic factors and thus lessen training overhead. Accordingly, we introduce QFed, a quantum-enabled federated learning framework aimed at boosting computational efficiency across edge device networks. We evaluate the proposed framework using the widely adopted FashionMNIST dataset. Experimental results show that QFed achieves a 77.6% reduction in the parameter count of a VGG-like model while maintaining an accuracy comparable to classical approaches in a scalable environment. These results point to the potential of leveraging quantum computing within a federated learning context to strengthen FL capabilities of edge devices.
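The abstract describes federated training in which clients learn locally and a server aggregates their models; the quantum component shrinks each client's parameter vector, while the aggregation itself stays classical. The sketch below is a minimal classical FedAvg loop (a hypothetical illustration, not QFed's actual implementation) showing the aggregation that a parameter-compact client model would plug into:

```python
# Minimal FedAvg sketch on a toy least-squares task. Each client takes one
# local gradient step; the server averages parameters weighted by local
# dataset size. (Hypothetical illustration, not the QFed codebase.)
import numpy as np

def local_update(params, data, lr=0.1):
    """One local gradient step on 0.5 * ||Xw - y||^2 / n."""
    X, y = data
    grad = X.T @ (X @ params - y) / len(y)
    return params - lr * grad

def fedavg(global_params, client_data):
    """Average client updates, weighted by local sample counts."""
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    updates = [local_update(global_params.copy(), d) for d in client_data]
    weights = sizes / sizes.sum()
    return sum(w * u for w, u in zip(weights, updates))

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ w_true))  # noiseless local data

w = np.zeros(2)
for _ in range(200):
    w = fedavg(w, clients)
print(np.round(w, 3))  # converges toward w_true
```

A quantum-compact model changes only what `local_update` trains; the server-side weighted average is unaffected, which is why the paper can compare parameter counts while holding the FL protocol fixed.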
Related papers
- Towards Heterogeneous Quantum Federated Learning: Challenges and Solutions [47.08625631041616]
Quantum federated learning (QFL) combines quantum computing and federated learning to enable decentralized model training while maintaining data privacy. Existing QFL frameworks largely focus on homogeneity among quantum clients and do not account for real-world variance in quantum data distributions, encoding techniques, hardware noise levels, and computational capacity. These differences can create instability during training, slow convergence, and reduce overall model performance.
arXiv Detail & Related papers (2025-11-27T06:35:45Z) - Enhancing Quantum Federated Learning with Fisher Information-Based Optimization [0.0]
We propose a Quantum Federated Learning (QFL) algorithm that makes use of the Fisher information computed on local client models. This approach identifies the critical parameters that significantly influence the quantum model's performance, ensuring they are preserved during the aggregation process.
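The summary above describes weighting aggregation by per-parameter Fisher information so that parameters a client deems critical survive averaging. A minimal sketch of that idea, under the common assumption of a diagonal Fisher approximation via squared gradients (hypothetical, not the cited paper's code):

```python
# Fisher-weighted aggregation sketch: each client reports parameters plus a
# diagonal Fisher estimate; the server takes a per-parameter weighted mean,
# so the client most "certain" about a parameter dominates it.
import numpy as np

def diag_fisher(per_sample_grads):
    """Diagonal Fisher approximation: mean of squared per-sample gradients."""
    return np.mean(np.square(per_sample_grads), axis=0)

def fisher_weighted_aggregate(client_params, client_fishers, eps=1e-8):
    P = np.stack(client_params)          # shape (clients, dims)
    F = np.stack(client_fishers) + eps   # eps avoids division by zero
    return (F * P).sum(axis=0) / F.sum(axis=0)

# Two clients, each confident about a different parameter:
params = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
fishers = [np.array([10.0, 0.1]), np.array([0.1, 10.0])]
agg = fisher_weighted_aggregate(params, fishers)
print(np.round(agg, 2))  # each coordinate follows the confident client
</imports>```

Plain FedAvg would return [0.5, 0.5] here; the Fisher weighting instead keeps each coordinate close to the value of the client whose data actually constrains it.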
arXiv Detail & Related papers (2025-07-23T15:14:53Z) - Quantum-Accelerated Neural Imputation with Large Language Models (LLMs) [0.0]
This paper introduces Quantum-UnIMP, a novel framework that integrates shallow quantum circuits into an LLM-based imputation architecture. Our experiments on benchmark mixed-type datasets demonstrate that Quantum-UnIMP reduces imputation error by up to 15.2% for numerical features (RMSE) and improves classification accuracy by 8.7% for categorical features (F1-Score) compared to state-of-the-art classical and LLM-based methods.
arXiv Detail & Related papers (2025-07-11T02:00:06Z) - MQFL-FHE: Multimodal Quantum Federated Learning Framework with Fully Homomorphic Encryption [5.713063730561454]
The integration of fully homomorphic encryption (FHE) into federated learning (FL) has led to significant advances in data privacy. We propose a novel multimodal quantum federated learning framework that utilizes quantum computing to counteract the performance drop resulting from FHE. Our results also demonstrate that the quantum-enhanced approach mitigates the performance degradation associated with FHE and improves classification accuracy across diverse datasets, validating the potential of quantum interventions in enhancing privacy in FL.
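The privacy goal in the entry above is that the server aggregates client updates without seeing any individual update. Real FHE needs a dedicated library (e.g. a CKKS implementation), so the sketch below uses a different but related stand-in, pairwise additive masking (secure aggregation): the masks cancel in the sum, so the server recovers only the aggregate. Purely illustrative, not the cited framework:

```python
# Additive-masking stand-in for encrypted aggregation: each client pair
# shares a random mask that one adds and the other subtracts. Individual
# masked updates look random, but the masks cancel in the server-side sum.
import numpy as np

def mask_updates(updates, rng):
    """Apply pairwise additive masks that cancel when all vectors are summed."""
    n = len(updates)
    masked = [u.copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=updates[0].shape)
            masked[i] += m   # client i adds the shared mask
            masked[j] -= m   # client j subtracts the same mask
    return masked

rng = np.random.default_rng(0)
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
masked = mask_updates(updates, rng)
# The server sees only masked vectors, yet their sum is the true sum [9, 12].
print(np.round(sum(masked), 6))
```

FHE achieves the same end differently (the server computes on ciphertexts and never decrypts individual updates); masking is shown here only because it is implementable in a few lines of stdlib-plus-NumPy code.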
arXiv Detail & Related papers (2024-11-30T19:53:25Z) - FedQNN: Federated Learning using Quantum Neural Networks [3.9554540293311864]
This study explores the innovative domain of Quantum Federated Learning (QFL) as a framework for training Quantum Machine Learning (QML) models via distributed networks.
Our proposed Federated Quantum Neural Network (FedQNN) framework emerges as a cutting-edge solution, integrating the singular characteristics of QML with the principles of classical federated learning.
arXiv Detail & Related papers (2024-03-16T08:58:03Z) - Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyze a novel aggregation framework that allows for formalizing and tackling computationally heterogeneous data.
The proposed aggregation algorithms are extensively analyzed from both a theoretical and an experimental perspective.
arXiv Detail & Related papers (2023-07-12T16:28:21Z) - Deep Equilibrium Models Meet Federated Learning [71.57324258813675]
This study explores the problem of Federated Learning (FL) by utilizing the Deep Equilibrium (DEQ) models instead of conventional deep learning networks.
We claim that incorporating DEQ models into the federated learning framework naturally addresses several open problems in FL.
To the best of our knowledge, this study is the first to establish a connection between DEQ models and federated learning.
arXiv Detail & Related papers (2023-05-29T22:51:40Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
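Data re-uploading, mentioned in the entry above, means feeding the input x into the circuit at every layer rather than only once, which is what gives a single qubit nontrivial expressive power. A toy simulation with hand-rolled 2x2 rotation matrices (a NumPy sketch, not the paper's Qiskit code; layer form Ry(theta + w*x) is an assumption for illustration):

```python
# Single-qubit data re-uploading model simulated as a product of real
# Ry rotations. Each layer re-injects the input x with trainable shift
# theta and scale w; the output is the probability of measuring |1>.
import numpy as np

def ry(angle):
    """Real 2x2 rotation Ry(angle) acting on a qubit state vector."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def model(x, thetas, ws):
    state = np.array([1.0, 0.0])          # start in |0>
    for theta, w in zip(thetas, ws):
        state = ry(theta + w * x) @ state  # data re-uploaded every layer
    return abs(state[1]) ** 2              # P(measure |1>)

# With all parameters zero every layer is the identity, so P(|1>) = 0:
print(model(0.7, np.zeros(3), np.zeros(3)))  # -> 0.0
```

Training would tune `thetas` and `ws` (e.g. by gradient descent on a cross-entropy over `model` outputs); the point of the sketch is only the repeated appearance of `x` inside the circuit.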
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - Federated Stochastic Gradient Descent Begets Self-Induced Momentum [151.4322255230084]
Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems.
We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process.
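The claim above is that federated SGD implicitly behaves as if the server applied momentum. The sketch below makes that term explicit by writing a server-side momentum aggregation step directly (an illustration of the mechanism, not the cited paper's derivation; the quadratic objective and constants are assumptions):

```python
# Server-side momentum in federated aggregation: the server accumulates a
# velocity from averaged client deltas and moves the global model along it.
import numpy as np

def server_momentum_step(global_w, client_deltas, velocity, beta=0.9, lr=1.0):
    avg_delta = np.mean(client_deltas, axis=0)  # average client update
    velocity = beta * velocity + avg_delta      # momentum accumulation
    return global_w + lr * velocity, velocity

# Toy run: 4 clients each push toward the same target with a little noise.
rng = np.random.default_rng(1)
target = np.array([3.0, -1.0])
w, v = np.zeros(2), np.zeros(2)
for _ in range(100):
    deltas = [-0.05 * (w - target) + 0.01 * rng.normal(size=2)
              for _ in range(4)]
    w, v = server_momentum_step(w, deltas, v)
print(np.round(w, 1))  # close to target
```

With `beta = 0`, this collapses to plain averaged-delta aggregation; the cited result says plain federated SGD already behaves as if `beta > 0`.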
arXiv Detail & Related papers (2022-02-17T02:01:37Z) - Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)