New Insights on Unfolding and Fine-tuning Quantum Federated Learning
- URL: http://arxiv.org/abs/2506.20016v1
- Date: Tue, 24 Jun 2025 21:17:48 GMT
- Title: New Insights on Unfolding and Fine-tuning Quantum Federated Learning
- Authors: Shanika Iroshi Nanayakkara, Shiva Raj Pokhrel
- Abstract summary: This study addresses the core limitations of Quantum Federated Learning (QFL), streamlining its applicability to complex challenges such as healthcare and genomic research. By developing self-adaptive fine-tuning, the proposed method proves particularly effective in critical applications such as gene expression analysis and cancer detection, enhancing diagnostic precision and predictive modeling within quantum systems.
- Score: 12.248184406275405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Client heterogeneity poses significant challenges to the performance of Quantum Federated Learning (QFL). To overcome these limitations, we propose a new approach leveraging deep unfolding, which enables clients to autonomously optimize hyperparameters, such as learning rates and regularization factors, based on their specific training behavior. This dynamic adaptation mitigates overfitting and ensures robust optimization in highly heterogeneous environments where standard aggregation methods often fail. Our framework achieves approximately 90% accuracy, significantly outperforming traditional methods, which typically yield around 55% accuracy, as demonstrated through real-time training on IBM quantum hardware and Qiskit Aer simulators. By developing self-adaptive fine-tuning, the proposed method proves particularly effective in critical applications such as gene expression analysis and cancer detection, enhancing diagnostic precision and predictive modeling within quantum systems. Our results are attributed to the convergence-aware, learnable optimization steps intrinsic to the deep-unfolded framework, which preserve generalization. Hence, this study addresses the core limitations of conventional QFL, streamlining its applicability to complex challenges such as healthcare and genomic research.
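To make the deep-unfolding idea concrete, the sketch below (an illustrative assumption, not the authors' released code) unrolls a fixed number of local update steps and gives each step its own learnable learning rate and regularization factor, which a client meta-trains on its own data. A plain linear model stands in for the client's variational quantum circuit, and all names (UnfoldedClient, n_steps, W0) are hypothetical.

```python
import torch
import torch.nn as nn

class UnfoldedClient(nn.Module):
    """Unrolls n_steps local updates; step t owns a learnable learning rate and regularizer."""
    def __init__(self, n_steps: int = 5):
        super().__init__()
        # Per-step hyperparameters, kept positive via a log-space parameterization.
        self.log_lr = nn.Parameter(torch.full((n_steps,), -2.0))
        self.log_reg = nn.Parameter(torch.full((n_steps,), -4.0))
        self.n_steps = n_steps

    def unroll(self, W0, x, y):
        # Start from the global model W0 received from the server.
        W = W0
        for t in range(self.n_steps):
            logits = x @ W
            loss = (nn.functional.cross_entropy(logits, y)
                    + torch.exp(self.log_reg[t]) * W.pow(2).sum())
            # create_graph=True keeps each step differentiable w.r.t. the hyperparameters.
            (grad,) = torch.autograd.grad(loss, W, create_graph=True)
            W = W - torch.exp(self.log_lr[t]) * grad
        return W

# Meta-train the per-client schedule so the unrolled steps generalize to held-out local data.
client = UnfoldedClient(n_steps=5)
meta_opt = torch.optim.Adam(client.parameters(), lr=1e-2)
W0 = torch.zeros(16, 2, requires_grad=True)            # stand-in for the broadcast global model
x_tr, y_tr = torch.randn(64, 16), torch.randint(0, 2, (64,))
x_val, y_val = torch.randn(64, 16), torch.randint(0, 2, (64,))
for _ in range(200):
    W_T = client.unroll(W0, x_tr, y_tr)
    val_loss = nn.functional.cross_entropy(x_val @ W_T, y_val)
    meta_opt.zero_grad()
    val_loss.backward()
    meta_opt.step()
```

Because the per-step hyperparameters are learned from each client's own training behavior, heterogeneous clients can follow different update schedules while the server continues to aggregate only model weights.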
Related papers
- TensoMeta-VQC: A Tensor-Train-Guided Meta-Learning Framework for Robust and Scalable Variational Quantum Computing [60.996803677584424]
TensoMeta-VQC is a novel tensor-train (TT)-guided meta-learning framework designed to improve the robustness and scalability of VQC significantly. Our framework fully delegates the generation of quantum circuit parameters to a classical TT network, effectively decoupling optimization from quantum hardware.
arXiv Detail & Related papers (2025-08-01T23:37:55Z)
- i-QLS: Quantum-supported Algorithm for Least Squares Optimization in Non-Linear Regression [4.737806718785056]
We propose an iterative quantum-assisted least squares (i-QLS) optimization method. We overcome the scalability and precision limitations of prior quantum least squares approaches. Experiments confirm that i-QLS enables near-term quantum hardware to perform regression tasks with improved precision and scalability.
arXiv Detail & Related papers (2025-05-05T17:02:35Z)
- Hybrid Quantum Neural Networks with Amplitude Encoding: Advancing Recovery Rate Predictions [6.699192644249841]
Recovery rate prediction plays a pivotal role in bond investment strategies by enhancing risk assessment, optimizing portfolio allocation, and improving pricing accuracy. We propose a hybrid Quantum Machine Learning (QML) model with Amplitude Encoding. We evaluate the model on a global recovery rate dataset comprising 1,725 observations from 1996 to 2023.
arXiv Detail & Related papers (2025-01-27T07:27:23Z)
- Quantum-Enhanced Attention Mechanism in NLP: A Hybrid Classical-Quantum Approach [0.0]
Transformer-based models have achieved remarkable results in natural language processing (NLP) tasks such as text classification and machine translation. This research proposes a hybrid quantum-classical transformer model that integrates a quantum-enhanced attention mechanism to address these limitations.
arXiv Detail & Related papers (2025-01-26T18:29:06Z)
- Q-MAML: Quantum Model-Agnostic Meta-Learning for Variational Quantum Algorithms [4.525216077859531]
We introduce a new framework for optimizing parameterized quantum circuits (PQCs) that employs a classical learner inspired by the Model-Agnostic Meta-Learning (MAML) technique. Our framework features a classical neural network, called Learner, which interacts with a PQC, using the output of the Learner as initial parameters. In the adaptation phase, the framework requires only a few PQC updates to converge to a more accurate value, while the Learner remains unchanged.
arXiv Detail & Related papers (2025-01-10T12:07:00Z)
- On the Convergence of DP-SGD with Adaptive Clipping [56.24689348875711]
Stochastic gradient descent (SGD) with gradient clipping is a powerful technique for enabling differentially private optimization. This paper provides the first comprehensive convergence analysis of SGD with quantile clipping (QC-SGD). We show how QC-SGD suffers from a bias problem similar to constant-threshold clipped SGD, but that this bias can be mitigated through a carefully designed quantile and step size schedule.
arXiv Detail & Related papers (2024-12-27T20:29:47Z)
- Leveraging Pre-Trained Neural Networks to Enhance Machine Learning with Variational Quantum Circuits [48.33631905972908]
We introduce an innovative approach that utilizes pre-trained neural networks to enhance Variational Quantum Circuits (VQC).
This technique effectively separates approximation error from qubit count and removes the need for restrictive conditions.
Our results extend to applications such as human genome analysis, demonstrating the broad applicability of our approach.
arXiv Detail & Related papers (2024-11-13T12:03:39Z)
- Untrained Filtering with Trained Focusing for Superior Quantum Architecture Search [14.288836269941207]
Quantum architecture search (QAS) represents a fundamental challenge in quantum machine learning.
We decompose the search process into dynamic alternating phases of coarse and fine-grained knowledge learning.
QUEST-A develops an evolutionary mechanism with knowledge accumulation and reuse to enhance multi-level knowledge transfer.
arXiv Detail & Related papers (2024-10-31T01:57:14Z)
- Neural Projected Quantum Dynamics: a systematic study [0.0]
We address the challenge of simulating unitary quantum dynamics in large systems using Neural Quantum States. This work offers a comprehensive formalization of the projected time-dependent Variational Monte Carlo (p-tVMC) method.
arXiv Detail & Related papers (2024-10-14T17:01:33Z)
- Achieving Constraints in Neural Networks: A Stochastic Augmented Lagrangian Approach [49.1574468325115]
Regularizing Deep Neural Networks (DNNs) is essential for improving generalizability and preventing overfitting.
We propose a novel approach to DNN regularization by framing the training process as a constrained optimization problem.
We employ the Stochastic Augmented Lagrangian (SAL) method to achieve a more flexible and efficient regularization mechanism.
arXiv Detail & Related papers (2023-10-25T13:55:35Z)
- Pseudo-Bayesian Optimization [7.556071491014536]
We study an axiomatic framework that elicits the minimal requirements to guarantee black-box optimization convergence. We show how using simple local regression, and a suitable "randomized prior" construction to quantify uncertainty, not only guarantees convergence but also consistently outperforms state-of-the-art benchmarks.
arXiv Detail & Related papers (2023-10-15T07:55:28Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z)
- Constrained multi-objective optimization of process design parameters in settings with scarce data: an application to adhesive bonding [48.7576911714538]
Finding the optimal process parameters for an adhesive bonding process is challenging.
Traditional evolutionary approaches (such as genetic algorithms) are then ill-suited to solve the problem.
In this research, we successfully applied specific machine learning techniques to emulate the objective and constraint functions.
arXiv Detail & Related papers (2021-12-16T10:14:39Z)