Adversarial Robustness in Distributed Quantum Machine Learning
- URL: http://arxiv.org/abs/2508.11848v1
- Date: Sat, 16 Aug 2025 00:01:51 GMT
- Title: Adversarial Robustness in Distributed Quantum Machine Learning
- Authors: Pouya Kananian, Hans-Arno Jacobsen
- Abstract summary: Studying adversarial robustness of quantum machine learning (QML) models is essential to understand their potential advantages over classical models and build trustworthy systems. Distributing QML models allows leveraging multiple quantum processors to overcome the limitations of individual devices and build scalable systems. This work reviews the differences between these distribution methods, summarizes existing approaches to the adversarial robustness of QML models when distributed using each paradigm, and discusses open questions in this area.
- Score: 10.679753825744964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Studying adversarial robustness of quantum machine learning (QML) models is essential to understand their potential advantages over classical models and build trustworthy systems. Distributing QML models allows leveraging multiple quantum processors to overcome the limitations of individual devices and build scalable systems. However, this distribution can affect their adversarial robustness, potentially making them more vulnerable to new attacks. Key paradigms in distributed QML include federated learning, which, similar to classical models, involves training a shared model on local data and sending only the model updates, as well as circuit distribution methods inherent to quantum computing, such as circuit cutting and teleportation-based techniques. These quantum-specific methods enable the distributed execution of quantum circuits across multiple devices. This work reviews the differences between these distribution methods, summarizes existing approaches to the adversarial robustness of QML models when distributed using each paradigm, and discusses open questions in this area.
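The federated paradigm described in the abstract can be sketched classically: a minimal, illustrative FedAvg step over variational-circuit parameter vectors. All names, shapes, and client weights below are hypothetical, and the local quantum training itself is abstracted away; only the aggregation of parameter updates (the part the clients actually share) is shown.

```python
import numpy as np

def federated_average(client_params, client_weights=None):
    """Average locally trained parameter vectors into a shared global model.

    Only the parameter updates leave each client, mirroring the federated
    QML setting; the quantum (or classical) local training step is not
    modeled here.
    """
    params = np.stack(client_params)           # shape: (n_clients, n_params)
    if client_weights is None:
        client_weights = np.ones(len(client_params))
    w = np.asarray(client_weights, dtype=float)
    w = w / w.sum()                            # e.g. weight by local data size
    return (w[:, None] * params).sum(axis=0)   # weighted mean per parameter

# Toy round: three clients, each holding 4 circuit parameters.
clients = [np.array([0.1, 0.2, 0.3, 0.4]),
           np.array([0.3, 0.2, 0.1, 0.0]),
           np.array([0.2, 0.2, 0.2, 0.2])]
global_params = federated_average(clients, client_weights=[10, 30, 20])
```

A robustness-relevant observation follows directly from this sketch: the server sees only `client_params`, so a single adversarial client can shift `global_params` in proportion to its weight, which is one reason the distribution method changes the attack surface.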
Related papers
- Learning Quantum Data Distribution via Chaotic Quantum Diffusion Model [0.0]
We propose a framework that generates projected ensembles via chaotic Hamiltonian time evolution. This method improves trainability and robustness, broadening the applicability of quantum generative modeling.
arXiv Detail & Related papers (2026-02-25T16:09:50Z) - Adversarially Robust Quantum Transfer Learning [1.3113458064027566]
Quantum machine learning (QML) has emerged as a promising area of research for enhancing the performance of classical machine learning systems. This chapter introduces a hybrid quantum-classical architecture that combines the advantages of quantum computing with transfer learning techniques to address high-resolution image classification.
arXiv Detail & Related papers (2025-10-18T02:16:34Z) - VQC-MLPNet: An Unconventional Hybrid Quantum-Classical Architecture for Scalable and Robust Quantum Machine Learning [50.95799256262098]
Variational quantum circuits (VQCs) hold promise for quantum machine learning but face challenges in expressivity, trainability, and noise resilience. We propose VQC-MLPNet, a hybrid architecture where a VQC generates the first-layer weights of a classical multilayer perceptron during training, while inference is performed entirely classically.
arXiv Detail & Related papers (2025-06-12T01:38:15Z) - Quantum Knowledge Distillation for Large Language Models [10.023534560183919]
We propose a Quantum knowledge Distillation model for Large Language Models (QD-LLM). In classical simulation, QD-LLM outperforms several mainstream distillation methods on multiple text classification tasks. We deploy the obtained circuits on the Baihua superconducting quantum processor via the Quafu platform to assess practical feasibility.
arXiv Detail & Related papers (2025-05-19T14:56:24Z) - Quantum Latent Diffusion Models [65.16624577812436]
We propose a potential version of a quantum diffusion model that leverages the established idea of classical latent diffusion models. This involves using a traditional autoencoder to reduce images, followed by operations with variational circuits in the latent space. The results demonstrate an advantage in using a quantum version, as evidenced by obtaining better metrics for the images generated by the quantum version.
arXiv Detail & Related papers (2025-01-19T21:24:02Z) - Leveraging Pre-Trained Neural Networks to Enhance Machine Learning with Variational Quantum Circuits [48.33631905972908]
We introduce an innovative approach that utilizes pre-trained neural networks to enhance Variational Quantum Circuits (VQC).
This technique effectively separates approximation error from qubit count and removes the need for restrictive conditions.
Our results extend to applications such as human genome analysis, demonstrating the broad applicability of our approach.
arXiv Detail & Related papers (2024-11-13T12:03:39Z) - Discrete Randomized Smoothing Meets Quantum Computing [40.54768963869454]
We show how to encode all the perturbations of the input binary data in superposition and use Quantum Amplitude Estimation (QAE) to obtain a quadratic reduction in the number of calls to the model.
In addition, we propose a new binary threat model to allow for an extensive evaluation of our approach on images, graphs, and text.
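For context, the Monte-Carlo estimate that QAE quadratically accelerates can be sketched classically. The bit-flip smoothing distribution, placeholder classifier, and sample count below are illustrative assumptions for the binary-data setting, not the paper's actual construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(x):
    # Placeholder binary classifier: majority vote over the bits.
    return int(x.sum() * 2 > len(x))

def smoothed_prediction(x, flip_prob=0.1, n_samples=2000):
    """Estimate the smoothed classifier g(x) = argmax_c P(f(x ^ noise) = c)
    by sampling random bit-flips of the binary input. The quantum scheme
    replaces this sampling loop with an amplitude estimate over a
    superposition of all perturbations, needing quadratically fewer
    model calls for the same estimation error."""
    votes = 0
    for _ in range(n_samples):
        flips = rng.random(len(x)) < flip_prob      # Bernoulli bit-flip mask
        votes += toy_classifier(np.bitwise_xor(x, flips.astype(x.dtype)))
    p_hat = votes / n_samples                       # estimated P(f = 1)
    return (1 if p_hat >= 0.5 else 0), p_hat

x = np.array([1, 1, 1, 0, 1, 1, 0, 1], dtype=np.int64)
label, confidence = smoothed_prediction(x)
```

The estimated class probability `p_hat` is what feeds a robustness certificate in randomized smoothing; its Monte-Carlo error shrinks as O(1/sqrt(n_samples)), versus O(1/n) for amplitude estimation.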
arXiv Detail & Related papers (2024-08-01T20:21:52Z) - Feature Importance and Explainability in Quantum Machine Learning [0.0]
Many Machine Learning (ML) models are referred to as black box models, providing no real insights into why a prediction is made.
This article explores feature importance and explainability in Quantum Machine Learning (QML) compared to Classical ML models.
arXiv Detail & Related papers (2024-05-14T19:12:32Z) - A Novel Stochastic LSTM Model Inspired by Quantum Machine Learning [0.0]
Works in quantum machine learning (QML) over the past few years indicate that QML algorithms can function just as well as their classical counterparts.
This work aims to elucidate if it is possible to achieve some of QML's major reported benefits on classical machines by incorporating its stochasticity.
arXiv Detail & Related papers (2023-05-17T13:44:25Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - Copula-based Risk Aggregation with Trapped Ion Quantum Computers [1.541403735141431]
Copulas are mathematical tools for modeling joint probability distributions.
The recent finding that copulas can be expressed as maximally entangled quantum states has revealed a promising approach to practical quantum advantages.
We study the training of QCBMs with different levels of precision and circuit design on a simulator and a state-of-the-art trapped ion quantum computer.
arXiv Detail & Related papers (2022-06-23T18:39:30Z) - Theory of Quantum Generative Learning Models with Maximum Mean Discrepancy [67.02951777522547]
We study learnability of quantum circuit Born machines (QCBMs) and quantum generative adversarial networks (QGANs).
We first analyze the generalization ability of QCBMs and identify their superiorities when the quantum devices can directly access the target distribution.
Next, we prove how the generalization error bound of QGANs depends on the employed Ansatz, the number of qudits, and input states.
arXiv Detail & Related papers (2022-05-10T08:05:59Z) - Quantum Federated Learning with Quantum Data [87.49715898878858]
Quantum machine learning (QML) has emerged as a promising field that leans on the developments in quantum computing to explore large complex machine learning problems.
This paper proposes the first fully quantum federated learning framework that can operate over quantum data and, thus, share the learning of quantum circuit parameters in a decentralized manner.
arXiv Detail & Related papers (2021-05-30T12:19:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.