Q-Detection: A Quantum-Classical Hybrid Poisoning Attack Detection Method
- URL: http://arxiv.org/abs/2507.06262v1
- Date: Mon, 07 Jul 2025 18:43:34 GMT
- Title: Q-Detection: A Quantum-Classical Hybrid Poisoning Attack Detection Method
- Authors: Haoqi He, Xiaokai Lin, Jiancai Chen, Yan Xiao
- Abstract summary: Data poisoning attacks pose significant threats to machine learning models. We present Q-Detection, a quantum-classical hybrid defense method for detecting poisoning attacks. Q-Detection also introduces the Q-WAN, which is optimized using quantum computing devices.
- Score: 1.9914441103508185
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Data poisoning attacks pose significant threats to machine learning models by introducing malicious data into the training process, thereby degrading model performance or manipulating predictions. Detecting and sifting out poisoned data is an important method to prevent data poisoning attacks. Limited by classical computation frameworks, upcoming larger-scale and more complex datasets may pose difficulties for detection. We introduce the unique speedup of quantum computing for the first time in the task of detecting data poisoning. We present Q-Detection, a quantum-classical hybrid defense method for detecting poisoning attacks. Q-Detection also introduces the Q-WAN, which is optimized using quantum computing devices. Experimental results using multiple quantum simulation libraries show that Q-Detection effectively defends against label manipulation and backdoor attacks. The metrics demonstrate that Q-Detection consistently outperforms the baseline methods and is comparable to the state-of-the-art. Theoretical analysis shows that Q-Detection is expected to achieve more than a 20% speedup using quantum computing power.
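The abstract does not describe how Q-WAN assigns its weights, but quantum-optimized detectors of this kind are commonly phrased as a QUBO (quadratic unconstrained binary optimization) that a quantum annealer or simulator minimizes. A purely illustrative sketch, solved here by classical brute force rather than quantum hardware (the function names, the suspicion-score formulation, and the penalty term are all assumptions, not Q-Detection's actual method):

```python
import itertools
import numpy as np

def qubo_energy(x, suspicion, lam, k):
    # E(x) = -sum_i s_i * x_i + lam * (sum_i x_i - k)^2
    # x_i = 1 flags sample i as poisoned; the quadratic penalty
    # softly enforces flagging about k samples in total.
    x = np.asarray(x)
    return -float(suspicion @ x) + lam * (x.sum() - k) ** 2

def detect_poison_bruteforce(suspicion, lam=10.0, k=2):
    # Stand-in for a quantum annealer: exhaustively minimize the QUBO.
    n = len(suspicion)
    best = min(itertools.product([0, 1], repeat=n),
               key=lambda x: qubo_energy(x, suspicion, lam, k))
    return np.flatnonzero(np.array(best))

# Toy run: samples 2 and 4 have conspicuously high suspicion scores.
scores = np.array([0.1, 0.2, 4.8, 0.3, 5.0, 0.2, 0.1, 0.4])
print(detect_poison_bruteforce(scores))  # -> [2 4]
```

On real hardware the brute-force solver would be replaced by submitting the QUBO matrix to a quantum annealing backend, which is where the claimed speedup would come from.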
Related papers
- Detection of Physiological Data Tampering Attacks with Quantum Machine Learning [0.4604003661048266]
This study compares the effectiveness of Quantum Machine Learning (QML) for detecting physiological data tampering. QML models are better at identifying label-flipping attacks, achieving accuracy rates of 75%-95% depending on the data and attack severity. However, both QML and classical models struggle to detect more sophisticated adversarial perturbation attacks.
arXiv Detail & Related papers (2025-02-09T17:26:41Z)
- Backdoor Attacks against No-Reference Image Quality Assessment Models via a Scalable Trigger [76.36315347198195]
No-Reference Image Quality Assessment (NR-IQA) plays a critical role in evaluating and optimizing computer vision systems. Recent research indicates that NR-IQA models are susceptible to adversarial attacks. We present a novel poisoning-based backdoor attack against NR-IQA (BAIQA).
arXiv Detail & Related papers (2024-12-10T08:07:19Z)
- Adversarial Data Poisoning Attacks on Quantum Machine Learning in the NISQ Era [2.348041867134616]
A key concern in the Quantum Machine Learning (QML) domain is the threat of data poisoning attacks in the current quantum cloud setting. In this work, we first propose a simple yet effective technique to measure intra-class encoder state similarity (ESS) by analyzing the outputs of encoding circuits. Through extensive experiments conducted in both noiseless and noisy environments, we introduce a Quantum Indiscriminate Data poisoning attack, QUID.
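The ESS idea, measuring how similar encoded states are within one class, can be mimicked classically on simulated state vectors. A minimal sketch (the function name and the mean-pairwise-fidelity definition are assumptions drawn only from the abstract's one-line description):

```python
import numpy as np

def intra_class_ess(states, labels):
    # states: (n, d) complex unit vectors, the simulated encoder outputs.
    # ESS of a class = mean pairwise fidelity |<psi_i|psi_j>|^2 over i != j.
    ess = {}
    for c in np.unique(labels):
        s = states[labels == c]
        fid = np.abs(s.conj() @ s.T) ** 2   # pairwise fidelity matrix
        n = len(s)
        ess[int(c)] = float((fid.sum() - n) / (n * (n - 1)))
    return ess

states = np.array([[1, 0], [1, 0],                   # class 0: identical states
                   [1, 0], [0, 1]], dtype=complex)   # class 1: orthogonal states
labels = np.array([0, 0, 1, 1])
ess = intra_class_ess(states, labels)
print(ess)  # -> {0: 1.0, 1: 0.0}
```

A class whose encoded states have unusually low intra-class similarity would then be a candidate for having been poisoned.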
arXiv Detail & Related papers (2024-11-21T18:46:45Z)
- QML-IDS: Quantum Machine Learning Intrusion Detection System [1.2016264781280588]
We present QML-IDS, a novel Intrusion Detection System that combines quantum and classical computing techniques.
QML-IDS employs Quantum Machine Learning(QML) methodologies to analyze network patterns and detect attack activities.
We show that QML-IDS is effective at attack detection and performs well in binary and multiclass classification tasks.
arXiv Detail & Related papers (2024-10-07T13:07:41Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
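The abstract only states that model updates are moved to the frequency domain before aggregation. A toy sketch of that idea (the rFFT, the low-frequency cut, and the median-distance filter are all assumptions for illustration, not FreqFed's actual pipeline):

```python
import numpy as np

def frequency_filter(updates, keep_frac=0.5, thresh=3.0):
    # updates: (num_clients, dim) flattened model updates.
    spec = np.abs(np.fft.rfft(updates, axis=1))               # magnitude spectra
    low = spec[:, : max(1, int(spec.shape[1] * keep_frac))]   # low-frequency part
    med = np.median(low, axis=0)                              # robust per-bin centre
    dist = np.linalg.norm(low - med, axis=1)                  # spectral distance
    return np.flatnonzero(dist <= np.median(dist) * thresh)   # kept client indices

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(1, 64)) + rng.normal(0.0, 1e-3, size=(9, 64))
poisoned = 50.0 * rng.normal(0.0, 1.0, size=(1, 64))          # scaled malicious update
kept = frequency_filter(np.vstack([benign, poisoned]))
print(kept)  # the poisoned client (index 9) is filtered out
```

Filtering on spectra rather than raw weights makes the check insensitive to which individual coordinates the attacker perturbs.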
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Exploring Unsupervised Anomaly Detection with Quantum Boltzmann Machines in Fraud Detection [3.955274213382716]
Anomaly detection in Endpoint Detection and Response (EDR) is a critical task in cybersecurity programs of large companies.
Classical machine learning approaches to this problem exist, but they frequently show unsatisfactory performance in differentiating malicious from benign anomalies.
Quantum generative models are a promising approach to attaining better generalization than currently employed machine learning techniques.
arXiv Detail & Related papers (2023-06-08T07:36:01Z)
- Exploring Model Dynamics for Accumulative Poisoning Discovery [62.08553134316483]
We propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.
By implicitly transferring the changes in the data manipulation to that in the model outputs, Memorization Discrepancy can discover the imperceptible poison samples.
We thoroughly explore its properties and propose Discrepancy-aware Sample Correction (DSC) to defend against accumulative poisoning attacks.
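In spirit, Memorization Discrepancy watches how much a single training step shifts the model's outputs on held-out probe data. A heavily simplified sketch (the mean-squared-shift measure and the MAD-based flagging rule are stand-ins, not the paper's actual definitions):

```python
import numpy as np

def output_discrepancy(logits_before, logits_after):
    # Mean squared shift in probe-set outputs caused by one training step.
    return float(np.mean((np.asarray(logits_after) - np.asarray(logits_before)) ** 2))

def flag_suspicious_steps(discrepancies, thresh=3.0):
    # Flag steps whose discrepancy is a robust (MAD-based) outlier.
    d = np.asarray(discrepancies)
    med = np.median(d)
    mad = np.median(np.abs(d - med)) + 1e-12
    return np.flatnonzero((d - med) / mad > thresh)

steps = [1.00, 1.10, 0.90, 1.05, 10.0]   # step 4 moves the outputs abnormally far
flags = flag_suspicious_steps(steps)
print(flags)  # -> [4]
```

An accumulative attack that stays imperceptible in the data itself can still stand out in this model-level signal, which is the point the abstract makes.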
arXiv Detail & Related papers (2023-06-06T14:45:24Z)
- Transition Role of Entangled Data in Quantum Machine Learning [51.6526011493678]
Entanglement serves as the resource to empower quantum computing.
Recent progress has highlighted its positive impact on learning quantum dynamics.
We establish a quantum no-free-lunch (NFL) theorem for learning quantum dynamics using entangled data.
arXiv Detail & Related papers (2023-06-06T08:06:43Z)
- Quantum Imitation Learning [74.15588381240795]
We propose quantum imitation learning (QIL) with a hope to utilize quantum advantage to speed up IL.
We develop two QIL algorithms, quantum behavioural cloning (Q-BC) and quantum generative adversarial imitation learning (Q-GAIL).
Experimental results demonstrate that both Q-BC and Q-GAIL achieve performance comparable to that of their classical counterparts.
arXiv Detail & Related papers (2023-04-04T12:47:35Z)
- Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of QNNs on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z)
- Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems [0.30458514384586405]
We study how poisoning impacts the least-squares estimate through the lens of statistical testing.
We propose a stealthy data poisoning attack on the least-squares estimator that can escape classical statistical tests.
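The statistical-testing view can be sketched for a scalar first-order system y[t+1] = a*y[t] + b*u[t] + e[t]: fit (a, b) by least squares and check whether the residual variance matches the assumed noise level. All names and the crude variance-ratio test below are illustrative, not the paper's detector or its stealthy attack:

```python
import numpy as np

def ls_fit(u, y):
    # y has one more sample than u; model: y[t+1] = a*y[t] + b*u[t] + e[t].
    # theta_hat = argmin ||y[1:] - Phi @ theta||^2 with Phi = [y[:-1], u].
    phi = np.column_stack([y[:-1], u])
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    resid = y[1:] - phi @ theta
    return theta, resid

def variance_ratio_alarm(resid, sigma2, tol=2.0):
    # Crude detector: alarm if the residual variance strays far from sigma^2.
    ratio = resid.var() / sigma2
    return ratio > tol or ratio < 1.0 / tol

# Simulate a clean run of y[t+1] = 0.5*y[t] + 1.0*u[t] + e[t], e ~ N(0, 0.05^2).
rng = np.random.default_rng(1)
sigma = 0.05
u = rng.normal(size=1000)
y = np.zeros(1001)
for t in range(1000):
    y[t + 1] = 0.5 * y[t] + 1.0 * u[t] + sigma * rng.normal()
theta, resid = ls_fit(u, y)
print(theta, variance_ratio_alarm(resid, sigma ** 2))
```

A stealthy attack in the paper's sense is one that biases theta_hat while keeping such residual statistics inside their acceptance region, which is why simple tests like this one fail against it.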
arXiv Detail & Related papers (2022-11-16T10:01:03Z)
- Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications [3.1698141437031393]
Adversarial attacks such as poisoning attacks have attracted the attention of many machine learning researchers. Traditionally, poisoning attacks attempt to inject adversarial training data in order to manipulate the trained model. In federated learning (FL), data poisoning attacks can be generalized to model poisoning attacks, which cannot be detected by simpler methods due to the lack of access to local training data by the detector. We propose a novel framework for detecting poisoning attacks in FL, which employs a reference model based on a public dataset and an auditor model to detect malicious updates.
arXiv Detail & Related papers (2022-07-18T10:10:45Z)
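The reference-model idea above can be illustrated with a simple cosine check: compare each client update against an update computed on the public dataset and reject clients pointing the wrong way. The threshold and names here are assumptions, and this is far simpler than the auditor model the paper proposes:

```python
import numpy as np

def audit_client_updates(client_updates, reference_update, min_cos=0.0):
    # client_updates: (num_clients, dim); reference_update: (dim,).
    # Accept clients whose update direction agrees with the reference update.
    ref = reference_update / np.linalg.norm(reference_update)
    norms = np.linalg.norm(client_updates, axis=1)
    cos = (client_updates @ ref) / norms
    return np.flatnonzero(cos >= min_cos)   # accepted client indices

ref = np.array([1.0, 0.0, 0.0, 0.0])        # update from the public dataset
clients = np.array([[0.9, 0.1, 0.0, 0.0],
                    [1.1, -0.1, 0.0, 0.0],
                    [-1.0, 0.0, 0.0, 0.0]])  # model-poisoning client
accepted = audit_client_updates(clients, ref)
print(accepted)  # -> [0 1]
```

Because the check works on the updates themselves, it needs no access to clients' local training data, matching the constraint the abstract highlights.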
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.