Quantum Adversarial Learning for Kernel Methods
- URL: http://arxiv.org/abs/2404.05824v1
- Date: Mon, 8 Apr 2024 19:23:17 GMT
- Title: Quantum Adversarial Learning for Kernel Methods
- Authors: Giuseppe Montalbano, Leonardo Banchi
- Abstract summary: We show that hybrid quantum classifiers based on quantum kernel methods and support vector machines are vulnerable to adversarial attacks.
Simple defence strategies based on data augmentation with a few crafted perturbations can make the classifier robust against new attacks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that hybrid quantum classifiers based on quantum kernel methods and support vector machines are vulnerable to adversarial attacks: small engineered perturbations of the input data can deceive the classifier into predicting the wrong result. Nonetheless, we also show that simple defence strategies based on data augmentation with a few crafted perturbations can make the classifier robust against new attacks. Our results find applications in security-critical learning problems and in mitigating the effect of some forms of quantum noise, since the attacker can also be understood as part of the surrounding environment.
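The attack-and-defence loop described in the abstract can be illustrated with a purely classical stand-in. In the minimal sketch below, an RBF-kernel SVC plays the role of the quantum-kernel classifier, the perturbation is found by finite-difference gradient steps on the decision function, and the defence is retraining on the crafted perturbations. All names and hyperparameters here are illustrative, not taken from the paper.

```python
# Minimal classical sketch of the attack-and-augmentation loop.
# An RBF-kernel SVC stands in for the quantum-kernel classifier.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.1, random_state=0)
clf = SVC(kernel="rbf", gamma=2.0).fit(X, y)

def attack(clf, x, label, eps=0.3, steps=20):
    """Push x across the decision boundary of its true class."""
    x_adv = x.copy()
    step = eps / steps
    for _ in range(steps):
        # Finite-difference gradient of the decision function at x_adv.
        grad = np.zeros_like(x_adv)
        for i in range(x_adv.size):
            d = np.zeros_like(x_adv)
            d[i] = 1e-4
            f_p = clf.decision_function([x_adv + d])[0]
            f_m = clf.decision_function([x_adv - d])[0]
            grad[i] = (f_p - f_m) / 2e-4
        # decision_function > 0 means class 1: move to shrink the true margin.
        direction = -1.0 if label == 1 else 1.0
        x_adv = x_adv + step * direction * np.sign(grad)
    return x_adv

X_adv = np.array([attack(clf, x, t) for x, t in zip(X[:40], y[:40])])
print("accuracy on clean inputs   :", clf.score(X[:40], y[:40]))
print("accuracy on attacked inputs:", clf.score(X_adv, y[:40]))

# Defence: augment the training set with the crafted perturbations and refit.
clf_robust = SVC(kernel="rbf", gamma=2.0).fit(
    np.vstack([X, X_adv]), np.concatenate([y, y[:40]]))
print("robust model on attacks    :", clf_robust.score(X_adv, y[:40]))
```

The finite-difference loop stands in for whatever gradient access the attacker has; against a real quantum kernel the same loop would query the kernel-based decision function instead.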
Related papers
- Adversarial Robustness Guarantees for Quantum Classifiers (arXiv, 2024-05-16)
We show that quantum properties of QML algorithms can confer fundamental protections against such attacks.
We leverage tools from many-body physics to identify the quantum sources of this protection.
- Universal adversarial perturbations for multiple classification tasks with quantum classifiers (arXiv, 2023-06-21)
Quantum adversarial machine learning studies the vulnerability of quantum learning systems to adversarial perturbations.
In this paper, we explore the quantum universal perturbations in the context of heterogeneous classification tasks.
We find that quantum classifiers achieving near state-of-the-art accuracy on two different classification tasks can both be conclusively deceived by a single carefully crafted universal perturbation (a classical sketch follows this entry).
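A universal perturbation differs from a per-sample attack in that one fixed vector must fool the classifier on every input. As a hedged classical sketch, one can average per-sample gradient directions of a logistic model into a single shared perturbation; the paper's construction for quantum classifiers across two tasks is more sophisticated, and everything below is illustrative.

```python
# Illustrative universal perturbation: one vector delta applied to every input,
# built by averaging per-sample FGSM-style directions for a logistic model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=10, random_state=1)
clf = LogisticRegression().fit(X, y)

# For logistic regression, the input gradient of the loss is (p - y) * w.
p = clf.predict_proba(X)[:, 1]
grads = (p - y)[:, None] * clf.coef_          # shape (n_samples, n_features)
delta = 0.5 * np.sign(grads.mean(axis=0))     # one shared perturbation

print("clean accuracy   :", clf.score(X, y))
print("universal attack :", clf.score(X + delta, y))
```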
- Enhancing Quantum Adversarial Robustness by Randomized Encodings (arXiv, 2022-12-05)
We propose a scheme to protect quantum learning systems from adversarial attacks by randomly encoding the legitimate data samples.
We prove that both global and local random unitary encoders lead to exponentially vanishing gradients.
We show that random black-box quantum error correction encoders can protect quantum classifiers against local adversarial noises.
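The mechanism is barren-plateau-like: if each sample is encoded by a random unitary drawn from a sufficiently scrambling ensemble, the adversary's input gradients concentrate exponentially around zero. A schematic statement, where the 2-design assumption and the base b are ours rather than from the summary above:

```latex
% Schematic gradient-concentration statement for an n-qubit random encoder U
% (2-design assumption and the exponential base b are ours, not the paper's).
\mathbb{E}_{U}\!\left[\partial_{x_i} L(U; x)\right] = 0,
\qquad
\operatorname{Var}_{U}\!\left[\partial_{x_i} L(U; x)\right] \le O\!\left(b^{-n}\right),
\quad b > 1 .
```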
- Certified Robustness of Quantum Classifiers against Adversarial Examples through Quantum Noise (arXiv, 2022-11-02)
We show that adding quantum random rotation noise can improve robustness in quantum classifiers against adversarial attacks.
We derive a certified robustness bound to enable quantum classifiers to defend against adversarial examples.
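For orientation, the classical analogue of such noise-based certificates is randomized smoothing, where Gaussian input noise yields a closed-form certified radius (Cohen et al., 2019); the paper derives a quantum counterpart for rotation noise rather than this exact formula:

```latex
% Classical randomized-smoothing certificate (for reference only; the quantum
% bound in the paper is derived for rotation noise, not Gaussian noise).
g(x) = \arg\max_{c}\; \Pr_{\delta \sim \mathcal{N}(0,\sigma^{2} I)}\!\left[f(x+\delta) = c\right],
\qquad
R = \frac{\sigma}{2}\left(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)\right).
```

Here p_A and p_B bound the probabilities of the top two classes under the noise, and \Phi^{-1} is the standard normal quantile.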
- Improving robustness of jet tagging algorithms with adversarial training (arXiv, 2022-03-25)
We investigate the vulnerability of flavor-tagging algorithms by applying adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
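Adversarial training of this kind interleaves attack generation with fitting. A minimal classical sketch using FGSM on an SGD-trained logistic model follows; the paper's taggers and attacks are far richer, and everything here is illustrative.

```python
# Minimal sketch of adversarial training: alternate between fitting the model
# and refitting on FGSM-perturbed copies of the training data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=2)
clf = SGDClassifier(loss="log_loss", random_state=2)

eps = 0.1
for epoch in range(20):
    clf.partial_fit(X, y, classes=[0, 1])
    # FGSM step: for logistic loss, the input gradient is (p - y) * w.
    p = 1.0 / (1.0 + np.exp(-(X @ clf.coef_.T + clf.intercept_))).ravel()
    X_adv = X + eps * np.sign((p - y)[:, None] * clf.coef_)
    clf.partial_fit(X_adv, y)          # train on the attacked copies too

print("accuracy on adversarial inputs:", clf.score(X_adv, y))
```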
- Towards a Conceptually Simple Defensive Approach for Few-shot Classifiers Against Adversarial Support Samples (arXiv, 2021-10-24)
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
- CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks (arXiv, 2021-09-22)
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
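For context, the Chernoff-Cramer bound invoked here is the standard exponential tail bound; how CC-Cert instantiates it for semantically meaningful transformations is specific to the paper itself:

```latex
% Generic Chernoff-Cramer tail bound (standard form; the paper's instantiation
% for certifying robustness to input transformations is more specific).
\Pr[X \ge a] \;\le\; \inf_{t > 0} \, e^{-ta}\, \mathbb{E}\!\left[e^{tX}\right].
```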
- Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines (arXiv, 2020-12-21)
Generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations.
Defending with Boltzmann machines on the MNIST dataset, we find improvements against attacks ranging from 5% to 72%.
- Quantum noise protects quantum classifiers against adversaries (arXiv, 2020-03-20)
Noise in quantum information processing is often viewed as a disruptive and difficult-to-avoid feature, especially in near-term quantum technologies.
We show that by taking advantage of depolarisation noise in quantum circuits for classification, a robustness bound against adversaries can be derived.
This is the first quantum protocol that can be used against the most general adversaries.
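The noise model behind this result is the standard depolarising channel, which with probability p replaces the state by the maximally mixed state; the robustness bound itself is derived in the paper:

```latex
% Depolarising channel on a d-dimensional system with noise strength p.
\mathcal{D}_p(\rho) = (1 - p)\,\rho + p\,\frac{I}{d}.
```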
- Quantum Adversarial Machine Learning (arXiv, 2019-12-31)
Adversarial machine learning is an emerging field that studies the vulnerabilities of machine learning approaches in adversarial settings.
In this paper, we explore different adversarial scenarios in the context of quantum machine learning.
We find that a quantum classifier that achieves nearly the state-of-the-art accuracy can be conclusively deceived by adversarial examples.
This list is automatically generated from the titles and abstracts of the papers on this site.