Quantum adversarial metric learning model based on triplet loss function
- URL: http://arxiv.org/abs/2303.08293v1
- Date: Wed, 15 Mar 2023 00:56:31 GMT
- Title: Quantum adversarial metric learning model based on triplet loss function
- Authors: Yan-Yan Hou, Jian Li, Xiu-Bo Chen, Chong-Qiang Ye
- Abstract summary: We propose a quantum adversarial metric learning model based on the triplet loss function.
The model employs entanglement and interference to build superposition states for triplet samples.
Simulation results show that the QAML model can effectively distinguish samples of MNIST and Iris datasets.
- Score: 5.548873288570182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Metric learning plays an essential role in image analysis and classification,
and it has attracted more and more attention. In this paper, we propose a
quantum adversarial metric learning (QAML) model based on the triplet loss
function, where samples are embedded into the high-dimensional Hilbert space
and the optimal metric is obtained by minimizing the triplet loss function. The
QAML model employs entanglement and interference to build superposition states
for triplet samples so that only one parameterized quantum circuit is needed to
calculate sample distances, which reduces the demand for quantum resources.
Since the QAML model is vulnerable to adversarial attacks, an adversarial
sample generation strategy is designed based on the quantum gradient ascent
method, effectively improving robustness against functional adversarial
attacks. Simulation results show that the QAML model can effectively distinguish
samples of the MNIST and Iris datasets and achieves higher robust accuracy than
general quantum metric learning. Metric learning is a fundamental research
problem of machine learning. As a subroutine of classification and clustering
tasks, the QAML model opens an avenue for exploring quantum advantages in
machine learning.
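The triplet loss the model minimizes can be illustrated classically. The sketch below (plain NumPy with hypothetical 2-D embeddings, standing in for the paper's Hilbert-space embeddings produced by the parameterized quantum circuit) shows the standard margin-based formulation: the anchor-positive distance should be at least `margin` smaller than the anchor-negative distance.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet loss: pull the positive toward the anchor
    and push the negative at least `margin` farther away."""
    d_pos = np.linalg.norm(anchor - positive)  # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative)  # anchor-negative distance
    return max(d_pos - d_neg + margin, 0.0)

# Hypothetical 2-D embeddings, for illustration only.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same class, close to the anchor
n = np.array([2.0, 0.0])   # different class, far from the anchor

loss = triplet_loss(a, p, n)  # d_pos=0.1, d_neg=2.0 -> loss is 0 (margin satisfied)
```

In the QAML setting the distances would come from measurements on superposed triplet states rather than Euclidean norms, but the loss surface being minimized has this same hinge form.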
Related papers
- Learning to Measure Quantum Neural Networks [10.617463958884528]
We introduce a novel approach that makes the observable of the quantum system, specifically the Hermitian matrix, learnable.
Our method features an end-to-end differentiable learning framework, where the parameterized observable is trained alongside the ordinary quantum circuit parameters.
Using numerical simulations, we show that the proposed method can identify observables for variational quantum circuits that lead to improved outcomes.
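The core idea of a learnable observable can be sketched in a few lines: parameterize an arbitrary complex matrix and symmetrize it so the result is Hermitian by construction, then compute the expectation value ⟨ψ|H|ψ⟩ that would be optimized alongside the circuit parameters. This is a minimal NumPy illustration, not the paper's implementation; the parameterization and state are assumptions.

```python
import numpy as np

def hermitian_from_params(params):
    """Build a Hermitian observable H = A + A^dagger from an arbitrary
    complex matrix A, so any real parameter vector yields a valid H."""
    n = int(np.sqrt(params.size // 2))
    re = params[: n * n].reshape(n, n)
    im = params[n * n :].reshape(n, n)
    A = re + 1j * im
    return A + A.conj().T  # Hermitian by construction

def expectation(state, H):
    """<psi|H|psi>: the measured quantity a learnable observable tunes."""
    return np.real(np.vdot(state, H @ state))

# Single-qubit example with random (hypothetical) parameters.
rng = np.random.default_rng(0)
params = rng.normal(size=2 * 2 * 2)        # 2x2 real parts + 2x2 imaginary parts
H = hermitian_from_params(params)
psi = np.array([1.0, 0.0], dtype=complex)  # |0> state
val = expectation(psi, H)                  # equals H[0,0], which is real
```

Gradient-based training would then differentiate `val` with respect to both `params` and the circuit parameters preparing `psi`.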
arXiv Detail & Related papers (2025-01-10T02:28:19Z) - Computable Model-Independent Bounds for Adversarial Quantum Machine Learning [4.857505043608425]
We introduce the first computable, approximate lower bound on the adversarial error when evaluating model resilience against quantum-based adversarial attacks.
In the best case, the experimental error is only 10% above the estimated bound, offering evidence of the inherent robustness of quantum models.
arXiv Detail & Related papers (2024-11-11T10:56:31Z) - Quantum Kernel Learning for Small Dataset Modeling in Semiconductor Fabrication: Application to Ohmic Contact [18.42230728589117]
We develop a quantum kernel-aligned regressor (QKAR) combining a shallow Pauli-Z feature map with a trainable quantum kernel alignment layer.
QKAR consistently outperforms classical baselines across multiple metrics.
These findings suggest that carefully constructed QML models could provide predictive advantages in data-constrained semiconductor modeling.
arXiv Detail & Related papers (2024-09-17T00:44:49Z) - The Quantum Imitation Game: Reverse Engineering of Quantum Machine Learning Models [2.348041867134616]
Quantum Machine Learning (QML) amalgamates quantum computing paradigms with machine learning models.
With the expansion of numerous third-party vendors in the Noisy Intermediate-Scale Quantum (NISQ) era of quantum computing, the security of QML models is of prime importance.
We assume the untrusted quantum cloud provider is an adversary having white-box access to the transpiled user-designed trained QML model during inference.
arXiv Detail & Related papers (2024-07-09T21:35:19Z) - Quantum Active Learning [3.3202982522589934]
Training a quantum neural network typically demands a substantial labeled training set for supervised learning.
QAL effectively trains the model, achieving performance comparable to that on fully labeled datasets.
Through extensive numerical experiments, we also elucidate a negative result in which QAL is overtaken by a random-sampling baseline.
arXiv Detail & Related papers (2024-05-28T14:39:54Z) - Federated Quantum Long Short-term Memory (FedQLSTM) [58.50321380769256]
Quantum federated learning (QFL) can facilitate collaborative learning across multiple clients using quantum machine learning (QML) models.
No prior work has focused on developing a QFL framework that utilizes temporal data to approximate functions.
A novel QFL framework that is the first to integrate quantum long short-term memory (QLSTM) models with temporal data is proposed.
arXiv Detail & Related papers (2023-12-21T21:40:47Z) - Unifying (Quantum) Statistical and Parametrized (Quantum) Algorithms [65.268245109828]
We take inspiration from Kearns' SQ oracle and Valiant's weak evaluation oracle.
We introduce an extensive yet intuitive framework that yields unconditional lower bounds for learning from evaluation queries.
arXiv Detail & Related papers (2023-10-26T18:23:21Z) - QKSAN: A Quantum Kernel Self-Attention Network [53.96779043113156]
A Quantum Kernel Self-Attention Mechanism (QKSAM) is introduced to combine the data representation merit of Quantum Kernel Methods (QKM) with the efficient information extraction capability of SAM.
A Quantum Kernel Self-Attention Network (QKSAN) framework is proposed based on QKSAM, which ingeniously incorporates the Deferred Measurement Principle (DMP) and conditional measurement techniques.
Four QKSAN sub-models are deployed on PennyLane and IBM Qiskit platforms to perform binary classification on MNIST and Fashion MNIST.
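The quantum kernel underlying QKM-style models is a state fidelity: k(x1, x2) = |⟨φ(x1)|φ(x2)⟩|². The sketch below uses a single-qubit angle-encoding feature map as a stand-in (an assumption for illustration, not QKSAN's circuit) to show how such a kernel behaves at its extremes.

```python
import numpy as np

def feature_state(x):
    """Single-qubit angle encoding |phi(x)> = cos(x/2)|0> + sin(x/2)|1>.
    A simple stand-in feature map, not the paper's encoding."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x1, x2):
    """Fidelity kernel k(x1, x2) = |<phi(x1)|phi(x2)>|^2."""
    return abs(np.dot(feature_state(x1), feature_state(x2))) ** 2

k_same = quantum_kernel(0.3, 0.3)    # identical inputs -> fidelity 1
k_diff = quantum_kernel(0.0, np.pi)  # orthogonal states -> fidelity 0
```

On hardware the overlap is estimated from measurement statistics (e.g. a swap test) rather than computed from state vectors, which is where the deferred and conditional measurement techniques come in.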
arXiv Detail & Related papers (2023-08-25T15:08:19Z) - Exploring the Vulnerabilities of Machine Learning and Quantum Machine Learning to Adversarial Attacks using a Malware Dataset: A Comparative Analysis [0.0]
Machine learning (ML) and quantum machine learning (QML) have shown remarkable potential in tackling complex problems.
Their susceptibility to adversarial attacks raises concerns when deploying these systems in security-sensitive applications.
We present a comparative analysis of the vulnerability of ML and QNN models to adversarial attacks using a malware dataset.
arXiv Detail & Related papers (2023-05-31T06:31:42Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - QSAN: A Near-term Achievable Quantum Self-Attention Network [73.15524926159702]
Self-Attention Mechanism (SAM) is good at capturing the internal connections of features.
A novel Quantum Self-Attention Network (QSAN) is proposed for image classification tasks on near-term quantum devices.
arXiv Detail & Related papers (2022-07-14T12:22:51Z) - Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long-short term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
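Mixed-precision quantization rests on a simple observation: layers differ in their sensitivity to quantization error, so sensitive layers should keep more bits. The sketch below (symmetric uniform quantization in NumPy; the layer names and bit assignment are hypothetical) shows how reconstruction error grows as the bit width shrinks.

```python
import numpy as np

def quantize_uniform(w, bits):
    """Symmetric uniform quantization of a weight tensor to `bits` bits,
    returning the dequantized (reconstructed) weights."""
    qmax = 2 ** (bits - 1) - 1            # largest representable integer level
    scale = np.max(np.abs(w)) / qmax      # map the weight range onto the levels
    q = np.round(w / scale).astype(int)   # integer codes
    return q * scale                      # dequantize for error measurement

w = np.array([0.5, -0.25, 0.125])         # hypothetical layer weights

# Mixed precision: a sensitive layer keeps 8 bits, an insensitive one gets 2.
err_8bit = np.max(np.abs(w - quantize_uniform(w, 8)))
err_2bit = np.max(np.abs(w - quantize_uniform(w, 2)))
```

A mixed-precision method searches over such per-layer bit assignments to keep total model size low while concentrating precision where the error (or the LM's performance sensitivity) is largest.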
arXiv Detail & Related papers (2021-11-29T12:24:02Z) - Structural risk minimization for quantum linear classifiers [0.0]
Quantum machine learning (QML) stands out as one of the typically highlighted candidates for quantum computing's near-term "killer application".
We investigate capacity measures of two closely related QML models called explicit and implicit quantum linear classifiers.
We identify that the rank and Frobenius norm of the observables used in the QML model closely control the model's capacity.
arXiv Detail & Related papers (2021-05-12T10:39:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.