Projection Valued Measure-based Quantum Machine Learning for Multi-Class
Classification
- URL: http://arxiv.org/abs/2210.16731v1
- Date: Sun, 30 Oct 2022 03:12:53 GMT
- Title: Projection Valued Measure-based Quantum Machine Learning for Multi-Class
Classification
- Authors: Won Joon Yun, Hankyul Baek, and Joongheon Kim
- Abstract summary: We propose a novel framework for multi-class classification using the projection-valued measure (PVM).
Our framework outperforms the state-of-the-art (SOTA) on various datasets using no more than 6 qubits.
- Score: 10.90994913062223
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, quantum machine learning (QML) has been actively used for
various tasks, e.g., classification, reinforcement learning, and adversarial
learning. However, these QML studies have not achieved complex tasks because
scalability issues on the input and output remain the biggest hurdle in QML.
Motivated by this challenge, we aim to solve the output scalability issue and
focus on the projection-valued measure (PVM), which utilizes the
nature of probability amplitude in quantum statistical mechanics. By leveraging
PVM, the output dimension is expanded from the number of qubits $q$ to
$\mathcal{O}(2^q)$. We propose a novel QML framework for multi-class
classification. We corroborate that our framework outperforms the
state-of-the-art (SOTA) on various datasets using no more than 6 qubits.
Furthermore, our PVM-based QML outperforms the SOTA by 42.2%.
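The output-expansion claim can be illustrated with a small classical simulation (a sketch under assumed conventions, not the authors' implementation): measuring $q$ qubits in the computational basis is a PVM with $2^q$ outcomes, whose probabilities can serve directly as multi-class scores.

```python
import numpy as np

# Minimal sketch: reading out a q-qubit state in the computational
# basis yields 2**q outcome probabilities, so q qubits can score
# up to 2**q classes. The statevector here is a random stand-in for
# the output of a trained parameterized circuit.
q = 3                                   # number of qubits
rng = np.random.default_rng(0)

psi = rng.normal(size=2**q) + 1j * rng.normal(size=2**q)
psi /= np.linalg.norm(psi)              # normalize the statevector

# PVM in the computational basis: P_k = |k><k|, so p(k) = |<k|psi>|^2.
probs = np.abs(psi) ** 2                # 2**q outcome probabilities

num_classes = 5                         # hypothetical 5-class task with q = 3
predicted = int(np.argmax(probs[:num_classes]))
print(predicted)
```

With a measurement-per-class readout, 5 classes would need 5 qubits; the PVM readout above covers them with 3, matching the $q \to \mathcal{O}(2^q)$ expansion.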
Related papers
- Benchmarking Quantum Generative Learning: A Study on Scalability and Noise Resilience using QUARK [0.3624329910445628]
This paper investigates the scalability and noise resilience of quantum generative learning applications.
We employ rigorous benchmarking techniques to track progress and identify challenges in scaling QML algorithms.
We show that QGANs are less affected by the curse of dimensionality than QCBMs, and we quantify the extent to which QCBMs are resilient to noise.
arXiv Detail & Related papers (2024-03-27T15:05:55Z)
- Unifying (Quantum) Statistical and Parametrized (Quantum) Algorithms [65.268245109828]
We take inspiration from Kearns' SQ oracle and Valiant's weak evaluation oracle.
We introduce an extensive yet intuitive framework that yields unconditional lower bounds for learning from evaluation queries.
arXiv Detail & Related papers (2023-10-26T18:23:21Z)
- QKSAN: A Quantum Kernel Self-Attention Network [53.96779043113156]
A Quantum Kernel Self-Attention Mechanism (QKSAM) is introduced to combine the data representation merit of Quantum Kernel Methods (QKM) with the efficient information extraction capability of SAM.
A Quantum Kernel Self-Attention Network (QKSAN) framework is proposed based on QKSAM, which ingeniously incorporates the Deferred Measurement Principle (DMP) and conditional measurement techniques.
Four QKSAN sub-models are deployed on PennyLane and IBM Qiskit platforms to perform binary classification on MNIST and Fashion MNIST.
arXiv Detail & Related papers (2023-08-25T15:08:19Z)
- Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study [90.34226812493083]
This work aims to investigate the impact of quantization on emergent abilities, which are important characteristics that distinguish LLMs from small language models.
Our empirical experiments show that these emergent abilities still exist in 4-bit quantization models, while 2-bit models encounter severe performance degradation.
To improve the performance of low-bit models, we conduct two special experiments: (1) fine-grained impact analysis that studies which components (or substructures) are more sensitive to quantization, and (2) performance compensation through model fine-tuning.
arXiv Detail & Related papers (2023-07-16T15:11:01Z)
- An Empirical Study of Bugs in Quantum Machine Learning Frameworks [5.868747298750261]
We inspect 391 real-world bugs collected from 22 open-source repositories of nine popular QML frameworks.
28% of the bugs are quantum-specific, such as erroneous unitary matrix implementation.
We manually distilled a taxonomy of five symptoms and nine root causes of bugs in QML platforms.
arXiv Detail & Related papers (2023-06-10T07:26:34Z)
- Enhancing Quantum Support Vector Machines through Variational Kernel Training [0.0]
This paper focuses on the two existing methods: quantum kernel SVM (QK-SVM) and quantum variational SVM (QV-SVM).
We present a novel approach that synergizes the strengths of QK-SVM and QV-SVM to enhance accuracy.
Our results demonstrate that QVK-SVM holds tremendous potential as a reliable and transformative tool for QML applications.
arXiv Detail & Related papers (2023-05-10T11:30:43Z)
- MA2QL: A Minimalist Approach to Fully Decentralized Multi-Agent Reinforcement Learning [63.46052494151171]
We propose multi-agent alternate Q-learning (MA2QL), where agents take turns to update their Q-functions by Q-learning.
We prove that when each agent guarantees an $\varepsilon$-convergence at each turn, their joint policy converges to a Nash equilibrium.
Results show MA2QL consistently outperforms IQL, which verifies the effectiveness of MA2QL, despite such minimal changes.
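The turn-taking idea can be sketched in a toy setting (a hypothetical 2-agent, stateless cooperative game, not the paper's code): agents alternate Q-learning updates, each treating the other's current greedy policy as fixed.

```python
import numpy as np

# Toy sketch of alternate Q-learning: two agents share the reward
# payoff[a0, a1] and take turns updating their own Q-table while the
# other agent's greedy action is held fixed.
payoff = np.array([[1.0, 0.0],          # shared reward R[a0, a1]
                   [0.0, 2.0]])
Q = [np.zeros(2), np.zeros(2)]          # one Q-table per agent
alpha = 0.5                             # learning rate

for turn in range(40):
    learner = turn % 2                  # agents alternate turns
    other = 1 - learner
    a_other = int(np.argmax(Q[other]))  # other agent's greedy (fixed) action
    for a in range(2):                  # one-step Q-update for the learner
        r = payoff[a, a_other] if learner == 0 else payoff[a_other, a]
        Q[learner][a] += alpha * (r - Q[learner][a])

joint = (int(np.argmax(Q[0])), int(np.argmax(Q[1])))
print(joint)
```

Consistent with the convergence result, the joint greedy policy settles on a Nash equilibrium of the game, though not necessarily the highest-payoff one.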
arXiv Detail & Related papers (2022-09-17T04:54:32Z)
- Deterministic and random features for large-scale quantum kernel machine [0.9404723842159504]
We show that the quantum kernel method (QKM) can be made scalable by using our proposed deterministic and random features.
Our numerical experiment, using datasets including $O(1,000) \sim O(10,000)$ training data, supports the validity of our method.
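The random-features idea behind such scalability can be illustrated with its classical analogue, random Fourier features (an illustration only, not the paper's quantum construction): a low-dimensional random feature map whose inner products approximate a kernel, avoiding the full kernel matrix.

```python
import numpy as np

# Random Fourier features: z(x).z(y) approximates the RBF kernel
# k(x, y) = exp(-||x - y||^2 / 2) when frequencies are drawn from N(0, I).
rng = np.random.default_rng(1)
d, D = 4, 5000                          # input dim, number of random features

W = rng.normal(size=(D, d))             # random frequencies
b = rng.uniform(0, 2 * np.pi, size=D)   # random phase shifts

def features(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2)
approx = features(x) @ features(y)
print(abs(exact - approx))              # small approximation error
```

Training on the explicit features costs $O(nD)$ memory instead of the $O(n^2)$ kernel matrix, which is what makes $O(10{,}000)$-sample experiments tractable.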
arXiv Detail & Related papers (2022-09-05T13:22:34Z)
- QSAN: A Near-term Achievable Quantum Self-Attention Network [73.15524926159702]
Self-Attention Mechanism (SAM) is good at capturing the internal connections of features.
A novel Quantum Self-Attention Network (QSAN) is proposed for image classification tasks on near-term quantum devices.
arXiv Detail & Related papers (2022-07-14T12:22:51Z)
- Subtleties in the trainability of quantum machine learning models [0.0]
We show that gradient scaling results for Variational Quantum Algorithms can be applied to study the gradient scaling of Quantum Machine Learning models.
Our results indicate that features deemed detrimental for VQA trainability can also lead to issues such as barren plateaus in QML.
arXiv Detail & Related papers (2021-10-27T20:28:53Z)
- Quantum circuit architecture search for variational quantum algorithms [88.71725630554758]
We propose a resource- and runtime-efficient scheme termed quantum architecture search (QAS).
QAS automatically seeks a near-optimal ansatz to balance benefits and side-effects brought by adding more noisy quantum gates.
We implement QAS on both the numerical simulator and real quantum hardware, via the IBM cloud, to accomplish data classification and quantum chemistry tasks.
arXiv Detail & Related papers (2020-10-20T12:06:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.