Quantum Diffusion Models for Few-Shot Learning
- URL: http://arxiv.org/abs/2411.04217v1
- Date: Wed, 06 Nov 2024 19:25:06 GMT
- Title: Quantum Diffusion Models for Few-Shot Learning
- Authors: Ruhan Wang, Ye Wang, Jing Liu, Toshiaki Koike-Akino
- Abstract summary: We propose three new frameworks that employ quantum diffusion models (QDMs) as a solution for few-shot learning.
Experimental results demonstrate that our proposed algorithms significantly outperform existing methods.
- Score: 13.13788757618812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern quantum machine learning (QML) methods involve the variational optimization of parameterized quantum circuits on training datasets, followed by predictions on testing datasets. Most state-of-the-art QML algorithms currently lack practical advantages due to their limited learning capabilities, especially in few-shot learning tasks. In this work, we propose three new frameworks employing quantum diffusion models (QDMs) as a solution for few-shot learning: label-guided generation inference (LGGI); label-guided denoising inference (LGDI); and label-guided noise addition inference (LGNAI). Experimental results demonstrate that our proposed algorithms significantly outperform existing methods.
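As a rough illustration of the label-guided denoising idea, the sketch below classifies a sample by running a conditional denoiser once per candidate label and picking the label with the smallest reconstruction error. Everything here is a classical numpy stand-in under assumed names: `denoise`, the prototypes, and the noise schedule are hypothetical placeholders for the trained QDM, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Learned class representatives; stand-ins for what the trained QDM encodes.
PROTOTYPES = {0: np.zeros(4), 1: np.ones(4)}

def denoise(noisy, label, t):
    """Hypothetical conditional denoiser; a QDM would run a
    label-conditioned parameterized quantum circuit here."""
    alpha = 1.0 / (1.0 + t)            # toy noise schedule
    return alpha * noisy + (1 - alpha) * PROTOTYPES[label]

def lgdi_classify(x, labels=(0, 1), t=1.0):
    noisy = x + rng.normal(scale=0.1, size=x.shape)   # forward noising step
    # The label whose conditional denoiser best reconstructs x wins.
    errors = {c: np.linalg.norm(denoise(noisy, c, t) - x) for c in labels}
    return min(errors, key=errors.get)

print(lgdi_classify(np.full(4, 0.9)))  # close to the all-ones prototype -> 1
```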
Related papers
- Learning to Program Quantum Measurements for Machine Learning [10.617463958884528]
Developing high-performance quantum machine learning models requires expert-level knowledge. We propose an innovative framework that renders the observable of a quantum system, specifically the Hermitian matrix, trainable. We demonstrate that the proposed method effectively programs observables dynamically within variational quantum circuits, achieving superior results compared to existing approaches.
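A minimal sketch of the trainable-observable idea, assuming a single qubit simulated in plain numpy: the observable is a real linear combination of Pauli matrices, so it stays Hermitian while its coefficients are learned.

```python
import numpy as np

# Pauli basis for a single qubit.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Y, Z]

def observable(theta):
    """Real coefficients over the Pauli basis always give a Hermitian matrix."""
    return sum(t * P for t, P in zip(theta, PAULIS))

def expectation(state, theta):
    """Measurement outcome <psi|H(theta)|psi> with a trainable H."""
    return np.real(state.conj() @ observable(theta) @ state)

# The expectation is linear in theta, so its gradient is simply the vector
# of per-Pauli expectations, which makes end-to-end training straightforward.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |+> state
theta = np.array([0.0, 0.5, 0.0, 0.2])
grad = np.array([np.real(state.conj() @ P @ state) for P in PAULIS])
print(expectation(state, theta), grad)                  # 0.5, [1. 1. 0. 0.]
```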
arXiv Detail & Related papers (2025-05-18T02:39:22Z)
- RoSTE: An Efficient Quantization-Aware Supervised Fine-Tuning Approach for Large Language Models [53.571195477043496]
We propose an algorithm named Rotated Straight-Through-Estimator (RoSTE).
RoSTE combines quantization-aware supervised fine-tuning (QA-SFT) with an adaptive rotation strategy to reduce activation outliers.
Our findings reveal that the prediction error is directly proportional to the quantization error of the converged weights, which can be effectively managed through an optimized rotation configuration.
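A hedged sketch of the rotate-then-quantize intuition, not the paper's implementation: an orthogonal rotation spreads outlier energy across coordinates before uniform quantization, and the rounding error can be compared with and without the rotation.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_rotation(n):
    """Random orthogonal matrix via QR decomposition."""
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q

def quantize(w, bits=4):
    """Uniform quantizer; an STE treats round() as identity in the backward pass."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

W = rng.normal(size=(8, 8))
W[0, 0] = 25.0                          # a single outlier that wrecks the scale
R = random_rotation(8)

plain_err = np.linalg.norm(quantize(W) - W)
rot_err = np.linalg.norm(quantize(W @ R) @ R.T - W)   # rotate, quantize, undo
print(plain_err, rot_err)               # the rotation typically shrinks the error
```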
arXiv Detail & Related papers (2025-02-13T06:44:33Z)
- Learning to Measure Quantum Neural Networks [10.617463958884528]
We introduce a novel approach that makes the observable of the quantum system, specifically the Hermitian matrix, learnable.
Our method features an end-to-end differentiable learning framework, where the parameterized observable is trained alongside the ordinary quantum circuit parameters.
Using numerical simulations, we show that the proposed method can identify observables for variational quantum circuits that lead to improved outcomes.
arXiv Detail & Related papers (2025-01-10T02:28:19Z)
- Leveraging Pre-Trained Neural Networks to Enhance Machine Learning with Variational Quantum Circuits [48.33631905972908]
We introduce an innovative approach that utilizes pre-trained neural networks to enhance Variational Quantum Circuits (VQCs).
This technique effectively separates approximation error from qubit count and removes the need for restrictive conditions.
Our results extend to applications such as human genome analysis, demonstrating the broad applicability of our approach.
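An illustrative sketch, not the paper's architecture: a frozen stand-in "pre-trained" encoder maps each input to rotation angles that drive a single-qubit variational circuit, simulated here with 2x2 matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def ry(t):
    """Single-qubit Y-rotation as a real 2x2 matrix."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

# Stand-in for a frozen pre-trained encoder: a fixed linear layer plus tanh
# squashing, producing three rotation angles in [-pi, pi].
W_pre = rng.normal(size=(3, 4))
encode = lambda x: np.tanh(W_pre @ x) * np.pi

def vqc_predict(x):
    state = np.array([1.0, 0.0])        # qubit starts in |0>
    for angle in encode(x):             # angles chosen by the classical network
        state = ry(angle) @ state
    return state[1] ** 2                # probability of measuring |1>

print(vqc_predict(np.array([0.2, -0.1, 0.5, 0.3])))
```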
arXiv Detail & Related papers (2024-11-13T12:03:39Z)
- Learning Density Functionals from Noisy Quantum Data [0.0]
Noisy intermediate-scale quantum (NISQ) devices are used to generate training data for machine learning (ML) models.
We show that a neural-network ML model can successfully generalize from small datasets subject to noise typical of NISQ algorithms.
Our findings suggest a promising pathway for leveraging NISQ devices in practical quantum simulations.
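A toy sketch of this training setup: labels are corrupted with shot noise (the finite-measurement statistics typical of NISQ devices), and a simple least-squares model stands in for the paper's neural network to show generalization to clean test data.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: 0.5 * (1 + np.sin(x))     # "true" expectation value in [0, 1]

# Small training set whose labels carry NISQ-style shot noise.
x_train = rng.uniform(-3, 3, size=40)
shots = 200
y_noisy = rng.binomial(shots, f(x_train)) / shots

# Polynomial least squares stands in for the neural network.
A = np.vander(x_train, 6)
coef, *_ = np.linalg.lstsq(A, y_noisy, rcond=None)

x_test = np.linspace(-3, 3, 100)
test_mse = np.mean((np.vander(x_test, 6) @ coef - f(x_test)) ** 2)
print(f"clean-test MSE: {test_mse:.4f}")  # stays small despite the noisy labels
```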
arXiv Detail & Related papers (2024-09-04T17:59:55Z)
- Benchmarking Quantum Generative Learning: A Study on Scalability and Noise Resilience using QUARK [0.3624329910445628]
This paper investigates the scalability and noise resilience of quantum generative learning applications.
We employ rigorous benchmarking techniques to track progress and identify challenges in scaling QML algorithms.
We show that QGANs are less affected by the curse of dimensionality than QCBMs, and we quantify the extent to which QCBMs are resilient to noise.
arXiv Detail & Related papers (2024-03-27T15:05:55Z)
- Evolutionary-enhanced quantum supervised learning model [0.0]
This study proposes an evolutionary-enhanced ansatz-free supervised learning model.
In contrast to parametrized circuits, our model employs circuits with variable topology that evolves through an elitist method.
Our framework successfully avoids barren plateaus, resulting in enhanced model accuracy.
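A minimal sketch of elitist evolution over circuit topology, with a toy gate set and a state-fidelity fitness in place of the paper's supervised objective: candidate circuits are variable-length gate lists that are mutated each generation, and only the best survive.

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
X = np.array([[0, 1], [1, 0]])
GATES = [H, T, X]                                  # toy discrete gate set

def run(circuit):
    state = np.array([1, 0], dtype=complex)        # start in |0>
    for g in circuit:
        state = GATES[g] @ state
    return state

target = np.array([1, 1j], dtype=complex) / np.sqrt(2)
fitness = lambda c: abs(np.vdot(target, run(c))) ** 2   # state fidelity

def mutate(circuit):
    c = list(circuit)
    if c and rng.random() < 0.3:
        c.pop(rng.integers(len(c)))                # delete a random gate
    else:
        c.insert(rng.integers(len(c) + 1), int(rng.integers(len(GATES))))
    return c

pop = [[int(rng.integers(len(GATES)))] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:5]                                # elitist selection
    pop = elite + [mutate(elite[rng.integers(len(elite))]) for _ in range(15)]

print(f"best fidelity: {max(map(fitness, pop)):.3f}")  # approaches 1.0
```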
arXiv Detail & Related papers (2023-11-14T11:08:47Z)
- Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study [90.34226812493083]
This work aims to investigate the impact of quantization on emergent abilities, which are important characteristics that distinguish LLMs from small language models.
Our empirical experiments show that these emergent abilities still exist in 4-bit quantization models, while 2-bit models encounter severe performance degradation.
To improve the performance of low-bit models, we conduct two special experiments: (1) a fine-grained impact analysis that studies which components (or substructures) are more sensitive to quantization, and (2) performance compensation through model fine-tuning.
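A sketch of the kind of component-level sensitivity probe such an analysis implies, with toy weight matrices in place of an actual LLM: each component is quantized to k bits and the resulting output perturbation is measured.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits):
    """Symmetric uniform quantization to the given bit-width."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

# Toy stand-ins for two components of a model; not an actual LLM.
layers = {"attention": rng.normal(size=(64, 64)),
          "feed_forward": rng.normal(size=(64, 64))}
x = rng.normal(size=64)

for bits in (4, 2):
    for name, W in layers.items():
        drift = np.linalg.norm(quantize(W, bits) @ x - W @ x)
        print(f"{bits}-bit {name}: output drift {drift:.2f}")
# The 2-bit rows show much larger drift, mirroring the reported degradation.
```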
arXiv Detail & Related papers (2023-07-16T15:11:01Z)
- PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models [52.09865918265002]
We propose a novel "quantize before fine-tuning" framework, PreQuant.
PreQuant is compatible with various quantization strategies, with outlier-aware fine-tuning incorporated to correct the induced quantization error.
We demonstrate the effectiveness of PreQuant on the GLUE benchmark using BERT, RoBERTa, and T5.
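A hedged sketch of the quantize-before-fine-tuning flow: all weights are quantized first, then only a small outlier-aware subset is corrected. Restoring those entries to full precision here stands in for the brief gradient-based fine-tune the summary describes; thresholds and shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits=4):
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

W = rng.normal(size=(32, 32))
W_q = quantize(W)                       # step 1: quantize before any tuning

# Step 2: find the weights the quantizer damaged most (the outliers) and
# adjust only those entries.
err = np.abs(W_q - W)
mask = err > np.quantile(err, 0.99)     # top 1% most-damaged entries
W_q[mask] = W[mask]

print(f"tuned {mask.sum()} of {W.size} weights, "
      f"residual error {np.linalg.norm(W_q - W):.3f}")
```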
arXiv Detail & Related papers (2023-05-30T08:41:33Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
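A minimal numpy sketch of single-qubit data re-uploading (the paper itself uses Qiskit): the same input is re-encoded in every layer, interleaved with trainable rotations.

```python
import numpy as np

def ry(t):
    """Single-qubit Y-rotation as a real 2x2 matrix."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def predict(x, thetas):
    state = np.array([1.0, 0.0])            # qubit starts in |0>
    for theta in thetas:                    # one layer per trainable angle
        # Data re-uploading: the encoding gate ry(x) is repeated in every
        # layer, followed by a trainable rotation ry(theta).
        state = ry(theta) @ ry(x) @ state
    return state[1] ** 2                    # P(|1>) used as the class score

thetas = np.array([0.3, -1.2, 0.7])         # trained by an optimizer in practice
print(predict(0.5, thetas))
```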
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- A preprocessing perspective for quantum machine learning classification advantage using NISQ algorithms [0.0]
Current quantum computers are noisy and have few qubits available for testing, making it difficult to demonstrate the current and potential quantum advantage of QML methods.
A Variational Quantum Algorithm (VQA) shows a gain in balanced-accuracy performance when combined with the LDA preprocessing technique.
arXiv Detail & Related papers (2022-08-28T16:58:37Z) - Mixed Precision Low-bit Quantization of Neural Network Language Models
for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs), represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers, are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying sensitivity of different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
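A toy sketch of the mixed-precision idea: instead of one uniform bit-width, each part of the LM is assigned a precision according to its sensitivity to quantization. The sensitivity scores below are made up for illustration; the paper estimates them from the model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits):
    """Symmetric uniform quantizer applied once the bit-width is chosen."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def pick_bits(sensitivity):
    # More sensitive components keep more bits.
    return 8 if sensitivity > 0.8 else 4 if sensitivity > 0.4 else 2

# Hypothetical per-component sensitivity scores.
layers = {"embedding": 0.2, "lstm_gate": 0.9, "output_proj": 0.5}

for name, s in layers.items():
    W = rng.normal(size=(16, 16))
    bits = pick_bits(s)
    err = np.linalg.norm(quantize(W, bits) - W)
    print(f"{name}: {bits}-bit, quantization error {err:.3f}")
```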
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
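A rough sketch of Fisher-embedding-based selection as a simplified proxy for BAIT's objective (not its exact criterion): each candidate point contributes a gradient embedding, and a batch is chosen greedily to maximize the log-determinant of the accumulated Fisher matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(100, 8))           # gradient embeddings, one per point

def batch_score(idx, reg=1e-3):
    """Log-det of the batch Fisher matrix, approximated by outer products."""
    F = G[idx].T @ G[idx] + reg * np.eye(G.shape[1])
    return np.linalg.slogdet(F)[1]

selected = []
for _ in range(5):                      # pick a batch of 5 points to label
    remaining = [i for i in range(len(G)) if i not in selected]
    best = max(remaining, key=lambda i: batch_score(selected + [i]))
    selected.append(best)

print("queried points:", selected)
```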
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.