A Quantum Neural Network Transfer-Learning Model for Forecasting Problems with Continuous and Discrete Variables
- URL: http://arxiv.org/abs/2503.07633v2
- Date: Tue, 25 Mar 2025 13:35:29 GMT
- Title: A Quantum Neural Network Transfer-Learning Model for Forecasting Problems with Continuous and Discrete Variables
- Authors: Ismael Abdulrahman
- Abstract summary: This study introduces simple yet effective continuous- and discrete-variable quantum neural network (QNN) models as a transfer-learning approach for forecasting tasks. The CV-QNN features a single quantum layer with two qubits to establish entanglement and utilizes a minimal set of quantum gates. The model's frozen parameters are successfully applied to various forecasting tasks, including energy consumption, traffic flow, weather conditions, and cryptocurrency price prediction.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study introduces simple yet effective continuous- and discrete-variable quantum neural network (QNN) models as a transfer-learning approach for forecasting tasks. The CV-QNN features a single quantum layer with two qubits to establish entanglement and utilizes a minimal set of quantum gates, including displacement, rotation, beam splitter, squeezing, and a non-Gaussian cubic-phase gate, with a maximum of eight trainable parameters. A key advantage of this model is its ability to be trained on a single dataset, after which the learned parameters can be transferred to other forecasting problems with little to no fine-tuning. Initially trained on the Kurdistan load demand dataset, the model's frozen parameters are successfully applied to various forecasting tasks, including energy consumption, traffic flow, weather conditions, and cryptocurrency price prediction, demonstrating strong performance. Furthermore, the study introduces a discrete-variable quantum model with an equivalent 2- and 4-wire configuration and presents a performance assessment, showing good but relatively lower effectiveness compared to the continuous-variable model.
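The circuit described above is compact enough to sketch. The following is a minimal, illustrative PennyLane version of a two-wire CV circuit with the gate set named in the abstract (displacement, rotation, beam splitter, squeezing, cubic phase) and eight trainable parameters; the software stack (the pennylane-sf Strawberry Fields plugin), gate ordering, data encoding, and readout are assumptions, not the authors' exact model.

```python
# Minimal sketch of a two-wire CV-QNN layer of the kind described above.
# Assumes the pennylane-sf plugin (pip install pennylane pennylane-sf); gate
# ordering, parameter assignment, encoding, and readout are illustrative
# guesses rather than the authors' exact circuit.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("strawberryfields.fock", wires=2, cutoff_dim=6)

@qml.qnode(dev)
def cv_qnn(inputs, params):
    # Encode two (scaled) classical features as displacements of the two modes.
    qml.Displacement(inputs[0], 0.0, wires=0)
    qml.Displacement(inputs[1], 0.0, wires=1)
    # Single trainable layer, eight parameters in total.
    qml.Rotation(params[0], wires=0)
    qml.Rotation(params[1], wires=1)
    qml.Beamsplitter(params[2], params[3], wires=[0, 1])  # entangling gate
    qml.Squeezing(params[4], 0.0, wires=0)
    qml.Squeezing(params[5], 0.0, wires=1)
    qml.CubicPhase(params[6], wires=0)                     # non-Gaussian gate
    qml.CubicPhase(params[7], wires=1)
    # Read out the mean photon number of mode 0 as the (unscaled) forecast.
    return qml.expval(qml.NumberOperator(0))

params = np.array([0.01] * 8, requires_grad=True)
print(cv_qnn(np.array([0.3, 0.7]), params))
```

Under the transfer-learning scheme in the abstract, `params` trained on the load-demand data would then be frozen and reused on a new forecasting dataset, with little or no fine-tuning.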
Related papers
- Data re-uploading in Quantum Machine Learning for time series: application to traffic forecasting [1.2885961238169932]
We present the first application of quantum data re-uploading in the context of transport forecasting.
This technique allows quantum models to better capture complex patterns, such as traffic dynamics, by repeatedly encoding classical data into a quantum state (sketched below).
Our results show that hybrid models achieve competitive accuracy with state-of-the-art classical methods, especially when the number of qubits and re-uploading blocks is increased.
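Data re-uploading itself is easy to illustrate. Below is a minimal single-qubit re-uploading sketch in PennyLane in which the same feature vector is re-encoded before every trainable block; the block count, gate choices, and readout are illustrative assumptions, not the circuit used in that paper.

```python
# Minimal single-qubit data re-uploading sketch (PennyLane): the input is
# re-encoded before each trainable block, which is the mechanism described above.
# Block count, gates, and readout are illustrative, not the paper's circuit.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def reuploading_circuit(x, weights):
    for layer_weights in weights:
        qml.Rot(x[0], x[1], x[2], wires=0)  # re-encode the classical features
        qml.Rot(*layer_weights, wires=0)     # trainable block
    return qml.expval(qml.PauliZ(0))

n_blocks = 3
weights = np.array(np.random.uniform(0, np.pi, (n_blocks, 3)), requires_grad=True)
x = np.array([0.1, 0.5, -0.2])               # e.g. scaled traffic features
print(reuploading_circuit(x, weights))
```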
arXiv Detail & Related papers (2025-01-22T10:21:00Z) - Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z) - Training-efficient density quantum machine learning [2.918930150557355]
Quantum machine learning requires powerful, flexible and efficiently trainable models.
We present density quantum neural networks, a learning model incorporating randomisation over a set of trainable unitaries.
arXiv Detail & Related papers (2024-05-30T16:40:28Z) - Multi-Scale Feature Fusion Quantum Depthwise Convolutional Neural Networks for Text Classification [3.0079490585515343]
We propose a novel quantum neural network (QNN) model based on quantum convolution.
We develop the quantum depthwise convolution that significantly reduces the number of parameters and lowers computational complexity.
We also introduce the multi-scale feature fusion mechanism to enhance model performance by integrating word-level and sentence-level features.
arXiv Detail & Related papers (2024-05-22T10:19:34Z) - Coherent Feed Forward Quantum Neural Network [2.1178416840822027]
Quantum machine learning, focusing on quantum neural networks (QNNs), remains a vastly uncharted field of study.
We introduce a bona fide QNN model, which seamlessly aligns with the versatility of a traditional FFNN in terms of its adaptable intermediate layers and nodes.
We test our proposed model on various benchmarking datasets such as the diagnostic breast cancer (Wisconsin) and credit card fraud detection datasets.
arXiv Detail & Related papers (2024-02-01T15:13:26Z) - PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language
Models [52.09865918265002]
We propose a novel "quantize before fine-tuning" framework, PreQuant.
PreQuant is compatible with various quantization strategies, with outlier-aware fine-tuning incorporated to correct the induced quantization error.
We demonstrate the effectiveness of PreQuant on the GLUE benchmark using BERT, RoBERTa, and T5.
arXiv Detail & Related papers (2023-05-30T08:41:33Z) - Quantum machine learning for image classification [39.58317527488534]
This research introduces two quantum machine learning models that leverage the principles of quantum mechanics for effective computations.
Our first model, a hybrid quantum neural network with parallel quantum circuits, enables the execution of computations even in the noisy intermediate-scale quantum era.
A second model introduces a hybrid quantum neural network with a Quanvolutional layer, reducing image resolution via a convolution process.
arXiv Detail & Related papers (2023-04-18T18:23:20Z) - A performance characterization of quantum generative models [35.974070202997176]
We compare quantum circuits used for quantum generative modeling.
We learn the underlying probability distribution of the data sets via two popular training methods.
We empirically find that a variant of the discrete architecture, which learns the copula of the probability distribution, outperforms all other methods.
arXiv Detail & Related papers (2023-01-23T11:00:29Z) - Towards Neural Variational Monte Carlo That Scales Linearly with System
Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductivity.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z) - Vertical Layering of Quantized Neural Networks for Heterogeneous
Inference [57.42762335081385]
We study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one.
We can theoretically achieve any precision network for on-demand service while only needing to train and maintain one model.
arXiv Detail & Related papers (2022-12-10T15:57:38Z) - Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z) - Human Trajectory Prediction via Neural Social Physics [63.62824628085961]
Trajectory prediction has been widely pursued in many fields, and many model-based and model-free methods have been explored.
We propose a new method combining both methodologies based on a new Neural Differential Equation model.
Our new model (Neural Social Physics or NSP) is a deep neural network within which we use an explicit physics model with learnable parameters.
arXiv Detail & Related papers (2022-07-21T12:11:18Z) - Hyperparameter Importance of Quantum Neural Networks Across Small
Datasets [1.1470070927586014]
A quantum neural network can play a similar role to a classical neural network.
Very little is known about suitable circuit architectures for machine learning.
This work introduces new methodologies to study quantum machine learning models.
arXiv Detail & Related papers (2022-06-20T20:26:20Z) - Efficient Quantum Feature Extraction for CNN-based Learning [5.236201168829204]
We propose a quantum-classical deep network structure to enhance classical CNN model discriminability.
We build a parameterized quantum circuit (PQC), a more potent function approximator, with more complex structures to capture the features within the receptive field.
The results show that the model with a highly expressive ansatz achieves lower cost and higher accuracy.
arXiv Detail & Related papers (2022-01-04T17:04:07Z) - MoEfication: Conditional Computation of Transformer Models for Efficient
Inference [66.56994436947441]
Transformer-based pre-trained language models can achieve superior performance on most NLP tasks due to their large parameter capacity, but they also incur huge computation costs.
We explore to accelerate large-model inference by conditional computation based on the sparse activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
arXiv Detail & Related papers (2021-10-05T02:14:38Z) - Recurrent Quantum Neural Networks [7.6146285961466]
Recurrent neural networks are the foundation of many sequence-to-sequence models in machine learning.
We construct a quantum recurrent neural network (QRNN) with demonstrable performance on non-trivial tasks.
We evaluate the QRNN on MNIST classification, both by feeding the QRNN each image pixel-by-pixel and by utilising modern data augmentation as a preprocessing step.
arXiv Detail & Related papers (2020-06-25T17:59:44Z) - Phase Detection with Neural Networks: Interpreting the Black Box [58.720142291102135]
Neural networks (NNs) usually hinder any insight into the reasoning behind their predictions.
We demonstrate how influence functions can unravel the black box of NN when trained to predict the phases of the one-dimensional extended spinless Fermi-Hubbard model at half-filling.
arXiv Detail & Related papers (2020-04-09T17:45:45Z) - Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z) - Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantization neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.