The cross-sectional stock return predictions via quantum neural network and tensor network
- URL: http://arxiv.org/abs/2304.12501v2
- Date: Tue, 27 Feb 2024 04:49:22 GMT
- Title: The cross-sectional stock return predictions via quantum neural network and tensor network
- Authors: Nozomu Kobayashi, Yoshiyuki Suimon, Koichi Miyamoto, Kosuke Mitarai
- Abstract summary: We investigate the application of quantum and quantum-inspired machine learning algorithms to stock return predictions.
We evaluate the performance of quantum neural network, an algorithm suited for noisy intermediate-scale quantum computers, and tensor network, a quantum-inspired machine learning algorithm.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate the application of quantum and quantum-inspired
machine learning algorithms to stock return predictions. Specifically, we
evaluate the performance of quantum neural network, an algorithm suited for
noisy intermediate-scale quantum computers, and tensor network, a
quantum-inspired machine learning algorithm, against classical models such as
linear regression and neural networks. To evaluate their abilities, we
construct portfolios based on their predictions and measure investment
performances. The empirical study on the Japanese stock market shows the tensor
network model achieves superior performance compared to classical benchmark
models, including linear and neural network models. Though the quantum neural
network model attains a lower risk-adjusted excess return than the classical
neural network models over the whole period, both the quantum neural network
and tensor network models perform better in the most recent market
environment, which suggests their capability to capture non-linearity among
the input features.
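As a rough illustration of the evaluation pipeline described in the abstract, the sketch below fits a cross-sectional return-prediction model, ranks stocks by predicted return, and measures a risk-adjusted return on the resulting portfolio. The equal-weight long-short quintile construction, the 60-month rolling training window, and the plain least-squares predictor (one of the paper's classical baselines) are illustrative assumptions rather than the paper's exact procedure; the quantum neural network or tensor network model would take the place of fit_predict.

import numpy as np

rng = np.random.default_rng(0)
n_months, n_stocks, n_features = 120, 200, 10

# Synthetic panel: X[t] holds the feature vectors observed at month t and
# r[t] the forward (next-month) stock returns earned after observing X[t].
X = rng.normal(size=(n_months, n_stocks, n_features))
beta = rng.normal(scale=0.02, size=n_features)
r = X @ beta + rng.normal(scale=0.05, size=(n_months, n_stocks))

def fit_predict(X_train, y_train, X_test):
    # Cross-sectional least-squares baseline; a quantum neural network or
    # tensor network regressor would be substituted here.
    coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return X_test @ coef

train_window = 60  # rolling training window in months (assumption)
portfolio_returns = []
for t in range(train_window, n_months):
    X_train = X[t - train_window:t].reshape(-1, n_features)
    y_train = r[t - train_window:t].reshape(-1)
    pred = fit_predict(X_train, y_train, X[t])

    # Equal-weight long the top quintile and short the bottom quintile of
    # predicted returns, rebalanced every month (assumption).
    order = np.argsort(pred)
    k = n_stocks // 5
    portfolio_returns.append(r[t][order[-k:]].mean() - r[t][order[:k]].mean())

portfolio_returns = np.asarray(portfolio_returns)
# Annualized risk-adjusted (Sharpe-like) ratio of the long-short portfolio.
print(np.sqrt(12) * portfolio_returns.mean() / portfolio_returns.std())

The paper reports risk-adjusted excess returns of portfolios built from each model's predictions; the simple Sharpe-style ratio of the long-short spread above is used only for brevity.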
Related papers
- Exploring Quantum Neural Networks for Demand Forecasting [0.25128687379089687]
This paper presents an approach for training demand prediction models using quantum neural networks.
A classical recurrent neural network was used to compare the results.
The results show a similar predictive capacity between the classical and quantum models.
arXiv Detail & Related papers (2024-10-19T13:01:31Z)
- CTRQNets & LQNets: Continuous Time Recurrent and Liquid Quantum Neural Networks [76.53016529061821]
The Liquid Quantum Neural Network (LQNet) and the Continuous Time Recurrent Quantum Neural Network (CTRQNet) are developed.
LQNet and CTRQNet achieve accuracy increases as high as 40% on CIFAR 10 through binary classification.
arXiv Detail & Related papers (2024-08-28T00:56:03Z)
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Bridging Classical and Quantum Machine Learning: Knowledge Transfer From Classical to Quantum Neural Networks Using Knowledge Distillation [0.0]
This paper introduces a new method to transfer knowledge from classical to quantum neural networks using knowledge distillation.
We adapt classical convolutional neural network (CNN) architectures like LeNet and AlexNet to serve as teacher networks.
Quantum models achieve an average accuracy improvement of 0.80% on the MNIST dataset and 5.40% on the more complex Fashion MNIST dataset.
arXiv Detail & Related papers (2023-11-23T05:06:43Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Low-bit Quantization of Recurrent Neural Network Language Models Using Alternating Direction Methods of Multipliers [67.688697838109]
This paper presents a novel method to train quantized RNNLMs from scratch using the alternating direction method of multipliers (ADMM).
Experiments on two tasks suggest the proposed ADMM quantization achieved a model size compression factor of up to 31 times over the full precision baseline RNNLMs.
arXiv Detail & Related papers (2021-11-29T09:30:06Z)
- Tensor networks for unsupervised machine learning [9.897828174118974]
We present the Autoregressive Matrix Product States (AMPS), a tensor-network-based model combining the matrix product states from quantum many-body physics and the autoregressive models from machine learning.
We show that the proposed model significantly outperforms the existing tensor-network-based models and the restricted Boltzmann machines.
arXiv Detail & Related papers (2021-06-24T12:51:00Z)
- Quantum Optical Convolutional Neural Network: A Novel Image Recognition Framework for Quantum Computing [0.0]
We report a novel quantum-computing-based deep learning model, the Quantum Optical Convolutional Neural Network (QOCNN).
We benchmarked this new architecture against a traditional CNN based on the seminal LeNet model.
We conclude that switching to a quantum computing based approach to deep learning may result in comparable accuracies to classical models.
arXiv Detail & Related papers (2020-12-19T23:10:04Z)
- Quantum neural networks with deep residual learning [29.929891641757273]
In this paper, a novel quantum neural network with deep residual learning (ResQNN) is proposed.
Our ResQNN is able to learn an unknown unitary and achieves remarkable performance.
arXiv Detail & Related papers (2020-12-14T18:11:07Z)
- Stochastic Markov Gradient Descent and Training Low-Bit Neural Networks [77.34726150561087]
We introduce Stochastic Markov Gradient Descent (SMGD), a discrete optimization method applicable to training quantized neural networks.
We provide theoretical guarantees of algorithm performance as well as encouraging numerical results.
arXiv Detail & Related papers (2020-08-25T15:48:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.