A Reinforcement Learning Based Encoder-Decoder Framework for Learning
Stock Trading Rules
- URL: http://arxiv.org/abs/2101.03867v1
- Date: Fri, 8 Jan 2021 13:19:01 GMT
- Authors: Mehran Taghian, Ahmad Asadi, Reza Safabakhsh
- Abstract summary: A novel end-to-end model is proposed to learn single instrument trading strategies from a long sequence of raw prices of the instrument.
The parameters of the encoder and the decoder are learned jointly, which enables the encoder to extract features fitted to the DRL task of the decoder.
Experimental results showed that the proposed model outperforms other state-of-the-art models in highly dynamic environments.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A wide variety of deep reinforcement learning (DRL) models have recently been
proposed to learn profitable investment strategies. The rules learned by these
models outperform previous strategies, especially in high-frequency trading
environments. However, it has been shown that the quality of the features extracted
from a long-term sequence of raw prices of the instruments greatly affects the
performance of the trading rules learned by these models. Employing a neural
encoder-decoder structure to extract informative features from complex input
time-series has proved very effective in other popular tasks like neural
machine translation and video captioning in which the models face a similar
problem. The encoder-decoder framework extracts highly informative features
from a long sequence of prices along with learning how to generate outputs
based on the extracted features. In this paper, a novel end-to-end model based
on the neural encoder-decoder framework combined with DRL is proposed to learn
single instrument trading strategies from a long sequence of raw prices of the
instrument. The proposed model consists of an encoder which is a neural
structure responsible for learning informative features from the input
sequence, and a decoder which is a DRL model responsible for learning
profitable strategies based on the features extracted by the encoder. The
parameters of the encoder and the decoder are learned jointly, which
enables the encoder to extract features fitted to the DRL task of the decoder.
In addition, the effects of different structures for the encoder and various
forms of the input sequences on the performance of the learned strategies are
investigated. Experimental results showed that the proposed model outperforms
other state-of-the-art models in highly dynamic environments.
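For concreteness, here is a minimal PyTorch sketch of this kind of jointly trained encoder-decoder DRL trader. The GRU encoder, the DQN-style head, the window length, and the reward are illustrative assumptions, not the authors' exact design; the point is that a single optimizer updates both parts, so the encoder's features are shaped by the trading objective.
```python
# Illustrative sketch, not the paper's exact architecture: a GRU encoder
# summarizes a raw price window; a DQN-style decoder head maps the summary
# to Q-values over {sell, hold, buy}. One optimizer trains both parts, so
# the encoder's features are fitted to the decoder's DRL task.
import torch
import torch.nn as nn

class EncoderDecoderTrader(nn.Module):
    def __init__(self, hidden=128, n_actions=3):
        super().__init__()
        self.encoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.decoder = nn.Sequential(              # DRL head: features -> Q-values
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, prices):                     # prices: (batch, window)
        _, h = self.encoder(prices.unsqueeze(-1))  # h: (1, batch, hidden)
        return self.decoder(h.squeeze(0))          # (batch, n_actions)

model, target = EncoderDecoderTrader(), EncoderDecoderTrader()
target.load_state_dict(model.state_dict())
opt = torch.optim.Adam(model.parameters(), lr=1e-4)   # joint update of both parts

# One Q-learning step on a dummy transition batch (random stand-in data).
s, s2 = torch.randn(32, 64), torch.randn(32, 64)      # price windows at t, t+1
a = torch.randint(0, 3, (32,))                        # actions taken
r = torch.randn(32)                                   # e.g. per-step profit
q = model(s).gather(1, a.unsqueeze(1)).squeeze(1)
with torch.no_grad():
    q_next = target(s2).max(dim=1).values
loss = nn.functional.mse_loss(q, r + 0.99 * q_next)   # gamma = 0.99
opt.zero_grad(); loss.backward(); opt.step()          # gradients reach the encoder
```
Because the encoder is inside the same computation graph as the Q-learning loss, there is no separate feature-extraction pretraining stage; that is the end-to-end property the abstract emphasizes.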
Related papers
- Neural Speech and Audio Coding [19.437080345021105]
The paper explores the integration of model-based and data-driven approaches within the realm of neural speech and audio coding systems.
It introduces a neural network-based signal enhancer designed to post-process existing codecs' output.
The paper examines the use of psychoacoustically calibrated loss functions to train end-to-end neural audio codecs.
arXiv Detail & Related papers (2024-08-13T15:13:21Z)
- NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models [29.468888611690346]
We propose a simple and effective framework, NASH, that narrows the encoder and shortens the decoder networks of encoder-decoder models.
Our findings highlight two insights: (1) the number of decoder layers is the dominant factor in inference speed, and (2) low sparsity in the pruned encoder network enhances generation quality. A toy sketch of these two knobs follows this entry.
arXiv Detail & Related papers (2023-10-16T04:27:36Z)
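A toy PyTorch sketch of the two pruning knobs described in the entry above; the layer choices, model sizes, and 20% sparsity are assumptions for illustration, not NASH's actual recipe.
```python
# Toy sketch of NASH-style structured pruning (assumed from the summary):
# "shorten" the decoder by keeping a few layers, and "narrow" the encoder
# with mild unstructured magnitude pruning.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

enc_layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
dec_layer = nn.TransformerDecoderLayer(d_model=256, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(enc_layer, num_layers=12)
decoder = nn.TransformerDecoder(dec_layer, num_layers=12)

# Shorten the decoder: keep 3 of 12 layers (decoder depth dominates latency).
decoder.layers = nn.ModuleList([decoder.layers[i] for i in (0, 5, 11)])
decoder.num_layers = len(decoder.layers)

# Narrow the encoder with LOW sparsity (per the summary, low sparsity here
# preserves generation quality): prune 20% of each linear layer's weights.
for module in encoder.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.2)

src = torch.randn(2, 10, 256)       # (batch, src_len, d_model)
tgt = torch.randn(2, 7, 256)        # (batch, tgt_len, d_model)
out = decoder(tgt, encoder(src))    # pruned model still runs end to end
print(out.shape)                    # torch.Size([2, 7, 256])
```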
- String-based Molecule Generation via Multi-decoder VAE [56.465033997245776]
We investigate the problem of string-based molecular generation via variational autoencoders (VAEs).
We propose a simple yet effective idea to improve the performance of VAEs for this task.
In our experiments, the proposed VAE model performs particularly well at generating samples from out-of-domain distributions.
arXiv Detail & Related papers (2022-08-23T03:56:30Z)
- CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning [92.36705236706678]
"CodeRL" is a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning.
During inference, we introduce a new generation procedure with a critical sampling strategy.
For the model backbones, we extended the encoder-decoder architecture of CodeT5 with enhanced learning objectives.
arXiv Detail & Related papers (2022-07-05T02:42:15Z)
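A generic policy-gradient sketch of the RL ingredient in this kind of training. The toy LM, the stand-in unit-test reward, and plain REINFORCE are simplifying assumptions; CodeRL itself uses an actor-critic scheme on top of CodeT5 with real unit-test feedback.
```python
# Heavily simplified stand-in for RL fine-tuning of a code generator:
# sample programs token by token, score them with a "unit test" reward,
# and push up the log-probability of high-reward samples (REINFORCE).
import torch
import torch.nn as nn

vocab, hidden, length = 50, 64, 12
emb = nn.Embedding(vocab, hidden)
rnn = nn.GRU(hidden, hidden, batch_first=True)
head = nn.Linear(hidden, vocab)
params = list(emb.parameters()) + list(rnn.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

def unit_test_reward(tokens):
    # Stand-in for executing generated programs against unit tests:
    # +1 if the toy sample ever emits token 7, else -1.
    passed = (tokens == 7).any(dim=1)
    return torch.where(passed, torch.tensor(1.0), torch.tensor(-1.0))

tok = torch.zeros(8, 1, dtype=torch.long)       # batch of 8, BOS token = 0
logps = []
for _ in range(length):
    out, _ = rnn(emb(tok))                       # (8, t, hidden)
    dist = torch.distributions.Categorical(logits=head(out[:, -1]))
    nxt = dist.sample()                          # (8,) next tokens
    logps.append(dist.log_prob(nxt))
    tok = torch.cat([tok, nxt.unsqueeze(1)], dim=1)

reward = unit_test_reward(tok)                   # (8,) return per sample
loss = -(torch.stack(logps, dim=1).sum(dim=1) * reward).mean()   # REINFORCE
opt.zero_grad(); loss.backward(); opt.step()
```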
- Great Truths are Always Simple: A Rather Simple Knowledge Encoder for Enhancing the Commonsense Reasoning Capacity of Pre-Trained Models [89.98762327725112]
Commonsense reasoning in natural language is a desired ability of artificial intelligent systems.
For solving complex commonsense reasoning tasks, a typical solution is to enhance pre-trained language models (PTMs) with a knowledge-aware graph neural network (GNN) encoder.
Despite their effectiveness, these approaches are built on heavy architectures and cannot clearly explain how external knowledge resources improve the reasoning capacity of PTMs.
arXiv Detail & Related papers (2022-05-04T01:27:36Z)
- Deep Learning-Based Intra Mode Derivation for Versatile Video Coding [65.96100964146062]
An intelligent intra mode derivation method, termed Deep Learning based Intra Mode Derivation (DLIMD), is proposed in this paper.
The architecture of DLIMD is developed to adapt to different quantization parameter settings and variable coding blocks, including non-square ones; a rough sketch of such a predictor follows this entry.
The proposed method achieves 2.28%, 1.74%, and 2.18% bit-rate reduction on average for the Y, U, and V components on the Versatile Video Coding (VVC) test model platform.
arXiv Detail & Related papers (2022-04-08T13:23:59Z)
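A rough PyTorch sketch of what a learned intra-mode predictor of this general shape could look like. The layer stack, the QP conditioning, and the 67-mode output are VVC-inspired assumptions, not the actual DLIMD architecture.
```python
# Hypothetical intra-mode classifier: a small CNN over reference pixels,
# conditioned on the quantization parameter (QP), with adaptive pooling so
# variable (including non-square) block shapes are handled.
import torch
import torch.nn as nn

class IntraModePredictor(nn.Module):
    def __init__(self, n_modes=67):                  # VVC defines 67 intra modes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # copes with variable blocks
        )
        self.classify = nn.Linear(32 + 1, n_modes)   # +1 for the QP input

    def forward(self, ref_pixels, qp):
        f = self.features(ref_pixels).flatten(1)            # (batch, 32)
        f = torch.cat([f, qp.unsqueeze(1) / 63.0], dim=1)   # condition on QP
        return self.classify(f)                             # logits over modes

pred = IntraModePredictor()
block = torch.rand(4, 1, 8, 16)       # non-square 8x16 blocks also work
qp = torch.full((4,), 32.0)
print(pred(block, qp).argmax(dim=1))  # most probable intra mode per block
```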
- Neural Data-Dependent Transform for Learned Image Compression [72.86505042102155]
We build a neural data-dependent transform and introduce a continuous online mode decision mechanism to jointly optimize the coding efficiency for each individual image.
The experimental results show the effectiveness of the proposed neural-syntax design and the continuous online mode decision mechanism.
arXiv Detail & Related papers (2022-03-09T14:56:48Z)
- Unsupervised Learning of Neurosymbolic Encoders [40.3575054882791]
We present a framework for the unsupervised learning of neurosymbolic encoders, i.e., encoders obtained by composing neural networks with symbolic programs from a domain-specific language.
Such a framework can naturally incorporate symbolic expert knowledge into the learning process and lead to more interpretable and factorized latent representations than fully neural encoders.
arXiv Detail & Related papers (2021-07-28T02:16:14Z)
- A New Modal Autoencoder for Functionally Independent Feature Extraction [6.690183908967779]
A new modal autoencoder (MAE) is proposed, obtained by orthogonalising the columns of the readout weight matrix; a minimal sketch of this penalty follows the entry.
The results were validated on the MNIST variations and USPS classification benchmark suite.
The new MAE introduces a very simple training principle for autoencoders and could be promising for the pre-training of deep neural networks.
arXiv Detail & Related papers (2020-06-25T13:25:10Z)
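A minimal sketch of the orthogonalisation idea as stated in the summary: an autoencoder trained with a penalty pushing the columns of the readout weight matrix toward orthonormality, so each latent unit drives an independent output mode. Sizes and the penalty weight are assumptions.
```python
# Toy autoencoder with an orthogonality penalty on the readout weights.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(784, 32), nn.Tanh())   # MNIST-sized input assumed
readout = nn.Linear(32, 784)                         # readout weight: (784, 32)
opt = torch.optim.Adam(list(enc.parameters()) + list(readout.parameters()), lr=1e-3)

def ortho_penalty(W):
    # || W^T W - I ||_F^2: zero exactly when the 32 columns are orthonormal.
    g = W.t() @ W
    return ((g - torch.eye(g.shape[0])) ** 2).sum()

x = torch.rand(64, 784)                              # stand-in for an MNIST batch
recon = readout(enc(x))
loss = nn.functional.mse_loss(recon, x) + 1e-3 * ortho_penalty(readout.weight)
opt.zero_grad(); loss.backward(); opt.step()
```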
- Rethinking and Improving Natural Language Generation with Layer-Wise Multi-View Decoding [59.48857453699463]
In sequence-to-sequence learning, the decoder relies on the attention mechanism to efficiently extract information from the encoder.
Recent work has proposed to use representations from different encoder layers for diversified levels of information.
We propose layer-wise multi-view decoding: for each decoder layer, the representations from the last encoder layer, which serve as a global view, are supplemented with those from other encoder layers for a stereoscopic view of the source sequences. A minimal sketch of this idea follows the entry.
arXiv Detail & Related papers (2020-05-16T20:00:39Z)
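A minimal sketch of this multi-view idea: every encoder layer's output is kept, and decoder layer i cross-attends to the last encoder layer (global view) concatenated with encoder view i. Pairing layer i with view i, and all sizes, are assumptions for illustration.
```python
# Layer-wise multi-view decoding sketch: each decoder layer sees the last
# encoder layer's output plus a different earlier layer's output, here by
# concatenating the two views along the source length.
import torch
import torch.nn as nn

d, n_layers = 128, 4
enc_layers = nn.ModuleList(
    nn.TransformerEncoderLayer(d, nhead=4, batch_first=True) for _ in range(n_layers))
dec_layers = nn.ModuleList(
    nn.TransformerDecoderLayer(d, nhead=4, batch_first=True) for _ in range(n_layers))

src = torch.randn(2, 9, d)
tgt = torch.randn(2, 5, d)

# Keep every encoder layer's output instead of only the last one.
views, h = [], src
for layer in enc_layers:
    h = layer(h)
    views.append(h)

# Decoder layer i attends to the global view (views[-1]) plus view i.
out = tgt
for i, layer in enumerate(dec_layers):
    memory = torch.cat([views[-1], views[i]], dim=1)   # (2, 18, d)
    out = layer(out, memory)
print(out.shape)   # torch.Size([2, 5, 128])
```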