Gated Deeper Models are Effective Factor Learners
- URL: http://arxiv.org/abs/2305.10693v1
- Date: Thu, 18 May 2023 04:07:47 GMT
- Title: Gated Deeper Models are Effective Factor Learners
- Authors: Jingjing Guo
- Abstract summary: We present a 5-layer deep neural network that generates more meaningful factors in a 2048-dimensional space.
We evaluate our model on over 2,000 stocks from the China market using their records from the past three years.
- Score: 0.9137554315375922
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Precisely forecasting the excess returns of an asset (e.g., Tesla stock) is
beneficial to all investors. However, the unpredictability of market dynamics,
influenced by human behaviors, makes this a challenging task. In prior
research, researchers have manually crafted a variety of factors as signals to
guide their investing process. In contrast, this paper views the problem from a
different perspective: we train a deep learning model to combine those
human-designed factors to predict the trend of excess returns. To this end, we
present a 5-layer deep neural network that generates more meaningful factors in
a 2048-dimensional space. Modern network design techniques are utilized to
enhance training robustness and reduce overfitting. Additionally, we propose a
gated network that dynamically filters out noise-learned features, resulting in
improved performance. We evaluate our model on over 2,000 stocks from the China
market using their records from the past three years. The experimental results
show that the proposed gated activation layer and deep neural network
contribute to the superior performance of our model. In summary, the proposed
model exhibits promising results and could potentially benefit investors
seeking to optimize their investment strategies.
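The abstract does not give the exact formulation of the gated activation layer, but a common gating pattern (in the style of GLU or LSTM gates) multiplies a feature path by a sigmoid gate that can suppress noisy features. A minimal single-unit sketch, with purely illustrative weights:

```python
import math

def gated_activation(x, w_feat, w_gate):
    """One gated unit: tanh feature path modulated by a sigmoid gate.

    Output = tanh(w_feat . x) * sigmoid(w_gate . x); a gate near zero
    filters out the (possibly noise-learned) feature.
    """
    feat = math.tanh(sum(wi * xi for wi, xi in zip(w_feat, x)))
    gate = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w_gate, x))))
    return feat * gate

# Toy check: a strongly negative gate pre-activation suppresses the feature,
# while a strongly positive one passes it through almost unchanged.
x = [1.0, 2.0]
open_gate = gated_activation(x, [0.5, 0.5], [10.0, 10.0])      # gate ~ 1
closed_gate = gated_activation(x, [0.5, 0.5], [-10.0, -10.0])  # gate ~ 0
```

In the paper's setting, such units would act elementwise across the 2048-dimensional factor space, with the gate weights learned jointly with the rest of the network.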
Related papers
- Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z)
- Cryptocurrency Portfolio Optimization by Neural Networks [81.20955733184398]
This paper proposes an effective algorithm based on neural networks to take advantage of these investment products.
A deep neural network, which outputs the allocation weight of each asset at a time interval, is trained to maximize the Sharpe ratio.
A novel loss term is proposed to regulate the network's bias towards a specific asset, thus enforcing the network to learn an allocation strategy that is close to a minimum variance strategy.
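The Sharpe ratio objective mentioned above is standard; the cited paper's exact loss is not reproduced here, but the usual annualized form (with illustrative daily returns) can be computed as:

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
    """Annualized Sharpe ratio of a series of per-period portfolio returns."""
    excess = [r - risk_free for r in returns]
    return (statistics.mean(excess) / statistics.stdev(excess)) * periods_per_year ** 0.5

# Illustrative daily portfolio returns; training would minimize the negative
# Sharpe ratio so that gradient descent maximizes risk-adjusted return.
daily = [0.001, -0.002, 0.003, 0.0005, 0.0015]
loss = -sharpe_ratio(daily)
```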
arXiv Detail & Related papers (2023-10-02T12:33:28Z)
- Computation-efficient Deep Learning for Computer Vision: A Survey [121.84121397440337]
Deep learning models have reached or even exceeded human-level performance in a range of visual perception tasks.
Deep learning models usually demand significant computational resources, leading to impractical power consumption, latency, or carbon emissions in real-world scenarios.
New research focus is computationally efficient deep learning, which strives to achieve satisfactory performance while minimizing the computational cost during inference.
arXiv Detail & Related papers (2023-08-27T03:55:28Z)
- Bi-LSTM Price Prediction based on Attention Mechanism [2.455751370157653]
We propose a bidirectional LSTM neural network based on an attention mechanism and evaluate it on two popular assets, gold and Bitcoin.
Using the forecast results, we achieved a return of 1089.34% in two years.
We also compare the attention Bi-LSTM model proposed in this paper with the traditional model, and the results show that our model has the best performance in this data set.
arXiv Detail & Related papers (2022-12-07T03:56:11Z)
- Factor Investing with a Deep Multi-Factor Model [123.52358449455231]
We develop a novel deep multi-factor model that adopts industry neutralization and market neutralization modules with clear financial insights.
Tests on real-world stock market data demonstrate the effectiveness of our deep multi-factor model.
arXiv Detail & Related papers (2022-10-22T14:47:11Z)
- Optimal consumption-investment choices under wealth-driven risk aversion [0.0]
CRRA utility, in which risk aversion is a constant, is commonly seen in various economic models.
This paper focuses on numerical solutions to the optimal consumption-investment choices under wealth-driven risk aversion, computed with neural networks.
arXiv Detail & Related papers (2022-10-03T14:07:11Z)
- Embedding-based neural network for investment return prediction [5.114559245995975]
In recent years, deep learning has developed rapidly, and investment return prediction based on deep learning has become an emerging research topic.
This paper proposes an embedding-based dual branch approach to predict an investment's return.
The results demonstrate the superiority of our approach compared to Xgboost, Lightgbm and Catboost.
arXiv Detail & Related papers (2022-09-26T17:20:24Z)
- Deep reinforcement learning for portfolio management based on the empirical study of Chinese stock market [3.5952664589125916]
This paper verifies that deep reinforcement learning, a current cutting-edge technology, can be applied to portfolio management.
In experiments, we use our model in several randomly selected portfolios which include CSI300 that represents the market's rate of return and the randomly selected constituents of CSI500.
arXiv Detail & Related papers (2020-12-26T16:25:20Z)
- Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long term bonds as test assets, we investigate neural networks.
We find that our networks perform significantly worse when fed substantially less data.
arXiv Detail & Related papers (2020-05-04T17:41:59Z)
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by the Expectation-Maximization, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively.
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.