Mean Absolute Directional Loss as a New Loss Function for Machine
Learning Problems in Algorithmic Investment Strategies
- URL: http://arxiv.org/abs/2309.10546v1
- Date: Tue, 19 Sep 2023 11:52:13 GMT
- Title: Mean Absolute Directional Loss as a New Loss Function for Machine
Learning Problems in Algorithmic Investment Strategies
- Authors: Jakub Michańków, Paweł Sakowski, Robert Ślepaczuk
- Abstract summary: This paper investigates the issue of an adequate loss function in the optimization of machine learning models used in the forecasting of financial time series.
We propose the Mean Absolute Directional Loss function, solving important problems of classical forecast error functions in extracting information from forecasts to create efficient buy/sell signals.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates the issue of an adequate loss function in the
optimization of machine learning models used in the forecasting of financial
time series for the purpose of algorithmic investment strategies (AIS)
construction. We propose the Mean Absolute Directional Loss (MADL) function,
solving important problems of classical forecast error functions in extracting
information from forecasts to create efficient buy/sell signals in algorithmic
investment strategies. Finally, based on the data from two different asset
classes (cryptocurrencies: Bitcoin and commodities: Crude Oil), we show that
the new loss function enables us to select better hyperparameters for the LSTM
model and obtain more efficient investment strategies, with regard to
risk-adjusted return metrics on the out-of-sample data.
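The abstract describes the MADL idea without reproducing its formula. A minimal sketch, assuming the per-observation loss takes the directional form −sign(R_i · R̂_i) · |R_i| averaged over N observations (with R_i the realized return and R̂_i the forecast), could look like:

```python
import numpy as np

def madl(actual_returns, predicted_returns):
    """Mean Absolute Directional Loss (sketch).

    Each observation contributes -|R_i| when the forecast direction matches
    the realized return (rewarding a correct buy/sell signal) and +|R_i|
    when it does not, so minimizing the mean pushes the model toward
    predicting the profitable direction on the largest moves.
    """
    r = np.asarray(actual_returns, dtype=float)
    r_hat = np.asarray(predicted_returns, dtype=float)
    return float(np.mean(-np.sign(r * r_hat) * np.abs(r)))

# Two correct directional calls: the loss is negative (a reward).
print(madl([0.02, -0.01], [0.5, -0.3]))   # approximately -0.015
```

Note that only the sign of the forecast matters, while the magnitude of the realized return sets the penalty; this is the property that distinguishes such a directional loss from MSE-style forecast error functions.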
Related papers
- A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning [74.80956524812714]
We tackle the general differentiable meta learning problem that is ubiquitous in modern deep learning.
These problems are often formalized as Bi-Level optimizations (BLO)
We introduce a novel perspective by turning a given BLO problem into a stochastic optimization, where the inner loss function becomes a smooth probability distribution, and the outer loss becomes an expected loss over the inner distribution.
arXiv Detail & Related papers (2024-10-14T12:10:06Z)
- Statistical arbitrage in multi-pair trading strategy based on graph clustering algorithms in US equities market [0.0]
The study seeks to develop an effective strategy within a novel statistical-arbitrage framework built on graph clustering algorithms.
It provides an integrated approach to optimal signal detection and risk management.
arXiv Detail & Related papers (2024-06-15T17:25:32Z)
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Direct Acquisition Optimization for Low-Budget Active Learning [15.355195433709717]
Active Learning (AL) has gained prominence in integrating data-intensive machine learning (ML) models into domains with limited labeled data.
In this paper, we first empirically observe the performance degradation of existing AL algorithms in the low-budget settings.
We then introduce Direct Acquisition Optimization (DAO), a novel AL algorithm that optimizes sample selection based on expected true loss reduction.
arXiv Detail & Related papers (2024-02-08T20:36:21Z)
- Cryptocurrency Portfolio Optimization by Neural Networks [81.20955733184398]
This paper proposes an effective algorithm based on neural networks to take advantage of these investment products.
A deep neural network, which outputs the allocation weight of each asset at a time interval, is trained to maximize the Sharpe ratio.
A novel loss term is proposed to regulate the network's bias towards a specific asset, thus enforcing the network to learn an allocation strategy that is close to a minimum variance strategy.
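The Sharpe-maximization objective described above can be written directly as a training loss. The function name and array shapes below are illustrative assumptions, and the paper's additional bias-regularization term is not reproduced:

```python
import numpy as np

def negative_sharpe_loss(weights, asset_returns, eps=1e-8):
    """Negative Sharpe ratio of the portfolio implied by per-interval weights.

    weights:       (T, n_assets) allocation produced by the network per interval
    asset_returns: (T, n_assets) realized asset returns per interval
    Minimizing this loss maximizes mean(portfolio) / std(portfolio).
    """
    w = np.asarray(weights, dtype=float)
    r = np.asarray(asset_returns, dtype=float)
    portfolio = np.sum(w * r, axis=1)  # portfolio return in each interval
    return float(-np.mean(portfolio) / (np.std(portfolio) + eps))
```

In practice the weights would come from the network's output layer and the loss would be backpropagated through an autodiff framework; NumPy is used here only to keep the sketch self-contained.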
arXiv Detail & Related papers (2023-10-02T12:33:28Z)
- Analysis of frequent trading effects of various machine learning models [8.975239844705415]
The proposed algorithm employs neural network predictions to generate trading signals and execute buy and sell operations.
By harnessing the power of neural networks, the algorithm enhances the accuracy and reliability of the trading strategy.
arXiv Detail & Related papers (2023-09-14T05:17:09Z)
- Optimizing Stock Option Forecasting with the Assembly of Machine Learning Models and Improved Trading Strategies [9.553857741758742]
This paper introduced key aspects of applying Machine Learning (ML) models, improved trading strategies, and the Quasi-Reversibility Method (QRM) to optimize stock option forecasting and trading results.
arXiv Detail & Related papers (2022-11-29T04:01:16Z)
- Logistic Q-Learning [87.00813469969167]
We propose a new reinforcement learning algorithm derived from a regularized linear-programming formulation of optimal control in MDPs.
The main feature of our algorithm is a convex loss function for policy evaluation that serves as a theoretically sound alternative to the widely used squared Bellman error.
arXiv Detail & Related papers (2020-10-21T17:14:31Z)
- Deep Stock Predictions [58.720142291102135]
We consider the design of a trading strategy that performs portfolio optimization using Long Short Term Memory (LSTM) neural networks.
We then customize the loss function used to train the LSTM to increase the profit earned.
We find that the LSTM model with the customized loss function improves the performance of the trading bot over a regression baseline such as ARIMA.
arXiv Detail & Related papers (2020-06-08T23:37:47Z)
- Learning Adaptive Loss for Robust Learning with Noisy Labels [59.06189240645958]
Robust loss functions are an important strategy for handling the noisy-label learning problem.
We propose a meta-learning method capable of robustly tuning their hyperparameters.
Four kinds of SOTA robust loss functions are integrated into the proposed method, and experiments verify its general availability and effectiveness.
arXiv Detail & Related papers (2020-02-16T00:53:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.