Accelerating Machine Learning Training Time for Limit Order Book
Prediction
- URL: http://arxiv.org/abs/2206.09041v1
- Date: Fri, 17 Jun 2022 22:52:56 GMT
- Title: Accelerating Machine Learning Training Time for Limit Order Book
Prediction
- Authors: Mark Joseph Bennett
- Abstract summary: Financial firms are interested in simulation to discover whether a given algorithm involving financial machine learning will operate profitably.
For this task, hardware acceleration is expected to speed up the time required for the financial machine learning researcher to obtain the results.
A published Limit Order Book algorithm for predicting stock market direction is our subject, and the machine learning training process can be time-intensive.
Deploying data-center NVIDIA GPUs in the studied configuration leads to significantly faster training time, allowing more efficient and extensive model development.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Financial firms are interested in simulation to discover whether a given
algorithm involving financial machine learning will operate profitably. While
many versions of this type of algorithm have been published recently by
researchers, the focus herein is on a particular machine learning training
project due to the explainable nature and the availability of high frequency
market data. For this task, hardware acceleration is expected to speed up the
time required for the financial machine learning researcher to obtain the
results. As the majority of the time can be spent in classifier training, there
is interest in faster training steps. A published Limit Order Book algorithm
for predicting stock market direction is our subject, and the machine learning
training process can be time-intensive especially when considering the
iterative nature of model development. To remedy this, we deploy Graphics
Processing Units (GPUs) produced by NVIDIA available in the data center, where
the computer architecture is geared to parallel high-speed arithmetic
operations. In the studied configuration, this leads to significantly faster
training time allowing more efficient and extensive model development.
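The abstract notes that most of the wall-clock time is spent in classifier training, and that GPUs help because the workload reduces to parallel high-speed arithmetic. The paper does not specify its classifier here, so the following is only a minimal sketch: a logistic-regression stand-in trained on synthetic order-book-style features (the data, features, and model are all assumptions, not the paper's). It illustrates why the training loop is GPU-friendly: each step is dominated by dense matrix products, exactly the operations that frameworks offload to the GPU.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in features (hypothetical): e.g. bid/ask volume
# imbalance and spread across several order-book levels.
n, d = 2000, 10
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
# Binary direction label from a noisy linear signal (synthetic).
y = (X @ true_w + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Mini-batch gradient descent for logistic regression. Each step is
# dominated by dense matrix arithmetic (X[b] @ w and X[b].T @ ...),
# which is the kind of work a GPU parallelizes across thousands of cores.
w = np.zeros(d)
lr, batch = 0.5, 256
for epoch in range(50):
    idx = rng.permutation(n)
    for start in range(0, n, batch):
        b = idx[start:start + batch]
        grad = X[b].T @ (sigmoid(X[b] @ w) - y[b]) / len(b)
        w -= lr * grad

acc = np.mean((sigmoid(X @ w) > 0.5) == y)
print(f"train accuracy: {acc:.3f}")
```

Swapping the NumPy arrays for GPU tensors (e.g. in PyTorch or JAX) leaves this loop structurally unchanged; only the device executing the matrix products differs, which is where the reported speedup comes from.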
Related papers
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z) - Using Machine Learning To Identify Software Weaknesses From Software
Requirement Specifications [49.1574468325115]
This research focuses on finding an efficient machine learning algorithm to identify software weaknesses from requirement specifications.
Keywords extracted using latent semantic analysis help map the CWE categories to PROMISE_exp. Naive Bayes, support vector machine (SVM), decision trees, neural network, and convolutional neural network (CNN) algorithms were tested.
arXiv Detail & Related papers (2023-08-10T13:19:10Z) - Machine Learning aided Computer Architecture Design for CNN Inferencing
Systems [0.0]
We develop a technique for forecasting the power and performance of CNNs during inference, with a MAPE of 5.03% and 5.94%, respectively.
Our approach empowers computer architects to estimate power and performance in the early stages of development, reducing the necessity for numerous prototypes.
arXiv Detail & Related papers (2023-08-10T06:17:46Z) - A Survey From Distributed Machine Learning to Distributed Deep Learning [0.356008609689971]
Distributed machine learning has been proposed, which involves distributing the data and algorithm across several machines.
We divide these algorithms into classification and clustering (traditional machine learning), deep learning, and deep reinforcement learning groups.
Based on the investigation of the mentioned algorithms, we highlight the limitations that should be addressed in future research.
arXiv Detail & Related papers (2023-07-11T13:06:42Z) - Towards Optimal VPU Compiler Cost Modeling by using Neural Networks to
Infer Hardware Performances [58.720142291102135]
'VPUNN' is a neural network-based cost model trained on low-level task profiling.
It consistently outperforms the state-of-the-art cost modeling in Intel's line of VPU processors.
arXiv Detail & Related papers (2022-05-09T22:48:39Z) - Collaborative Learning over Wireless Networks: An Introductory Overview [84.09366153693361]
We will mainly focus on collaborative training across wireless devices.
Many distributed optimization algorithms have been developed over the last decades.
They provide data locality; that is, a joint model can be trained collaboratively while the data available at each participating device remains local.
arXiv Detail & Related papers (2021-12-07T20:15:39Z) - Automated Machine Learning Techniques for Data Streams [91.3755431537592]
This paper surveys the state-of-the-art open-source AutoML tools, applies them to data collected from streams, and measures how their performance changes over time.
The results show that off-the-shelf AutoML tools can provide satisfactory results but in the presence of concept drift, detection or adaptation techniques have to be applied to maintain the predictive accuracy over time.
arXiv Detail & Related papers (2021-06-14T11:42:46Z) - Multi-Horizon Forecasting for Limit Order Books: Novel Deep Learning
Approaches and Hardware Acceleration using Intelligent Processing Units [3.04585143845864]
We design multi-horizon forecasting models for limit order book (LOB) data by using deep learning techniques.
Our methods achieve comparable performance to state-of-the-art algorithms at short prediction horizons.
arXiv Detail & Related papers (2021-05-21T16:06:41Z) - Scheduling Real-time Deep Learning Services as Imprecise Computations [11.611969843191433]
The paper presents an efficient real-time scheduling algorithm for intelligent real-time edge services.
These services perform machine intelligence tasks, such as voice recognition, LIDAR processing, or machine vision.
We show that deep neural networks can be cast as imprecise computations, each with a mandatory part and several optional parts.
arXiv Detail & Related papers (2020-11-02T16:43:04Z) - Machine Learning Algorithms for Financial Asset Price Forecasting [0.0]
This study directly compares and contrasts state-of-the-art implementations of modern Machine Learning algorithms on high performance computing infrastructures.
The implemented Machine Learning models - trained on time series data for an entire stock universe - significantly outperform the CAPM on out-of-sample (OOS) test data.
arXiv Detail & Related papers (2020-03-31T18:14:18Z) - AutoML-Zero: Evolving Machine Learning Algorithms From Scratch [76.83052807776276]
We show that it is possible to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.
We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.
We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction in the field.
arXiv Detail & Related papers (2020-03-06T19:00:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.