Modeling Financial Time Series using LSTM with Trainable Initial Hidden States
- URL: http://arxiv.org/abs/2007.06848v1
- Date: Tue, 14 Jul 2020 06:36:10 GMT
- Title: Modeling Financial Time Series using LSTM with Trainable Initial Hidden States
- Authors: Jungsik Hwang
- Abstract summary: We introduce a novel approach to modeling financial time series using a deep learning model.
We use a Long Short-Term Memory (LSTM) network equipped with trainable initial hidden states.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extracting previously unknown patterns and information in time series is
central to many real-world applications. In this study, we introduce a novel
approach to modeling financial time series using a deep learning model. We use
a Long Short-Term Memory (LSTM) network equipped with trainable initial hidden
states. By learning to reconstruct time series, the proposed model can
represent high-dimensional time series data with its parameters. An experiment
with Korean stock market data showed that the model was able to capture the
relative similarity between a large number of stock prices in its latent space.
In addition, the model was able to predict future stock trends from the latent
space. The proposed method can help to identify relationships among many time
series, and it could be applied to financial applications such as optimizing
investment portfolios.
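The abstract does not include implementation details, but the core idea can be sketched as follows: give each series its own learnable initial hidden and cell state, and train the LSTM to reconstruct the series so that those initial states become per-series latent codes. The sketch below is a minimal illustration under assumptions not stated in the paper: PyTorch, teacher-forced one-step-ahead reconstruction, and arbitrary sizes (`num_series`, `hidden_dim`, the toy data); none of these names or hyperparameters come from the source.

```python
import torch
import torch.nn as nn

class LSTMWithTrainableInit(nn.Module):
    """Sketch of an LSTM whose initial hidden/cell states are learned per series,
    so each series is summarized by its own latent vector (one plausible reading
    of the paper's 'trainable initial hidden states')."""

    def __init__(self, num_series, input_dim=1, hidden_dim=64, num_layers=1):
        super().__init__()
        # One trainable (h0, c0) pair per time series; these act as latent codes.
        self.h0 = nn.Parameter(torch.zeros(num_layers, num_series, hidden_dim))
        self.c0 = nn.Parameter(torch.zeros(num_layers, num_series, hidden_dim))
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)
        self.readout = nn.Linear(hidden_dim, input_dim)

    def forward(self, x, series_idx):
        # x: (batch, seq_len, input_dim); series_idx: (batch,) long tensor
        h0 = self.h0[:, series_idx, :]  # (num_layers, batch, hidden_dim)
        c0 = self.c0[:, series_idx, :]
        out, _ = self.lstm(x, (h0, c0))
        return self.readout(out)

# Toy reconstruction training loop with placeholder data (illustrative only).
model = LSTMWithTrainableInit(num_series=500)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

prices = torch.randn(500, 120, 1)   # stand-in for normalized price windows
series_idx = torch.arange(500)

for epoch in range(10):
    optimizer.zero_grad()
    # Teacher-forced reconstruction: predict each step from the previous one.
    pred = model(prices[:, :-1, :], series_idx)
    loss = loss_fn(pred, prices[:, 1:, :])
    loss.backward()
    optimizer.step()

# The learned initial states can then serve as per-stock embeddings,
# e.g. for measuring similarity between stocks in the latent space.
embeddings = model.h0.detach().permute(1, 0, 2).reshape(500, -1)
similarity = torch.nn.functional.cosine_similarity(
    embeddings[0], embeddings[1], dim=0)
print(float(similarity))
```

Treating the learned initial states as embeddings mirrors the abstract's claim that relative similarity between many stock prices is captured in the latent space; the exact readout used for trend prediction is not specified in the abstract, so that part is omitted here.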
Related papers
- PLUTUS: A Well Pre-trained Large Unified Transformer can Unveil Financial Time Series Regularities [0.848210898747543]
Financial time series modeling is crucial for understanding and predicting market behaviors.
Traditional models struggle to capture complex patterns due to non-linearity, non-stationarity, and high noise levels.
Inspired by the success of large language models in NLP, we introduce PLUTUS, a Pre-trained Large Unified Transformer for financial time series.
PLUTUS is the first open-source, large-scale, pre-trained financial time series model with over one billion parameters.
arXiv Detail & Related papers (2024-08-19T15:59:46Z)
- Deep Time Series Models: A Comprehensive Survey and Benchmark [74.28364194333447]
Time series data is of great significance in real-world scenarios.
Recent years have witnessed remarkable breakthroughs in the time series community.
We release Time Series Library (TSLib) as a fair benchmark of deep time series models for diverse analysis tasks.
arXiv Detail & Related papers (2024-07-18T08:31:55Z)
- Stochastic Diffusion: A Diffusion Probabilistic Model for Stochastic Time Series Forecasting [8.232475807691255]
We propose a novel Diffusion (StochDiff) model which learns data-driven prior knowledge at each time step.
The learnt prior knowledge helps the model to capture complex temporal dynamics and the inherent uncertainty of the data.
arXiv Detail & Related papers (2024-06-05T00:13:38Z)
- PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from the perspective of partial differential equations [49.80959046861793]
We present PDETime, a novel LMTF model inspired by the principles of Neural PDE solvers.
Our experimentation across seven diverse temporal real-world LMTF datasets reveals that PDETime adapts effectively to the intrinsic nature of the data.
arXiv Detail & Related papers (2024-02-25T17:39:44Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM).
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z)
- Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting [54.04430089029033]
We present Lag-Llama, a general-purpose foundation model for time series forecasting based on a decoder-only transformer architecture.
Lag-Llama is pretrained on a large corpus of diverse time series data from several domains, and demonstrates strong zero-shot generalization capabilities.
When fine-tuned on relatively small fractions of such previously unseen datasets, Lag-Llama achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-10-12T12:29:32Z)
- Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [110.20279343734548]
Time series forecasting holds significant importance in many real-world dynamic systems.
We present Time-LLM, a reprogramming framework to repurpose large language models for time series forecasting.
Time-LLM is a powerful time series learner that outperforms state-of-the-art, specialized forecasting models.
arXiv Detail & Related papers (2023-10-03T01:31:25Z)
- Pre-training Enhanced Spatial-temporal Graph Neural Network for Multivariate Time Series Forecasting [13.441945545904504]
We propose a novel framework, in which STGNN is Enhanced by a scalable time series Pre-training model (STEP).
Specifically, we design a pre-training model to efficiently learn temporal patterns from very long-term history time series.
Our framework is capable of significantly enhancing downstream STGNNs, and our pre-training model aptly captures temporal patterns.
arXiv Detail & Related papers (2022-06-18T04:24:36Z)
- A Time Series Analysis-Based Stock Price Prediction Using Machine Learning and Deep Learning Models [0.0]
We present a robust and accurate framework for stock price prediction that consists of an agglomeration of statistical, machine learning, and deep learning models.
We use daily stock price data, collected at five-minute intervals, of a well-known company listed on the National Stock Exchange (NSE) of India.
We contend that this agglomerative approach to model building, which combines statistical, machine learning, and deep learning methods, can learn very effectively from the volatile and random movement patterns in stock price data.
arXiv Detail & Related papers (2020-04-17T19:41:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.