Deep learning the Hurst parameter of linear fractional processes and
assessing its reliability
- URL: http://arxiv.org/abs/2401.01789v1
- Date: Wed, 3 Jan 2024 15:42:45 GMT
- Title: Deep learning the Hurst parameter of linear fractional processes and
assessing its reliability
- Authors: Dániel Boros, Bálint Csanády, Iván Ivkovic, Lóránt Nagy,
András Lukács, László Márkus
- Abstract summary: The study focuses on three types of processes: fractional Brownian motion (fBm), fractional Ornstein-Uhlenbeck (fOU) process, and linear fractional stable motions (lfsm).
The work involves a fast generation of extensive datasets for fBm and fOU to train the LSTM network on a large volume of data in a feasible time.
It finds that LSTM outperforms the traditional statistical methods in the case of fBm and fOU processes; however, it has limited accuracy on lfsm processes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This research explores the reliability of deep learning, specifically Long
Short-Term Memory (LSTM) networks, for estimating the Hurst parameter in
fractional stochastic processes. The study focuses on three types of processes:
fractional Brownian motion (fBm), fractional Ornstein-Uhlenbeck (fOU) process,
and linear fractional stable motions (lfsm). The work involves a fast
generation of extensive datasets for fBm and fOU to train the LSTM network on a
large volume of data in a feasible time. The study analyses the accuracy of the
LSTM network's Hurst parameter estimation regarding various performance
measures like RMSE, MAE, MRE, and quantiles of the absolute and relative
errors. It finds that LSTM outperforms the traditional statistical methods in
the case of fBm and fOU processes; however, it has limited accuracy on lfsm
processes. The research also delves into the implications of training length
and evaluation sequence length on the LSTM's performance. The methodology is
applied by estimating the Hurst parameter in Li-ion battery degradation data
and obtaining confidence bounds for the estimation. The study concludes that
while deep learning methods show promise in parameter estimation of fractional
processes, their effectiveness is contingent on the process type and the
quality of training data.
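To make the pipeline concrete, the sketch below simulates fBm paths with a known Hurst parameter, trains an LSTM regressor on them, and reports the RMSE, MAE, and MRE named in the abstract. Everything here is an illustrative assumption rather than the authors' implementation: the `HurstLSTM` architecture, hyperparameters, path length, and H range are made up, and the O(n^3) Cholesky simulator stands in for the much faster generation method the paper relies on for its large training sets.

```python
# Illustrative sketch only, not the authors' code.
import numpy as np
import torch
import torch.nn as nn

def fgn_cholesky(n, H, rng):
    """Sample fractional Gaussian noise (fBm increments) of length n.

    Exact Toeplitz covariance via Cholesky: O(n^3), fine for short paths
    but not for the large-scale generation the paper performs."""
    k = np.arange(n)
    gamma = 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(cov + 1e-9 * np.eye(n))
    return L @ rng.standard_normal(n)

def make_batch(batch_size, n, rng):
    H = rng.uniform(0.05, 0.95, size=batch_size)          # regression targets
    paths = np.stack([np.cumsum(fgn_cholesky(n, h, rng)) for h in H])
    paths /= paths.std(axis=1, keepdims=True)             # H is scale-invariant
    x = torch.tensor(paths, dtype=torch.float32).unsqueeze(-1)
    return x, torch.tensor(H, dtype=torch.float32)

class HurstLSTM(nn.Module):
    """LSTM regressor mapping a path of shape (batch, n, 1) to an estimate of H."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1])).squeeze(-1)  # H in (0, 1)

rng = np.random.default_rng(0)
model = HurstLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                                   # toy training budget
    x, h = make_batch(32, 200, rng)
    loss = nn.functional.mse_loss(model(x), h)
    opt.zero_grad(); loss.backward(); opt.step()

# Error measures from the abstract, on a fresh validation batch.
x, h = make_batch(256, 200, rng)
with torch.no_grad():
    err = (model(x) - h).numpy()
print(f"RMSE={np.sqrt((err**2).mean()):.3f}  "
      f"MAE={np.abs(err).mean():.3f}  MRE={(np.abs(err) / h.numpy()).mean():.3f}")
```

A Gaussian recipe like this covers fBm (and, with a mean-reverting drift, fOU) but not the heavy-tailed, alpha-stable lfsm, which is consistent with the abstract's observation that lfsm is the harder case.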
Related papers
- Scaling Laws for Predicting Downstream Performance in LLMs [75.28559015477137]
This work focuses on the pre-training loss as a more efficient metric for performance estimation.
We extend the power law analytical function to predict domain-specific pre-training loss based on FLOPs across data sources.
We employ a two-layer neural network to model the non-linear relationship between multiple domain-specific losses and downstream performance.
arXiv Detail & Related papers (2024-10-11T04:57:48Z)
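A compact reading of the two-stage recipe in the entry above, with made-up numbers: fit a saturating power law in FLOPs to each data source's pre-training loss, extrapolate to the target compute, then feed the predicted per-domain losses to a small two-layer network. The functional form, sizes, and all values are assumptions for illustration, not the paper's specification.

```python
# Hypothetical sketch of the two-stage recipe; all numbers are invented.
import numpy as np
from scipy.optimize import curve_fit

def power_law(flops, a, b, c):
    # L(F) = a * F^(-b) + c : saturating power law in training compute
    return a * flops ** (-b) + c

# Stage 1: fit one curve per data source from small-scale runs, then
# extrapolate each domain loss to the target compute budget.
flops = np.array([1e18, 1e19, 1e20, 1e21])
domain_losses = {"web":  np.array([3.1, 2.7, 2.4, 2.2]),
                 "code": np.array([2.5, 2.1, 1.9, 1.8])}   # made-up losses
target = 1e22
fits = [curve_fit(power_law, flops, l, p0=(50.0, 0.1, 1.0), maxfev=10000)[0]
        for l in domain_losses.values()]
pred_losses = np.array([power_law(target, *p) for p in fits])

# Stage 2: a two-layer network maps the vector of domain losses to a
# downstream metric; in practice it would be trained on (losses,
# performance) pairs from existing models. Untrained weights: shape demo.
rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.normal(size=(8, 2)), np.zeros(8)
W2, b2 = 0.1 * rng.normal(size=8), 0.0
downstream_estimate = W2 @ np.tanh(W1 @ pred_losses + b1) + b2
print(pred_losses, downstream_estimate)
```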
- Parameter Estimation of Long Memory Stochastic Processes with Deep Neural Networks [0.0]
We present a purely deep neural network-based approach for estimating long memory parameters of time series models.
Parameters, such as the Hurst exponent, are critical in characterizing the long-range dependence, roughness, and self-similarity of processes.
arXiv Detail & Related papers (2024-10-03T03:14:58Z)
- Gradient-Mask Tuning Elevates the Upper Limits of LLM Performance [51.36243421001282]
Gradient-Mask Tuning (GMT) is a method that selectively updates parameters during training based on their gradient information.
Our empirical results across various tasks demonstrate that GMT not only outperforms traditional fine-tuning methods but also elevates the upper limits of LLM performance.
arXiv Detail & Related papers (2024-06-21T17:42:52Z)
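One plausible reading of "selectively updates parameters based on their gradient information" in the GMT entry above is to zero out small-magnitude gradients before each optimizer step, as in the hypothetical sketch below; the paper's actual masking criterion may well differ.

```python
# Hypothetical gradient-masking step: keep only the largest-magnitude
# gradients before the update. The per-tensor quantile rule is an
# assumption, not necessarily the GMT paper's exact criterion.
import torch

def masked_step(model, optimizer, loss, keep_ratio=0.2):
    optimizer.zero_grad()
    loss.backward()
    for p in model.parameters():
        if p.grad is None:
            continue
        g = p.grad.abs().flatten()
        k = max(1, int(keep_ratio * g.numel()))
        thresh = torch.topk(g, k).values.min()            # magnitude cutoff
        p.grad.mul_((p.grad.abs() >= thresh).float())     # mask small grads
    optimizer.step()

# Usage with any model/optimizer:
model = torch.nn.Linear(16, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 16), torch.randn(8, 4)
masked_step(model, opt, torch.nn.functional.mse_loss(model(x), y))
```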
- Fast Cerebral Blood Flow Analysis via Extreme Learning Machine [4.373558495838564]
We introduce a rapid and precise analytical approach for analyzing cerebral blood flow (CBF) using Diffuse Correlation Spectroscopy (DCS).
We assess existing algorithms using synthetic datasets for both semi-infinite and multi-layer models.
Results demonstrate that ELM consistently achieves higher fidelity across various noise levels and optical parameters, showcasing robust generalization ability and outperforming iterative fitting algorithms.
arXiv Detail & Related papers (2024-01-10T23:01:35Z)
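The speed claim in the ELM entry above follows from the structure of extreme learning machines: a fixed random hidden layer with a closed-form least-squares readout, so fitting needs no iterative optimization. A generic sketch, where the sizes, activation, and ridge term are illustrative choices rather than the paper's:

```python
# Minimal extreme learning machine: random fixed hidden layer, output
# weights solved in closed form. Illustrative only.
import numpy as np

class ELM:
    def __init__(self, n_in, n_hidden=200, ridge=1e-6, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))  # fixed, never trained
        self.b = rng.normal(size=n_hidden)
        self.ridge = ridge

    def _features(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        H = self._features(X)
        # Ridge-regularized least squares: beta = (H'H + lam*I)^-1 H'y
        A = H.T @ H + self.ridge * np.eye(H.shape[1])
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return self._features(X) @ self.beta

# Toy regression: recover a smooth function from noisy samples.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 3))
y = np.sin(X @ np.array([1.0, 2.0, -1.0])) + 0.01 * rng.standard_normal(500)
model = ELM(n_in=3).fit(X, y)
print(np.abs(model.predict(X) - y).mean())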
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on the fly.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Can recurrent neural networks learn process model structure? [0.2580765958706854]
We introduce an evaluation framework that combines variant-based resampling and custom metrics for fitness, precision and generalization.
We confirm that LSTMs can struggle to learn process model structure, even with simplistic process data.
We also find that decreasing the amount of information seen by the LSTM during training causes a sharp drop in generalization and precision scores.
arXiv Detail & Related papers (2022-12-13T08:40:01Z)
- Embed and Emulate: Learning to estimate parameters of dynamical systems
with uncertainty quantification [11.353411236854582]
This paper explores learning emulators for parameter estimation with uncertainty estimation of high-dimensional dynamical systems.
Our task is to accurately estimate a range of likely values of the underlying parameters.
On a coupled 396-dimensional multiscale Lorenz 96 system, our method significantly outperforms a typical parameter estimation method.
arXiv Detail & Related papers (2022-11-03T01:59:20Z)
- MARS: Meta-Learning as Score Matching in the Function Space [79.73213540203389]
We present a novel approach to extracting inductive biases from a set of related datasets.
We use functional Bayesian neural network inference, which views the prior as a process and performs inference in the function space.
Our approach can seamlessly acquire and represent complex prior knowledge by meta-learning the score function of the data-generating process.
arXiv Detail & Related papers (2022-10-24T15:14:26Z)
- Self-learning locally-optimal hypertuning using maximum entropy, and
comparison of machine learning approaches for estimating fatigue life in
composite materials [0.0]
We develop a nearest-neighbors-like ML algorithm based on the principle of maximum entropy to predict fatigue damage.
The predictions achieve a good level of accuracy, similar to other ML algorithms.
arXiv Detail & Related papers (2022-10-19T12:20:07Z)
- Learning representations with end-to-end models for improved remaining
useful life prognostics [64.80885001058572]
The Remaining Useful Life (RUL) of equipment is defined as the duration between the current time and its failure.
We propose an end-to-end deep learning model based on multi-layer perceptron and long short-term memory (LSTM) layers to predict the RUL.
We discuss how the proposed end-to-end model achieves such good results and compare it to other deep learning and state-of-the-art methods.
arXiv Detail & Related papers (2021-04-11T16:45:18Z)
- Localized Debiased Machine Learning: Efficient Inference on Quantile
Treatment Effects and Beyond [69.83813153444115]
We consider an efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference.
Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances.
We propose localized debiased machine learning (LDML), which avoids this burdensome step.
arXiv Detail & Related papers (2019-12-30T14:42:52Z)