Dynamic Time Warping as a New Evaluation for Dst Forecast with Machine
Learning
- URL: http://arxiv.org/abs/2006.04667v1
- Date: Mon, 8 Jun 2020 15:14:13 GMT
- Title: Dynamic Time Warping as a New Evaluation for Dst Forecast with Machine
Learning
- Authors: Brecht Laperre, Jorge Amaya, Giovanni Lapenta
- Abstract summary: We train a neural network to make a forecast of the disturbance storm time index at origin time $t$ with a forecasting horizon of 1 up to 6 hours.
Inspection of the model's results with the correlation coefficient and RMSE indicated a performance comparable to the latest publications.
A new method is proposed to measure whether two time series are shifted in time with respect to each other.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Models based on neural networks and machine learning are seeing a rise in
popularity in space physics. In particular, the forecasting of geomagnetic
indices with neural network models is becoming a popular field of study. These
models are evaluated with metrics such as the root-mean-square error (RMSE) and
Pearson correlation coefficient. However, these classical metrics sometimes
fail to capture crucial behavior. To show where the classical metrics are
lacking, we trained a neural network, using a long short-term memory network,
to make a forecast of the disturbance storm time (Dst) index at origin time $t$ with
a forecasting horizon of 1 up to 6 hours, trained on OMNIWeb data. Inspection
of the model's results with the correlation coefficient and RMSE indicated a
performance comparable to the latest publications. However, visual inspection
showed that the predictions made by the neural network were behaving similarly
to the persistence model. In this work, a new method is proposed to measure
whether two time series are shifted in time with respect to each other, such as
the persistence model output versus the observation. The new measure, based on
Dynamic Time Warping, is capable of identifying results made by the
persistence model and shows promising results in confirming the visual
observations of the neural network's output. Finally, different methodologies
for training the neural network are explored in order to remove the persistence
behavior from the results.
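The abstract does not spell out the measure, but the failure mode it describes is easy to reproduce. Below is a minimal sketch in plain NumPy (not the authors' code): a synthetic Dst-like series, a persistence-style forecast obtained by delaying the observation by an assumed 3-hour horizon, the classical RMSE and Pearson correlation scores, and an illustrative DTW-based statistic (the mean index offset along the optimal warping path) that exposes the time shift. The toy signal, the horizon, and the shift statistic are assumptions made for illustration, not values or definitions taken from the paper.

```python
# Sketch: why RMSE and Pearson correlation can look good for a persistence-like
# forecast, and how a DTW alignment reveals the systematic time shift.
# All quantities below are illustrative assumptions, not from the paper.
import numpy as np


def dtw_path(x, y):
    """Classic O(n*m) DTW with absolute-difference cost; returns the optimal warping path."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from (n, m) to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]


rng = np.random.default_rng(0)
t = np.arange(500)
# Toy "Dst-like" hourly series (nT): quiet background plus a storm-time depression.
obs = -20 - 80 * np.exp(-0.5 * ((t - 250) / 30.0) ** 2) + rng.normal(0, 3, t.size)

horizon = 3  # assumed forecasting horizon in hours
# Persistence-style "forecast": the observation delayed by the horizon,
# i.e. the prediction for time t is simply the value observed at t - horizon.
pred = np.roll(obs, horizon)
obs_eval, pred_eval = obs[horizon:], pred[horizon:]

rmse = np.sqrt(np.mean((pred_eval - obs_eval) ** 2))
corr = np.corrcoef(pred_eval, obs_eval)[0, 1]

# Mean offset between aligned indices: close to +horizon for a persistence-like
# forecast, close to 0 for a forecast genuinely in phase with the observation.
path = dtw_path(pred_eval, obs_eval)
mean_shift = np.mean([i - j for i, j in path])

print(f"RMSE = {rmse:.2f} nT, Pearson r = {corr:.3f}, mean DTW shift = {mean_shift:.2f} h")
```

In this toy setup the persistence-like forecast still achieves a low RMSE and a high correlation, while the mean DTW shift sits near the horizon; a forecast that is actually in phase with the observation would give a shift near zero. This is the kind of distinction the abstract argues the classical metrics fail to make.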
Related papers
- Hybridization of Persistent Homology with Neural Networks for Time-Series Prediction: A Case Study in Wave Height [0.0]
We introduce a feature engineering method that enhances the predictive performance of neural network models.
Specifically, we leverage computational topology techniques to derive valuable topological features from input data.
For time-ahead predictions, the enhancements in $R^2$ score were significant for FNNs, RNNs, LSTM, and GRU models.
arXiv Detail & Related papers (2024-09-03T01:26:21Z)
- Enhanced Spatiotemporal Prediction Using Physical-guided And Frequency-enhanced Recurrent Neural Networks [17.91230192726962]
This paper proposes a physical-guided neural network to estimate the spatiotemporal dynamics.
We also propose an adaptive second-order Runge-Kutta method with physical constraints to model the physical states more precisely.
Our model outperforms state-of-the-art methods and performs best across the tested datasets, with a much smaller parameter count.
arXiv Detail & Related papers (2024-05-23T12:39:49Z)
- An LSTM-Based Predictive Monitoring Method for Data with Time-varying Variability [3.5246670856011035]
This paper explores the ability of the recurrent neural network structure to monitor processes.
It proposes a control chart based on long short-term memory (LSTM) prediction intervals for data with time-varying variability.
The method is also applied to time-series sensor data, which confirms that it is an effective technique for detecting abnormalities.
arXiv Detail & Related papers (2023-09-05T06:13:09Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
arXiv Detail & Related papers (2020-10-27T17:56:14Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Surprisal-Triggered Conditional Computation with Neural Networks [19.55737970532817]
Autoregressive neural network models have been used successfully for sequence generation, feature extraction, and hypothesis scoring.
This paper presents yet another use for these models: allocating more computation to more difficult inputs.
arXiv Detail & Related papers (2020-06-02T14:34:24Z)