Rail Crack Propagation Forecasting Using Multi-horizons RNNs
- URL: http://arxiv.org/abs/2309.01569v1
- Date: Mon, 4 Sep 2023 12:44:21 GMT
- Title: Rail Crack Propagation Forecasting Using Multi-horizons RNNs
- Authors: Sara Yasmine Ouerk, Olivier Vo Van, Mouadh Yagoubi
- Abstract summary: The prediction of rail crack length propagation plays a crucial role in the maintenance and safety assessment of materials and structures.
Traditional methods rely on physical models and empirical equations such as Paris law.
In recent years, machine learning techniques, particularly Recurrent Neural Networks (RNNs), have emerged as promising methods for time series forecasting.
- Score: 0.46040036610482665
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The prediction of rail crack length propagation plays a crucial role in the
maintenance and safety assessment of materials and structures. Traditional
methods rely on physical models and empirical equations such as Paris law,
which often have limitations in capturing the complex nature of crack growth.
In recent years, machine learning techniques, particularly Recurrent Neural
Networks (RNNs), have emerged as promising methods for time series forecasting.
They make it possible to model time series data and to incorporate exogenous
variables into the model. The proposed approach involves collecting real data on the
French rail network that includes historical crack length measurements, along
with relevant exogenous factors that may influence crack growth. First, a
pre-processing phase was performed to prepare a consistent data set for
learning. Then, a suitable Bayesian multi-horizons recurrent architecture was
designed to model the crack propagation phenomenon. The results show that
the Multi-horizons model outperforms state-of-the-art models such as LSTM and
GRU.
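For reference, the Paris law cited above relates the crack growth rate to loading: da/dN = C (ΔK)^m, where a is the crack length, N the number of load cycles, ΔK the stress intensity factor range, and C and m material constants. The sketch below illustrates the general idea of a direct multi-horizon recurrent forecaster; the class name, layer sizes, and the use of Monte Carlo dropout as a stand-in for the paper's Bayesian treatment are assumptions for illustration, not the authors' exact architecture.

```python
# A minimal sketch (PyTorch) of a direct multi-horizon RNN forecaster.
# All hyperparameters are assumptions; MC dropout approximates the
# Bayesian uncertainty treatment described in the abstract.
import torch
import torch.nn as nn

class MultiHorizonRNN(nn.Module):
    def __init__(self, n_exog: int, hidden: int = 64,
                 horizons: int = 4, p_drop: float = 0.1):
        super().__init__()
        # Input at each step: past crack length + exogenous factors.
        self.encoder = nn.GRU(input_size=1 + n_exog,
                              hidden_size=hidden, batch_first=True)
        self.dropout = nn.Dropout(p_drop)  # kept active for MC sampling
        # One head emits all horizons at once (direct multi-horizon strategy).
        self.head = nn.Linear(hidden, horizons)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 1 + n_exog) -- crack-length history plus exogenous inputs
        _, h = self.encoder(x)                  # h: (1, batch, hidden)
        return self.head(self.dropout(h[-1]))   # (batch, horizons)

# Predictive uncertainty via MC dropout: several stochastic forward passes.
model = MultiHorizonRNN(n_exog=3)
model.train()  # keep dropout active at prediction time
x = torch.randn(8, 24, 4)  # 8 series, 24 past steps, 1 length + 3 exogenous
samples = torch.stack([model(x) for _ in range(50)])
mean, std = samples.mean(0), samples.std(0)  # point forecast and spread
```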
Related papers
- Constrained Recurrent Bayesian Forecasting for Crack Propagation [0.40964539027092917]
This paper introduces a robust Bayesian multi-horizon approach for predicting the temporal evolution of crack lengths on rails.
To enhance the model's reliability for railroad maintenance, specific constraints are incorporated; a sketch of one such constraint appears after this list.
The findings reveal a trade-off between prediction accuracy and constraint compliance, highlighting the nuanced decision-making process in model training.
arXiv Detail & Related papers (2024-10-18T13:15:53Z)
- TransformerLSR: Attentive Joint Model of Longitudinal Data, Survival, and Recurrent Events with Concurrent Latent Structure [35.54001561725239]
We develop TransformerLSR, a flexible transformer-based deep modeling and inference framework that jointly models all three components.
We demonstrate the effectiveness and necessity of TransformerLSR through simulation studies and analysis of a real-world medical dataset on patients after kidney transplantation.
arXiv Detail & Related papers (2024-04-04T20:51:37Z)
- Generative Pretrained Hierarchical Transformer for Time Series Forecasting [3.739587363053192]
We propose a novel generative pretrained hierarchical transformer architecture for forecasting, named GPHT.
We conduct extensive experiments on eight datasets against mainstream self-supervised pretraining models and supervised models.
The results demonstrate that GPHT surpasses the baseline models across various fine-tuning and zero/few-shot learning settings in the traditional long-term forecasting task.
arXiv Detail & Related papers (2024-02-26T11:54:54Z)
- Attractor Memory for Long-Term Time Series Forecasting: A Chaos Perspective [63.60312929416228]
Attraos incorporates chaos theory into long-term time series forecasting.
We show that Attraos outperforms various LTSF methods on mainstream datasets and chaotic datasets with only one-twelfth of the parameters compared to PatchTST.
arXiv Detail & Related papers (2024-02-18T05:35:01Z)
- On the Resurgence of Recurrent Models for Long Sequences -- Survey and Research Opportunities in the Transformer Era [59.279784235147254]
This survey aims to provide an overview of these trends framed under the unifying umbrella of Recurrence.
It emphasizes novel research opportunities that become prominent when abandoning the idea of processing long sequences.
arXiv Detail & Related papers (2024-02-12T23:55:55Z)
- The Capacity and Robustness Trade-off: Revisiting the Channel Independent Strategy for Multivariate Time Series Forecasting [50.48888534815361]
We show that models trained with the Channel Independent (CI) strategy outperform those trained with the Channel Dependent (CD) strategy.
Our results show that the CD approach has higher capacity but often lacks the robustness needed to accurately predict distributionally drifted time series.
We propose a modified CD method called Predict Residuals with Regularization (PRReg) that can surpass the CI strategy.
arXiv Detail & Related papers (2023-04-11T13:15:33Z)
- Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- Contextually Enhanced ES-dRNN with Dynamic Attention for Short-Term Load Forecasting [1.1602089225841632]
The proposed model is composed of two simultaneously trained tracks: the context track and the main track.
The RNN architecture consists of multiple recurrent layers stacked with hierarchical dilations and equipped with recently proposed attentive recurrent cells.
The model produces both point forecasts and predictive intervals.
arXiv Detail & Related papers (2022-12-18T07:42:48Z)
- Global Models for Time Series Forecasting: A Simulation Study [2.580765958706854]
We simulate time series from simple data generating processes (DGP), such as Auto Regressive (AR) and Seasonal AR, to complex DGPs, such as Chaotic Logistic Map, Self-Exciting Threshold Auto-Regressive, and Mackey-Glass equations.
The lengths and the number of series in the dataset are varied in different scenarios.
We perform experiments on these datasets using global forecasting models, including Recurrent Neural Networks (RNN), Feed-Forward Neural Networks, Pooled Regression (PR) models, and Light Gradient Boosting Models (LGBM).
arXiv Detail & Related papers (2020-12-23T04:45:52Z)
- Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
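As referenced in the first related paper above, the sketch below shows one plausible way a maintenance-motivated constraint could be folded into training: penalizing forecasts in which the predicted crack length decreases between horizons. The penalty form and its weight lam are assumptions for illustration, not necessarily the authors' exact constraint; varying lam exposes the accuracy/compliance trade-off the paper highlights.

```python
# A minimal sketch of a physics-motivated constraint penalty added to the
# forecasting loss. The monotonicity penalty is an assumed example, not the
# published method.
import torch

def constrained_loss(pred: torch.Tensor, target: torch.Tensor,
                     lam: float = 1.0) -> torch.Tensor:
    # pred, target: (batch, horizons) multi-horizon crack-length forecasts
    mse = torch.mean((pred - target) ** 2)
    # Penalize only violations: any decrease between consecutive horizons,
    # since a crack should not shrink over time.
    decreases = torch.relu(pred[:, :-1] - pred[:, 1:])
    return mse + lam * decreases.mean()
```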
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.