Remaining Useful Life Estimation Under Uncertainty with Causal GraphNets
- URL: http://arxiv.org/abs/2011.11740v1
- Date: Mon, 23 Nov 2020 21:28:03 GMT
- Title: Remaining Useful Life Estimation Under Uncertainty with Causal GraphNets
- Authors: Charilaos Mylonas and Eleni Chatzi
- Abstract summary: A novel approach for the construction and training of time series models is presented.
The proposed method is appropriate for constructing predictive models for non-stationary time series.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, a novel approach for the construction and training of time
series models is presented that deals with the problem of learning on large
time series with non-equispaced observations, which at the same time may
possess features of interest that span multiple scales. The proposed method is
appropriate for constructing predictive models for non-stationary stochastic
time series. The efficacy of the method is demonstrated on a simulated
stochastic degradation dataset and on a real-world accelerated life testing
dataset for ball-bearings. The proposed method, which is based on GraphNets,
implicitly learns a model that describes the evolution of the system at the
level of a state-vector rather than of a raw observation. The proposed approach
is compared to a recurrent network with a temporal convolutional feature
extractor head (RNN-tCNN) which forms a known viable alternative for the
problem context considered. Finally, by taking advantage of recent advances in
the computation of reparametrization gradients for learning probability
distributions, a simple yet effective technique for representing prediction
uncertainty as a Gamma distribution over remaining useful life predictions is
employed.
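The Gamma uncertainty representation mentioned above can be sketched with the negative log-likelihood such an output head would minimize. This is a minimal illustration only: the function name and example numbers are invented, and the paper's reparametrization-gradient training is not reproduced here.

```python
import math

def gamma_nll(rul, alpha, beta):
    """Negative log-likelihood of an observed remaining useful life `rul`
    under a Gamma(alpha, beta) distribution (shape/rate parameterization)."""
    return -(alpha * math.log(beta)
             + (alpha - 1.0) * math.log(rul)
             - beta * rul
             - math.lgamma(alpha))

# A network head would emit (alpha, beta) per sample, e.g. through a
# softplus to keep both positive; the loss is averaged over the batch.
loss = gamma_nll(rul=120.0, alpha=4.0, beta=0.05)
```

With shape 4 and rate 0.05 the distribution has mean 80, so an observed life of 120 sits in the upper tail and incurs a moderate loss.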
Related papers
- Deep Ensembles Meets Quantile Regression: Uncertainty-aware Imputation
for Time Series [49.992908221544624]
Time series data often exhibit numerous missing values; estimating these missing values is the time series imputation task.
Previous deep learning methods have been shown to be effective for time series imputation.
We propose a non-generative time series imputation method that produces accurate imputations with inherent uncertainty.
arXiv Detail & Related papers (2023-12-03T05:52:30Z) - Non-parametric Probabilistic Time Series Forecasting via Innovations
Representation [29.255644836978956]
Probabilistic time series forecasting predicts the conditional probability distributions of the time series at a future time given past realizations.
Existing approaches are primarily based on parametric or semi-parametric time-series models that are restrictive, difficult to validate, and challenging to adapt to varying conditions.
This paper proposes a nonparametric method based on the classic notion of innovations pioneered by Norbert Wiener and Gopinath Kallianpur.
arXiv Detail & Related papers (2023-06-05T02:24:59Z) - Fast Estimation of Bayesian State Space Models Using Amortized
Simulation-Based Inference [0.0]
This paper presents a fast algorithm for estimating hidden states of Bayesian state space models.
After pretraining, finding the posterior distribution for any dataset takes from hundredths to tenths of a second.
arXiv Detail & Related papers (2022-10-13T16:37:05Z) - TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier, based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
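As a rough illustration of the Nadaraya-Watson idea behind this approach, the conditional label distribution can be estimated as a kernel-weighted average of training labels. This is a 1-D toy sketch, not the paper's implementation; all names and numbers are illustrative.

```python
import math

def nw_class_probs(x, train_x, train_y, n_classes, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x): a kernel-weighted average of
    one-hot training labels, here using a Gaussian kernel in one dimension."""
    weights = [math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in train_x]
    total = sum(weights)
    probs = [0.0] * n_classes
    for w, y in zip(weights, train_y):
        probs[y] += w / total
    return probs

# Nearby points dominate: a query close to class-0 examples yields high p(y=0).
p = nw_class_probs(0.1, train_x=[0.0, 0.2, 3.0], train_y=[0, 0, 1], n_classes=2)
```

Far from all training points the weights become nearly uniform, which is one way such an estimate can signal high uncertainty.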
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Probabilistic Forecasting with Generative Networks via Scoring Rule
Minimization [5.5643498845134545]
We use generative neural networks to parametrize distributions on high-dimensional spaces by transforming draws from a latent variable.
We train generative networks to minimize a predictive-sequential (or prequential) scoring rule on a recorded temporal sequence of the phenomenon of interest.
Our method outperforms state-of-the-art adversarial approaches, especially in probabilistic calibration.
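The scoring-rule idea can be illustrated with a sample-based estimate of the Continuous Ranked Probability Score, a standard proper scoring rule for comparing a sample-represented forecast with an observation. This is a toy sketch, not the paper's prequential training objective.

```python
def crps_samples(samples, y):
    """Sample-based CRPS estimate: E|X - y| - 0.5 * E|X - X'|,
    with X, X' drawn independently from the forecast distribution."""
    n = len(samples)
    term1 = sum(abs(s - y) for s in samples) / n
    term2 = sum(abs(a - b) for a in samples for b in samples) / (n * n)
    return term1 - 0.5 * term2

# A sharp forecast centred on the observation scores lower (better)
# than an equally sharp forecast centred elsewhere.
good = crps_samples([1.0, 1.1, 0.9], y=1.0)
bad = crps_samples([3.0, 3.1, 2.9], y=1.0)
```

Minimizing such a score over generator samples rewards both calibration and sharpness, which is why scoring rules are an attractive alternative to adversarial training.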
arXiv Detail & Related papers (2021-12-15T15:51:12Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Probabilistic Time Series Forecasting with Implicit Quantile Networks [0.7249731529275341]
We combine an autoregressive recurrent neural network to model temporal dynamics with Implicit Quantile Networks to learn a large class of distributions over a time-series target.
Our approach is favorable in terms of point-wise prediction accuracy as well as on estimating the underlying temporal distribution.
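Implicit Quantile Networks are trained against the quantile (pinball) loss, which can be sketched in a few lines. The values here are illustrative and the paper's recurrent architecture is not reproduced.

```python
def pinball_loss(y, pred, tau):
    """Quantile (pinball) loss at level tau: under-prediction is penalized
    by tau, over-prediction by (1 - tau), so the minimizer is the
    tau-quantile of y."""
    diff = y - pred
    return tau * diff if diff >= 0 else (tau - 1.0) * diff

# At tau = 0.9 the loss pushes predictions up toward the 90th percentile:
under = pinball_loss(10.0, 8.0, 0.9)   # under-prediction, heavily penalized
over = pinball_loss(10.0, 12.0, 0.9)   # over-prediction, lightly penalized
```

Sampling tau uniformly during training, as implicit quantile methods do, lets a single network represent the whole predictive distribution.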
arXiv Detail & Related papers (2021-07-08T10:37:24Z) - Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train inference models on inputs containing missing values, without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z) - RNN with Particle Flow for Probabilistic Spatio-temporal Forecasting [30.277213545837924]
Many classical statistical models often fall short in handling the complexity and high non-linearity present in time-series data.
In this work, we consider the time-series data as a random realization from a nonlinear state-space model.
We use particle flow as the tool for approximating the posterior distribution of the states, as it is shown to be highly effective in complex, high-dimensional settings.
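The paper uses particle flow rather than the classic resampling step; as a simpler stand-in, one bootstrap-particle-filter step illustrates the same idea of approximating the posterior over states with weighted particles. All names and numbers below are illustrative.

```python
import math
import random

def particle_filter_step(particles, obs, transition, obs_sigma):
    """One bootstrap-particle-filter step: propagate particles through the
    transition model, weight each by a Gaussian likelihood of the
    observation, then resample in proportion to the weights."""
    moved = [transition(p) for p in particles]
    weights = [math.exp(-0.5 * ((obs - m) / obs_sigma) ** 2) for m in moved]
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.gauss(0.0, 1.0) for _ in range(500)]
# Noisy random walk observed at 2.0: posterior mass shifts toward 2.0.
post = particle_filter_step(particles, obs=2.0,
                            transition=lambda x: x + random.gauss(0.0, 0.5),
                            obs_sigma=0.3)
```

Particle flow replaces this resampling with a deterministic transport of particles toward the posterior, which scales better in high dimensions.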
arXiv Detail & Related papers (2021-06-10T21:49:23Z) - Learned Factor Graphs for Inference from Stationary Time Sequences [107.63351413549992]
We propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences.
Neural networks are developed to separately learn specific components of a factor graph describing the distribution of the time sequence.
We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data.
arXiv Detail & Related papers (2020-06-05T07:06:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.