An Anomaly Detection Method for Satellites Using Monte Carlo Dropout
- URL: http://arxiv.org/abs/2211.14938v1
- Date: Sun, 27 Nov 2022 21:12:26 GMT
- Title: An Anomaly Detection Method for Satellites Using Monte Carlo Dropout
- Authors: Mohammad Amin Maleki Sadr, Yeying Zhu, Peng Hu
- Abstract summary: We present a tractable approximation for BNN based on the Monte Carlo (MC) dropout method for capturing the uncertainty in the satellite telemetry time series.
Our proposed time series AD approach outperforms existing methods in terms of both prediction accuracy and AD performance.
- Score: 7.848121055546167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, there has been a significant amount of interest in satellite
telemetry anomaly detection (AD) using neural networks (NN). For AD purposes,
the current approaches focus on either forecasting or reconstruction of the
time series, and they cannot measure the level of reliability or the
probability of correct detection. Although the Bayesian neural network
(BNN)-based approaches are well known for time series uncertainty estimation,
they are computationally intractable. In this paper, we present a tractable
approximation for BNN based on the Monte Carlo (MC) dropout method for
capturing the uncertainty in the satellite telemetry time series, without
sacrificing accuracy. For time series forecasting, we employ an NN, which
consists of several Long Short-Term Memory (LSTM) layers followed by various
dense layers. We apply MC dropout inside each LSTM layer and before the
dense layers for uncertainty estimation. With the proposed uncertainty region
and by utilizing a post-processing filter, we can effectively capture the
anomaly points. Numerical results show that our proposed time series AD
approach outperforms existing methods in terms of both prediction accuracy
and AD performance.
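For concreteness, the mechanism described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation: dropout is kept active at inference time (the essence of MC dropout), several stochastic forecasts are drawn from a small stacked-LSTM forecaster, and observations falling outside the resulting uncertainty band are flagged. The layer sizes, number of MC samples, and z-score threshold (a crude stand-in for the paper's post-processing filter) are illustrative.

```python
import torch
import torch.nn as nn

class MCDropoutForecaster(nn.Module):
    """LSTM forecaster whose dropout stays active at inference (MC dropout)."""
    def __init__(self, n_features=1, hidden=64, p=0.2):
        super().__init__()
        # dropout between stacked LSTM layers and before the dense head
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            dropout=p, batch_first=True)
        self.head = nn.Sequential(nn.Dropout(p), nn.Linear(hidden, n_features))

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, time, hidden)
        return self.head(out[:, -1])   # one-step-ahead forecast

@torch.no_grad()
def mc_forecast(model, x, n_samples=50):
    """Draw stochastic forecasts by keeping dropout 'on' (train mode)."""
    model.train()  # keeps dropout active; gradients are still disabled
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)

# Toy usage: flag telemetry points far outside the MC uncertainty band.
model = MCDropoutForecaster()
window = torch.randn(8, 30, 1)          # 8 windows of 30 timesteps
observed = torch.randn(8, 1)            # the next observed values
mean, std = mc_forecast(model, window)
z = (observed - mean).abs() / (std + 1e-8)
anomalous = z > 3.0                     # points outside the uncertainty region
```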
Related papers
- Single-shot Bayesian approximation for neural networks [0.0]
Deep neural networks (NNs) are known for their high-prediction performances.
However, NNs are prone to yielding unreliable predictions when encountering completely new situations, without indicating their uncertainty.
We present a single-shot MC dropout approximation that preserves the advantages of BNNs while being as fast as NNs.
arXiv Detail & Related papers (2023-08-24T13:40:36Z) - Uncovering the Missing Pattern: Unified Framework Towards Trajectory
Imputation and Prediction [60.60223171143206]
Trajectory prediction is a crucial undertaking in understanding entity movement or human behavior from observed sequences.
Current methods often assume that the observed sequences are complete while ignoring the potential for missing values.
This paper presents a unified framework, the Graph-based Conditional Variational Recurrent Neural Network (GC-VRNN), which can perform trajectory imputation and prediction simultaneously.
arXiv Detail & Related papers (2023-03-28T14:27:27Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier, based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
- On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can scale seamlessly to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z) - Rapid Risk Minimization with Bayesian Models Through Deep Learning
Approximation [9.93116974480156]
We introduce a novel combination of Bayesian Models (BMs) and Neural Networks (NNs) for making predictions with a minimum expected risk.
Our approach combines the data efficiency and interpretability of a BM with the speed of a NN.
We achieve risk-minimized predictions significantly faster than standard methods, with a negligible performance loss on the test dataset.
arXiv Detail & Related papers (2021-03-29T15:08:25Z) - Uncertainty Intervals for Graph-based Spatio-Temporal Traffic Prediction [0.0]
- Uncertainty Intervals for Graph-based Spatio-Temporal Traffic Prediction [0.0]
We propose a Spatio-Temporal neural network that is trained to estimate a density given the measurements of previous timesteps, conditioned on a quantile.
Our method of density estimation is fully parameterised by our neural network and does not use a likelihood approximation internally.
This approach produces uncertainty estimates without the need to sample during inference, as is required in Monte Carlo Dropout.
arXiv Detail & Related papers (2020-12-09T18:02:26Z) - Amortized Conditional Normalized Maximum Likelihood: Reliable Out of
Distribution Uncertainty Estimation [99.92568326314667]
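A sampling-free interval of the kind summarized above can be sketched with quantile regression: the network receives the target quantile as an extra input and is trained with the pinball (quantile) loss, so a single deterministic forward pass per quantile yields an uncertainty interval, with no MC sampling. This is a generic illustration of the idea, not the paper's graph-based spatio-temporal architecture; the toy network, data, and quantile levels are assumptions.

```python
import torch
import torch.nn as nn

def pinball_loss(pred, target, tau):
    """Quantile (pinball) loss: minimized when `pred` is the tau-quantile."""
    err = target - pred
    return torch.mean(torch.maximum(tau * err, (tau - 1.0) * err))

# A tiny quantile-conditioned regressor: input features plus the quantile tau.
net = nn.Sequential(nn.Linear(4 + 1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(256, 4)
y = x.sum(dim=1, keepdim=True) + 0.5 * torch.randn(256, 1)
for _ in range(200):
    tau = torch.rand(256, 1)                 # random quantile per sample
    pred = net(torch.cat([x, tau], dim=1))
    opt.zero_grad()
    pinball_loss(pred, y, tau).backward()
    opt.step()

# One deterministic pass per quantile gives an interval, no sampling needed.
lo = net(torch.cat([x, torch.full((256, 1), 0.05)], dim=1))
hi = net(torch.cat([x, torch.full((256, 1), 0.95)], dim=1))
```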
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z) - Probabilistic Neighbourhood Component Analysis: Sample Efficient
Uncertainty Estimation in Deep Learning [25.8227937350516]
We show that the uncertainty estimation capability of state-of-the-art BNNs and Deep Ensemble models degrades significantly when the amount of training data is small.
We propose a probabilistic generalization of the popular sample-efficient non-parametric kNN approach.
Our approach enables deep kNN to accurately quantify underlying uncertainties in its prediction.
arXiv Detail & Related papers (2020-07-18T21:36:31Z) - Single Shot MC Dropout Approximation [0.0]
- Single Shot MC Dropout Approximation [0.0]
We present a single shot MC dropout approximation that preserves the advantages of BDNNs without being slower than a DNN.
Our approach analytically approximates, for each layer of a fully connected network, the expected value and the variance of the MC dropout signal.
We demonstrate that our single shot MC dropout approximation resembles the point estimate and the uncertainty estimate of the predictive distribution.
arXiv Detail & Related papers (2020-07-07T09:17:17Z) - Frequentist Uncertainty in Recurrent Neural Networks via Blockwise
Influence Functions [121.10450359856242]
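The analytic trick can be made concrete for a single dense layer preceded by dropout. With inverted dropout at keep probability q, the mean of the signal is preserved while the variance inflates, E[z] = m and Var[z] = (v + m^2)/q - m^2, and both moments then pass through the linear map in closed form, so no MC sampling is needed. The sketch below covers just this one step under an independence assumption between units; the paper's full layer-by-layer scheme (including nonlinearities) is more involved.

```python
import numpy as np

def dropout_moments(mean, var, q):
    """Inverted dropout (keep prob q): mean preserved, variance inflated.
    E[z] = m,  Var[z] = (v + m^2)/q - m^2, for independent units."""
    second_moment = (var + mean ** 2) / q
    return mean, second_moment - mean ** 2

def linear_moments(mean, var, W, b):
    """Propagate mean/variance through y = W x + b (independence assumption)."""
    return W @ mean + b, (W ** 2) @ var

# One dense-plus-dropout step, entirely sampling-free.
rng = np.random.default_rng(3)
W, b = rng.normal(size=(16, 32)), np.zeros(16)
m_in, v_in = rng.normal(size=32), np.full(32, 0.01)
m_drop, v_drop = dropout_moments(m_in, v_in, q=0.8)
m_out, v_out = linear_moments(m_drop, v_drop, W, b)   # predictive mean/variance
```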
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z) - Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid-updating scheme, matching the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)