Satellite Anomaly Detection Using Variance Based Genetic Ensemble of
Neural Networks
- URL: http://arxiv.org/abs/2302.05525v1
- Date: Fri, 10 Feb 2023 22:09:00 GMT
- Title: Satellite Anomaly Detection Using Variance Based Genetic Ensemble of
Neural Networks
- Authors: Mohammad Amin Maleki Sadr, Yeying Zhu, Peng Hu
- Abstract summary: We use an efficient ensemble of the predictions from multiple Recurrent Neural Networks (RNNs).
For prediction, each RNN is guided by a Genetic Algorithm (GA), which constructs the optimal structure for each RNN model.
This paper uses Monte Carlo (MC) dropout as an approximation of Bayesian Neural Networks (BNNs).
- Score: 7.848121055546167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we use a variance-based genetic ensemble (VGE) of Neural
Networks (NNs) to detect anomalies in the satellite's historical data. We build
an efficient ensemble of the predictions from multiple Recurrent Neural
Networks (RNNs) by leveraging each model's uncertainty level (variance). For
prediction, each RNN is guided by a Genetic Algorithm (GA), which constructs the
optimal structure for each RNN model. However, estimating the model uncertainty
level is challenging in many cases. Although Bayesian NN (BNN)-based methods
are popular for providing confidence bounds on models, they cannot be employed
in complex NN structures because they are computationally intractable. This
paper uses Monte Carlo (MC) dropout as an approximate version of BNNs. These
uncertainty levels and the predictive models suggested by the GA are then used
to generate a new model, which is used for time-series (TS) forecasting and
anomaly detection (AD). Simulation results show that the forecasting and AD
capability of the ensemble model outperforms existing approaches.
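
To make the method concrete, the following is a minimal PyTorch sketch of the two
reusable ingredients the abstract describes: MC-dropout forward passes to estimate
each RNN's predictive variance, and inverse-variance weighting to combine the
ensemble members. This is an illustrative reading, not the authors' implementation:
the GA structure search is omitted, and the model class, layer sizes, dropout rate,
and helper names are all hypothetical.

import torch
import torch.nn as nn

class DropoutRNN(nn.Module):
    # A small LSTM forecaster with dropout kept active at inference, so that
    # repeated stochastic forward passes approximate a BNN (MC dropout).
    def __init__(self, n_features=1, hidden=32, p=0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.drop = nn.Dropout(p)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):                      # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1]))  # one-step-ahead forecast

def mc_dropout_predict(model, x, n_samples=50):
    # Run n_samples stochastic passes with dropout enabled and return the
    # predictive mean and variance (the model's uncertainty level).
    model.train()                              # keep dropout active at test time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(0), preds.var(0)

def variance_weighted_ensemble(means, variances, eps=1e-8):
    # Combine member forecasts with inverse-variance weights, so that less
    # uncertain members dominate; one plausible reading of "leveraging each
    # model's uncertainty level (variance)".
    w = 1.0 / (variances + eps)
    w = w / w.sum(0)
    return (w * means).sum(0)

# Usage: three members standing in for GA-evolved RNN structures.
models = [DropoutRNN(hidden=h) for h in (16, 32, 64)]
x = torch.randn(8, 24, 1)                      # batch of 24-step windows
stats = [mc_dropout_predict(m, x) for m in models]
means = torch.stack([m for m, _ in stats])
vars_ = torch.stack([v for _, v in stats])
forecast = variance_weighted_ensemble(means, vars_)  # TS forecasting step
y_true = torch.randn(8, 1)                     # stand-in for observed next values
residual = (y_true - forecast).abs()           # large residuals flag anomalies (AD)

The paper's exact combination rule may differ; inverse-variance weighting is simply
a standard way to down-weight high-uncertainty ensemble members.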
Related papers
- Generalization of Graph Neural Networks is Robust to Model Mismatch [84.01980526069075]
Graph neural networks (GNNs) have demonstrated their effectiveness in various tasks supported by their generalization capabilities.
In this paper, we examine GNNs that operate on geometric graphs generated from manifold models.
Our analysis reveals the robustness of the GNN generalization in the presence of such model mismatch.
arXiv Detail & Related papers (2024-08-25T16:00:44Z) - Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z) - Quantifying uncertainty for deep learning based forecasting and
flow-reconstruction using neural architecture search ensembles [0.8258451067861933]
We present an automated approach to deep neural network (DNN) discovery and demonstrate how this may also be utilized for ensemble-based uncertainty quantification.
We highlight how the proposed method not only discovers high-performing neural network ensembles for our tasks, but also quantifies uncertainty seamlessly.
We demonstrate the feasibility of this framework on two sea-surface temperature tasks: forecasting from historical data and flow reconstruction from sparse sensors.
arXiv Detail & Related papers (2023-02-20T03:57:06Z) - Neural Additive Models for Location Scale and Shape: A Framework for
Interpretable Neural Regression Beyond the Mean [1.0923877073891446]
Deep neural networks (DNNs) have proven to be highly effective in a variety of tasks.
Despite this success, the inner workings of DNNs are often not transparent.
This lack of interpretability has led to increased research on inherently interpretable neural networks.
arXiv Detail & Related papers (2023-01-27T17:06:13Z) - Constraining cosmological parameters from N-body simulations with
Variational Bayesian Neural Networks [0.0]
Multiplicative normalizing flows (MNFs) are a family of approximate posteriors for the parameters of BNNs.
We compare MNFs with standard BNNs and the flipout estimator.
MNFs provide a more realistic predictive distribution that is closer to the true posterior, mitigating the bias introduced by the variational approximation.
arXiv Detail & Related papers (2023-01-09T16:07:48Z) - Multilevel Bayesian Deep Neural Networks [0.5892638927736115]
We consider inference associated with deep neural networks (DNNs) and in particular, trace-class neural network (TNN) priors.
TNN priors are defined on functions with infinitely many hidden units, and have strongly convergent approximations with finitely many hidden units.
In this paper, we leverage the strong convergence of TNN in order to apply Multilevel Monte Carlo (MLMC) to these models.
arXiv Detail & Related papers (2022-03-24T09:49:27Z) - Spatio-Temporal Neural Network for Fitting and Forecasting COVID-19 [1.1129587851149594]
We establish a Spatio-Temporal Neural Network (STNN) to forecast the worldwide spread of the COVID-19 outbreak in 2020.
Two improved STNN architectures, namely the STNN with Augmented Spatial States (STNN-A) and the STNN with Input Gate (STNN-I), are proposed.
Numerical simulations demonstrate that STNN models outperform many others by providing more accurate fitting and prediction, and by handling both spatial and temporal data.
arXiv Detail & Related papers (2021-03-22T13:59:14Z) - The Surprising Power of Graph Neural Networks with Random Node
Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
arXiv Detail & Related papers (2020-10-02T19:53:05Z) - Fast Learning of Graph Neural Networks with Guaranteed Generalizability:
One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z) - Bayesian Graph Neural Networks with Adaptive Connection Sampling [62.51689735630133]
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs).
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
arXiv Detail & Related papers (2020-06-07T07:06:35Z) - Stochastic Graph Neural Networks [123.39024384275054]
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In these situations, the GNN fails to address its distributed task if the topological randomness is not considered accordingly.
arXiv Detail & Related papers (2020-06-04T08:00:00Z)