Robustness Verification of Deep Neural Networks using Star-Based
Reachability Analysis with Variable-Length Time Series Input
- URL: http://arxiv.org/abs/2307.13907v1
- Date: Wed, 26 Jul 2023 02:15:11 GMT
- Title: Robustness Verification of Deep Neural Networks using Star-Based
Reachability Analysis with Variable-Length Time Series Input
- Authors: Neelanjana Pal, Diego Manzanas Lopez, and Taylor T Johnson
- Abstract summary: This paper presents a case study of the robustness verification approach for time series regression NNs (TSRegNN) using set-based formal methods.
It focuses on utilizing variable-length input data to streamline input manipulation and enhance network architecture generalizability.
Overall, the paper offers a comprehensive case study for validating and verifying NN-based analytics of time-series data in real-world applications.
- Score: 6.146046338698173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven, neural network (NN) based anomaly detection and predictive
maintenance are emerging research areas. NN-based analytics of time-series data
offer valuable insights into past behaviors and estimates of critical
parameters like remaining useful life (RUL) of equipment and state-of-charge
(SOC) of batteries. However, input time series data can be exposed to
intentional or unintentional noise when passing through sensors, necessitating
robust validation and verification of these NNs. This paper presents a case
study of the robustness verification approach for time series regression NNs
(TSRegNN) using set-based formal methods. It focuses on utilizing
variable-length input data to streamline input manipulation and enhance network
architecture generalizability. The method is applied to two data sets in the
Prognostics and Health Management (PHM) application areas: (1) SOC estimation
of a Lithium-ion battery and (2) RUL estimation of a turbine engine. The NNs'
robustness is checked using star-based reachability analysis, and several
performance measures evaluate the effect of bounded perturbations in the input
on network outputs, i.e., future outcomes. Overall, the paper offers a
comprehensive case study for validating and verifying NN-based analytics of
time-series data in real-world applications, emphasizing the importance of
robustness testing for accurate and reliable predictions, especially
considering the impact of noise on future outcomes.
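The abstract describes the verification setting but not the routine itself: bound the network's output over all inputs within a bounded perturbation of a nominal time-series window. In star-based reachability (as implemented in tools such as NNV), the perturbed input set is represented as a generalized star, roughly Theta = {c + V*alpha : P(alpha)} for a center c, basis V, and linear predicate P, and is propagated exactly or with over-approximation through each layer. The sketch below is a much coarser stand-in using interval bound propagation on a toy ReLU regression network; the weights, the perturbation radius eps, and the output tolerance are hypothetical placeholders, not the paper's SOC/RUL models or its actual star-based method.

```python
import numpy as np

# Toy ReLU regression network with placeholder weights (hypothetical; not the
# SOC/RUL models from the case study).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # hidden layer (ReLU)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)    # linear regression head

def interval_affine(lo, hi, W, b):
    """Sound bounds on W @ x + b over the box [lo, hi]."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def output_bounds(x, eps):
    """Bounds on the scalar output over the L_inf ball of radius eps around x."""
    lo, hi = x - eps, x + eps                          # bounded input perturbation
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)
    return float(lo[0]), float(hi[0])

window = rng.normal(size=8)      # one hypothetical sensor window
lo, hi = output_bounds(window, eps=0.01)
tol = 0.05                       # acceptable deviation in the prediction
print(f"output range [{lo:.3f}, {hi:.3f}]  robust: {hi - lo <= 2 * tol}")
```

The final check mirrors the abstract's robustness question in miniature: the prediction is declared robust when the reachable output range stays within a chosen tolerance; star-based analysis answers the same question with tighter, exact or over-approximated reachable sets.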
Related papers
- NIDS Neural Networks Using Sliding Time Window Data Processing with Trainable Activations and its Generalization Capability [0.0]
This paper presents neural networks for network intrusion detection systems (NIDS) that operate on flow data preprocessed with a time window.
It requires only eleven features that do not rely on deep packet inspection, can be found in most NIDS datasets, and are easily obtained from conventional flow collectors.
The reported training accuracy exceeds 99% for the proposed method with as few as twenty neural network input features.
arXiv Detail & Related papers (2024-10-24T11:36:19Z) - Active Learning with Fully Bayesian Neural Networks for Discontinuous and Nonstationary Data [0.0]
We introduce fully Bayesian Neural Networks (FBNNs) for active learning tasks in the 'small data' regime.
FBNNs provide reliable predictive distributions, crucial for making informed decisions under uncertainty in the active learning setting.
Here, we assess the suitability and performance of FBNNs with the No-U-Turn Sampler for active learning tasks in the 'small data' regime.
arXiv Detail & Related papers (2024-05-16T05:20:47Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions are unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - An LSTM-Based Predictive Monitoring Method for Data with Time-varying
Variability [3.5246670856011035]
This paper explores the ability of the recurrent neural network structure to monitor processes.
It proposes a control chart based on long short-term memory (LSTM) prediction intervals for data with time-varying variability.
The proposed method is also applied to time series sensor data, confirming that it is an effective technique for detecting abnormalities.
arXiv Detail & Related papers (2023-09-05T06:13:09Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Knowing When to Stop: Delay-Adaptive Spiking Neural Network Classifiers with Reliability Guarantees [36.14499894307206]
Spiking neural networks (SNNs) process time-series data via internal event-driven neural dynamics.
We introduce a novel delay-adaptive SNN-based inference methodology that provides guaranteed reliability for the decisions produced at input-dependent stopping times.
arXiv Detail & Related papers (2023-05-18T22:11:04Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and could serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - Probabilistic AutoRegressive Neural Networks for Accurate Long-range
Forecasting [6.295157260756792]
We introduce the Probabilistic AutoRegressive Neural Networks (PARNN).
PARNN is capable of handling complex time series data exhibiting non-stationarity, nonlinearity, non-seasonality, long-range dependence, and chaotic patterns.
We evaluate the performance of PARNN against standard statistical, machine learning, and deep learning models, including Transformers, NBeats, and DeepAR.
arXiv Detail & Related papers (2022-04-01T17:57:36Z) - Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAIN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - Continual Learning in Recurrent Neural Networks [67.05499844830231]
We evaluate the effectiveness of continual learning methods for processing sequential data with recurrent neural networks (RNNs).
We shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs.
We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements.
arXiv Detail & Related papers (2020-06-22T10:05:12Z) - Frequentist Uncertainty in Recurrent Neural Networks via Blockwise
Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all of its content) and is not responsible for any consequences of its use.