TFDPM: Attack detection for cyber-physical systems with diffusion
probabilistic models
- URL: http://arxiv.org/abs/2112.10774v1
- Date: Mon, 20 Dec 2021 13:13:29 GMT
- Title: TFDPM: Attack detection for cyber-physical systems with diffusion
probabilistic models
- Authors: Tijin Yan, Tong Zhou, Yufeng Zhan, Yuanqing Xia
- Abstract summary: We propose TFDPM, a general framework for attack detection tasks in CPSs.
It simultaneously extracts temporal patterns and feature patterns from the historical data.
The noise scheduling network triples the detection speed.
- Score: 10.389972581904999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of AIoT, data-driven attack detection methods for
cyber-physical systems (CPSs) have attracted considerable attention. However,
existing methods usually adopt tractable distributions to approximate data
distributions, which are not suitable for complex systems. Moreover, the
correlation of data across different channels has not received sufficient
attention. To address these issues, we use energy-based generative models,
which are less restrictive on functional forms of the data distribution. In
addition, graph neural networks are used to explicitly model the correlation of
the data in different channels. Finally, we propose TFDPM, a general
framework for attack detection tasks in CPSs. It simultaneously extracts
temporal patterns and feature patterns from the historical data. The extracted
features are then sent to a conditional diffusion probabilistic model. Predicted
values can be obtained with the conditional generative network and attacks are
detected based on the difference between predicted values and observed values.
In addition, to realize real-time detection, a conditional noise scheduling
network is proposed to accelerate the prediction process. Experimental results
show that TFDPM outperforms existing state-of-the-art attack detection methods.
The noise scheduling network triples the detection speed.
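A minimal sketch of the residual-based detection step just described (not the authors' implementation): `predict_next` stands in for sampling predicted values from the conditional generative network, and the threshold and the persistence predictor in the toy usage are placeholders.

```python
import numpy as np

def detect_attacks(history, observations, predict_next, threshold):
    """Flag time steps whose observed values deviate too far from the
    model's predicted values. `predict_next` stands in for sampling from
    the conditional generative network given the historical window."""
    flags = []
    for obs in observations:
        pred = predict_next(history)             # predicted values for this step
        residual = np.linalg.norm(obs - pred)    # predicted-vs-observed difference
        flags.append(residual > threshold)       # large residual -> attack
        history = np.vstack([history[1:], obs])  # slide the historical window
    return np.array(flags)

# Toy usage: a naive persistence predictor stands in for the trained model.
history = np.random.randn(10, 3)        # 10 past steps, 3 channels
observations = np.random.randn(50, 3)   # incoming stream
flags = detect_attacks(history, observations, lambda h: h[-1], threshold=3.0)
```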
Related papers
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
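A minimal sketch of the frequency-domain filtering idea, under stated assumptions: the DCT fingerprint length `n_low` is a placeholder, and the two-way KMeans split stands in for the clustering step (the paper itself clusters with HDBSCAN).

```python
import numpy as np
from scipy.fft import dct
from sklearn.cluster import KMeans

def freq_filter_aggregate(updates, n_low=32):
    """Fingerprint each flattened client update by its low-frequency DCT
    coefficients, split fingerprints into two groups, and average only the
    majority group, assumed benign. (FreqFed clusters with HDBSCAN; the
    KMeans split keeps this sketch dependency-light.)"""
    fingerprints = np.stack([dct(u.ravel(), norm="ortho")[:n_low] for u in updates])
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(fingerprints)
    majority = np.argmax(np.bincount(labels))
    benign = [u for u, label in zip(updates, labels) if label == majority]
    return np.mean(benign, axis=0)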
- Projection Regret: Reducing Background Bias for Novelty Detection via Diffusion Models [72.07462371883501]
We propose Projection Regret (PR), an efficient novelty detection method that mitigates the bias of non-semantic information.
PR computes the perceptual distance between the test image and its diffusion-based projection to detect abnormality.
Extensive experiments demonstrate that PR outperforms prior generative-model-based novelty detection methods by a significant margin.
arXiv Detail & Related papers (2023-12-05T09:44:47Z)
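A hedged sketch of the projection-regret score: `project` stands in for the diffusion round trip (partially noising the image, then denoising it back) and `distance` for a perceptual metric such as LPIPS; both are placeholders, and single projections stand in for the paper's averages over several.

```python
def projection_regret(x, project, distance):
    """Score a test image by its perceptual distance to its diffusion-based
    projection, minus the distance its projection incurs on its own; the
    second term discounts non-semantic (background) information."""
    proj = project(x)        # round trip: partially noise x, then denoise it
    proj2 = project(proj)    # projection of the projection
    return distance(x, proj) - distance(proj, proj2)
```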
- Window-Based Distribution Shift Detection for Deep Neural Networks [21.73028341299301]
We study the case of monitoring the healthy operation of a deep neural network (DNN) receiving a stream of data.
Using selective prediction principles, we propose a distribution deviation detection method for DNNs.
Our detection method performs on par with or better than the state-of-the-art while requiring substantially less computation time.
arXiv Detail & Related papers (2022-10-19T21:27:25Z)
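A hedged sketch of window-based monitoring: max-softmax confidence scores and a two-sample KS test stand in for the paper's selective-prediction statistics, and the window size and significance level are placeholders.

```python
import numpy as np
from scipy.stats import ks_2samp

def window_shift_detect(ref_scores, stream_scores, window=512, alpha=1e-4):
    """Compare each window of the model's confidence scores on the stream
    (e.g., max softmax per input) against a reference sample collected
    in-distribution; flag a shift when the KS test rejects at level alpha."""
    for start in range(0, len(stream_scores) - window + 1, window):
        win = stream_scores[start:start + window]
        if ks_2samp(ref_scores, win).pvalue < alpha:
            return start  # stream index where the deviation was flagged
    return None           # no deviation detected
```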
- DEGAN: Time Series Anomaly Detection using Generative Adversarial Network Discriminators and Density Estimation [0.0]
We propose DEGAN, an unsupervised Generative Adversarial Network (GAN)-based anomaly detection framework.
It relies solely on normal time series data as input to train a well-configured discriminator (D) into a standalone anomaly predictor.
arXiv Detail & Related papers (2022-10-05T04:32:12Z)
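A minimal sketch of scoring with the trained discriminator, assuming `D` maps a window to a realness probability in [0, 1]; the window size and stride are placeholders, and the paper's density-estimation stage over the scores is omitted.

```python
import numpy as np

def discriminator_anomaly_scores(series, D, win=64, stride=8):
    """Score sliding windows of a time series with a discriminator trained
    only on normal data: windows D considers unlikely to be 'real' receive
    high anomaly scores."""
    scores = []
    for start in range(0, len(series) - win + 1, stride):
        window = series[start:start + win]
        scores.append(1.0 - D(window))  # low realness -> high anomaly score
    return np.array(scores)
```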
- A Novel Explainable Out-of-Distribution Detection Approach for Spiking Neural Networks [6.100274095771616]
This work presents a novel OoD detector that can identify whether test examples input to a Spiking Neural Network belong to the distribution of the data over which it was trained.
We characterize the internal activations of the hidden layers of the network in the form of spike count patterns.
A local explanation method is devised to produce attribution maps revealing which parts of the input push most towards detecting an example as OoD.
arXiv Detail & Related papers (2022-09-30T11:16:35Z)
- CODiT: Conformal Out-of-Distribution Detection in Time-Series Data [11.565104282674973]
In many applications, the inputs to a machine learning model form a temporal sequence.
We propose using deviation from in-distribution temporal equivariance as the non-conformity measure in a conformal anomaly detection framework.
We illustrate the efficacy of CODiT by achieving state-of-the-art results on computer vision datasets in autonomous driving.
arXiv Detail & Related papers (2022-07-24T16:41:14Z)
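A minimal sketch of the conformal step: the standard inductive conformal p-value, into which CODiT's equivariance-deviation score (or any other non-conformity score) plugs.

```python
import numpy as np

def conformal_pvalue(cal_scores, test_score):
    """Inductive conformal p-value: the (smoothed) fraction of calibration
    non-conformity scores at least as large as the test score. In CODiT the
    score measures deviation from in-distribution temporal equivariance."""
    cal_scores = np.asarray(cal_scores)
    return (1 + np.sum(cal_scores >= test_score)) / (len(cal_scores) + 1)

# Flag the test window as OOD when the p-value falls below a chosen level:
# is_ood = conformal_pvalue(cal_scores, score) < 0.05
```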
- Stacked Residuals of Dynamic Layers for Time Series Anomaly Detection [0.0]
We present an end-to-end differentiable neural network architecture to perform anomaly detection in multivariate time series.
The architecture is a cascade of dynamical systems designed to separate linearly predictable components of the signal.
The anomaly detector exploits the temporal structure of the prediction residuals to detect both isolated point anomalies and set-point changes.
arXiv Detail & Related papers (2022-02-25T01:50:22Z)
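A hedged sketch of one way to analyze such residuals: a two-sided CUSUM that flags set-point changes, with large individual residuals thresholded directly as point anomalies. The paper's cascade of dynamic layers is what produces the residuals; the drift and threshold here are placeholders.

```python
import numpy as np

def cusum_change_points(residuals, drift=0.5, threshold=5.0):
    """Two-sided CUSUM over prediction residuals: a cumulative drift in
    either direction crossing the threshold marks a set-point change."""
    pos = neg = 0.0
    changes = []
    for t, r in enumerate(residuals):
        pos = max(0.0, pos + r - drift)  # accumulate upward drift
        neg = max(0.0, neg - r - drift)  # accumulate downward drift
        if pos > threshold or neg > threshold:
            changes.append(t)
            pos = neg = 0.0              # reset after a detected change
    return changes
```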
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAIN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation [107.63407690972139]
Conditional Score-based Diffusion models for Imputation (CSDI) is a novel time series imputation method that utilizes score-based diffusion models conditioned on observed data.
CSDI improves by 40-70% over existing probabilistic imputation methods on popular performance metrics.
In addition, CSDI reduces the error by 5-20% compared to state-of-the-art deterministic imputation methods.
arXiv Detail & Related papers (2021-07-07T22:20:24Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
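A minimal sketch of density-based activation monitoring: a Gaussian KDE stands in for the paper's normalizing flow, and the rejection quantile is a placeholder.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_activation_detector(train_acts, quantile=0.01):
    """Fit a density model to activations recorded on in-distribution data
    and flag inputs whose activations land in a low-density region.
    `train_acts` has shape (n_samples, n_dims)."""
    kde = gaussian_kde(train_acts.T)  # scipy expects (n_dims, n_samples)
    threshold = np.quantile(kde.logpdf(train_acts.T), quantile)
    def is_anomalous(act):
        return kde.logpdf(act.reshape(-1, 1))[0] < threshold
    return is_anomalous
```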
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training in these models with a novel loss function and centroid updating scheme, matching the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
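A minimal sketch of centroid-based rejection in this spirit: an RBF kernel scores the feature from a single forward pass against per-class centroids, and inputs whose best-match score is low are rejected; `sigma` and the rejection threshold are placeholders.

```python
import numpy as np

def duq_predict(feature, centroids, sigma=1.0, reject_below=0.5):
    """Score the feature vector from a single forward pass against one
    centroid per class with an RBF kernel; predict the best-matching class,
    or reject the input when even the best kernel value is low."""
    dists = np.linalg.norm(centroids - feature, axis=1)  # distance to each centroid
    scores = np.exp(-dists**2 / (2 * sigma**2))          # RBF kernel values
    best = int(np.argmax(scores))
    if scores[best] < reject_below:
        return None, scores[best]  # rejected as out-of-distribution
    return best, scores[best]
```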
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.