Defending Water Treatment Networks: Exploiting Spatio-temporal Effects for Cyber Attack Detection
- URL: http://arxiv.org/abs/2008.12618v1
- Date: Wed, 26 Aug 2020 15:56:55 GMT
- Title: Defending Water Treatment Networks: Exploiting Spatio-temporal Effects for Cyber Attack Detection
- Authors: Dongjie Wang, Pengyang Wang, Jingbo Zhou, Leilei Sun, Bowen Du, Yanjie Fu
- Abstract summary: Water Treatment Networks (WTNs) are critical infrastructures for local communities and public health, yet they are vulnerable to cyber attacks. We propose a structured anomaly detection framework to defend WTNs by modeling the spatio-temporal characteristics of cyber attacks in WTNs.
- Score: 46.67179436529369
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While Water Treatment Networks (WTNs) are critical infrastructures for local communities and public health, WTNs are vulnerable to cyber attacks. Effective detection of attacks can defend WTNs against discharging contaminated water, denying access, destroying equipment, and causing public fear. While there are extensive studies of attack detection in WTNs, existing methods exploit the data characteristics only partially. After a preliminary exploration of the sensing data of WTNs, we find that integrating spatio-temporal knowledge, representation learning, and detection algorithms can improve attack detection accuracy. To this end, we propose a structured anomaly detection framework to defend WTNs by modeling the spatio-temporal characteristics of cyber attacks in WTNs. In particular, we propose a spatio-temporal representation framework specially tailored to cyber attacks after separating the sensing data of WTNs into a sequence of time segments. This framework has two key components. The first component is a temporal embedding module that preserves temporal patterns within a time segment by projecting the time segment of a sensor into a temporal embedding vector. We then construct Spatio-Temporal Graphs (STGs), where a node is a sensor and its attribute is the temporal embedding vector of that sensor, to describe the state of the WTNs. The second component is a spatial embedding module, which learns the final fused embedding of the WTNs from the STGs. In addition, we devise an improved one-class SVM model that utilizes a newly designed pairwise kernel to detect cyber attacks. The devised pairwise kernel augments the distance between normal and attack patterns in the fused embedding space. Finally, we conducted extensive experimental evaluations with real-world data to demonstrate the effectiveness of our framework.
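The detection stage described above can be sketched with scikit-learn, whose `OneClassSVM` accepts a callable kernel. The paper's pairwise kernel is designed to enlarge the margin between normal and attack patterns in the fused embedding space, but its exact form is not given in the abstract, so a standard RBF over embedding distances stands in here; the fused embeddings themselves are simulated with random vectors.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Simulated fused embeddings of WTN states: one row per time segment.
# (Stand-ins for the output of the temporal + spatial embedding modules.)
rng = np.random.default_rng(0)
normal_states = rng.normal(0.0, 1.0, size=(200, 16))  # training: normal operation only
test_states = np.vstack([
    rng.normal(0.0, 1.0, size=(20, 16)),  # normal-like segments
    rng.normal(4.0, 1.0, size=(5, 16)),   # shifted segments, mimicking attacks
])

def pairwise_kernel(X, Y):
    """Placeholder pairwise kernel: an RBF over embedding distances.

    The paper's kernel augments the distance between normal and attack
    patterns; a plain RBF is used here only as a hedged stand-in.
    """
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / X.shape[1])

# One-class SVM trained on normal states only, as in anomaly detection.
detector = OneClassSVM(kernel=pairwise_kernel, nu=0.05)
detector.fit(normal_states)

pred = detector.predict(test_states)  # +1 = normal, -1 = anomalous
print("flagged segments:", int((pred == -1).sum()))
```

Because the detector is trained on normal operation only, attack-like segments that fall far from the training distribution receive near-zero kernel similarity to all support vectors and are flagged as anomalous.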
Related papers
- CNN-Based Structural Damage Detection using Time-Series Sensor Data [0.0]
This research introduces an innovative approach to structural damage detection, utilizing a new Convolutional Neural Network (CNN) algorithm.
Time series data are divided into two categories by the proposed neural network: undamaged and damaged.
The outcomes show that the new CNN algorithm is very accurate in spotting structural degradation in the examined structure.
arXiv Detail & Related papers (2023-11-07T11:57:33Z)
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition of natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- Adversarial Vulnerability of Temporal Feature Networks for Object Detection [5.525433572437716]
We study whether temporal feature networks for object detection are vulnerable to universal adversarial attacks.
We evaluate attacks of two types: imperceptible noise for the whole image and locally-bound adversarial patches.
Our experiments on the KITTI and nuScenes datasets demonstrate that a model robustified via K-PGD is able to withstand the studied attacks.
arXiv Detail & Related papers (2022-08-23T07:08:54Z)
- Adversarially Robust One-class Novelty Detection [83.1570537254877]
We show that existing novelty detectors are susceptible to adversarial examples.
We propose a defense strategy that manipulates the latent space of novelty detectors to improve robustness against adversarial examples.
arXiv Detail & Related papers (2021-08-25T10:41:29Z)
- A Sensor Fusion-based GNSS Spoofing Attack Detection Framework for Autonomous Vehicles [4.947150829838588]
This paper presents a sensor fusion-based Global Navigation Satellite System (GNSS) spoofing attack detection framework for autonomous vehicles.
Data from multiple low-cost in-vehicle sensors are fused and fed into a recurrent neural network model.
We have combined k-Nearest Neighbors (k-NN) and Dynamic Time Warping (DTW) algorithms to detect and classify left and right turns.
arXiv Detail & Related papers (2021-08-19T11:59:51Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Real-Time Detectors for Digital and Physical Adversarial Inputs to Perception Systems [11.752184033538636]
Deep neural network (DNN) models have proven to be vulnerable to adversarial digital and physical attacks.
We propose a novel attack- and dataset-agnostic, real-time detector for both types of adversarial inputs to DNN-based perception systems.
In particular, the proposed detector relies on the observation that adversarial images are sensitive to certain label-invariant transformations.
arXiv Detail & Related papers (2020-02-23T00:03:57Z)
- Pelican: A Deep Residual Network for Network Intrusion Detection [7.562843347215287]
We propose a deep neural network, Pelican, that is built upon specially designed residual blocks.
Pelican can achieve high attack detection performance while keeping a much lower false alarm rate.
arXiv Detail & Related papers (2020-01-19T05:07:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.