Budgeted Adversarial Attack against Graph-Based Anomaly Detection in Sensor Networks
- URL: http://arxiv.org/abs/2509.17987v1
- Date: Mon, 22 Sep 2025 16:30:19 GMT
- Title: Budgeted Adversarial Attack against Graph-Based Anomaly Detection in Sensor Networks
- Authors: Sanju Xaviar, Omid Ardakanian
- Abstract summary: Graph Neural Networks (GNNs) have emerged as powerful models for anomaly detection in sensor networks. We introduce BETA, a novel grey-box evasion attack targeting such GNN-based detectors. We show that BETA reduces the detection accuracy of state-of-the-art GNN-based detectors by 30.62 to 39.16% on average.
- Score: 1.2891210250935148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have emerged as powerful models for anomaly detection in sensor networks, particularly when analyzing multivariate time series. In this work, we introduce BETA, a novel grey-box evasion attack targeting such GNN-based detectors, where the attacker is constrained to perturb sensor readings from a limited set of nodes, excluding the target sensor, with the goal of either suppressing a true anomaly or triggering a false alarm at the target node. BETA identifies the sensors most influential to the target node's classification and injects carefully crafted adversarial perturbations into their features, all while maintaining stealth and respecting the attacker's budget. Experiments on three real-world sensor network datasets show that BETA reduces the detection accuracy of state-of-the-art GNN-based detectors by 30.62 to 39.16% on average, and significantly outperforms baseline attack strategies, while operating within realistic constraints.
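The attack described in the abstract, ranking non-target sensors by their influence on the target node's classification and perturbing only the top-k within a budget, can be illustrated with a minimal sketch. This is not the paper's BETA implementation: `grad_fn`, the FGSM-style sign step, and all parameter names are hypothetical assumptions.

```python
import numpy as np

def budgeted_perturbation(readings, grad_fn, target, budget_k, eps):
    """Illustrative sketch of a budgeted grey-box evasion attack:
    select the budget_k non-target sensors whose gradients most
    influence the target's anomaly score, then nudge only those
    readings by a small FGSM-style step (hypothetical, not BETA)."""
    grads = grad_fn(readings)        # assumed oracle: d(score_target)/d(readings)
    grads[target] = 0.0              # the attacker cannot touch the target sensor
    chosen = np.argsort(np.abs(grads))[-budget_k:]  # most influential sensors
    perturbed = readings.copy()
    perturbed[chosen] -= eps * np.sign(grads[chosen])  # push score down
    return perturbed, chosen
```

The budget constraint is enforced purely by the top-k selection here; a real attack would additionally bound the perturbation magnitude to stay stealthy.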
Related papers
- Online Reliable Anomaly Detection via Neuromorphic Sensing and Communications [58.796149594878585]
This paper proposes a low-power online anomaly detection framework based on neuromorphic wireless sensor networks. In the considered system, a central reader node actively queries a subset of neuromorphic sensor nodes (neuro-SNs) at each time frame. The neuromorphic sensors are event-driven, producing spikes in response to relevant changes in the monitored system.
arXiv Detail & Related papers (2025-10-16T13:56:54Z) - Adversarial Attention Perturbations for Large Object Detection Transformers [6.845910470068847]
Adversarial perturbations are useful tools for exposing vulnerabilities in neural networks. This paper presents an Attention-Focused Offensive Gradient (AFOG) attack against object detection transformers.
arXiv Detail & Related papers (2025-08-05T01:31:10Z) - Threatening Patch Attacks on Object Detection in Optical Remote Sensing Images [55.09446477517365]
Advanced Patch Attacks (PAs) on object detection in natural images have revealed a serious safety vulnerability in methods based on deep neural networks.
We propose a more Threatening PA that does not sacrifice visual quality, dubbed TPA.
To the best of our knowledge, this is the first attempt to study PAs on object detection in O-RSIs, and we hope this work encourages further study of this topic.
arXiv Detail & Related papers (2023-02-13T02:35:49Z) - Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks on object detection comprise targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z) - Sensor Fusion-based GNSS Spoofing Attack Detection Framework for Autonomous Vehicles [4.947150829838588]
A sensor fusion-based attack detection framework is presented that consists of three concurrent strategies for an autonomous vehicle.
Data from multiple low-cost in-vehicle sensors are fused and fed into a recurrent neural network model.
We have combined k-Nearest Neighbors (k-NN) and Dynamic Time Warping (DTW) algorithms to detect turns using data from the steering angle sensor.
Our analysis reveals that the sensor fusion-based detection framework successfully detects all three types of spoofing attacks within the required computational latency threshold.
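The turn-detection strategy above, k-Nearest Neighbors over Dynamic Time Warping distances on steering-angle windows, can be sketched as follows. The templates, labels, and parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping paths
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_classify(window, templates, labels, k=3):
    """k-NN vote over DTW distances to labelled steering-angle templates."""
    dists = [dtw_distance(window, t) for t in templates]
    nearest = np.argsort(dists)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```

DTW is a natural fit here because two turns of the same shape but different durations still map to a small distance.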
arXiv Detail & Related papers (2021-06-05T23:02:55Z) - Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems [6.417955560857806]
In this work, we present an adversarial attack that simultaneously evades the anomaly detectors and rule checkers of a CPS.
Inspired by existing gradient-based approaches, our adversarial attack crafts noise over the sensor and actuator values, then uses a genetic algorithm to optimise the latter.
We implement our approach for two real-world critical infrastructure testbeds, successfully reducing the classification accuracy of their detectors by over 50% on average.
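A toy version of the idea in the summary above, evolving a bounded noise vector with a genetic algorithm to maximise some evasion fitness, might look like the sketch below. The fitness function, bounds, and GA hyperparameters are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_optimise(fitness, dim, pop=20, gens=50, bound=1.0):
    """Toy elitist genetic algorithm: evolve a noise vector that
    maximises `fitness` (e.g. a detector-evasion score) while keeping
    every component within [-bound, bound]."""
    population = rng.uniform(-bound, bound, size=(pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]   # keep top half
        # children: mutated copies of randomly chosen parents
        children = parents[rng.integers(len(parents), size=pop - len(parents))]
        children = children + rng.normal(0.0, 0.1 * bound, children.shape)
        population = np.clip(np.vstack([parents, children]), -bound, bound)
    best = population[np.argmax([fitness(ind) for ind in population])]
    return best
```

Keeping the top half unchanged each generation (elitism) guarantees the best candidate never regresses, which makes even this tiny GA converge reliably on smooth fitness landscapes.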
arXiv Detail & Related papers (2021-05-22T12:19:03Z) - LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of Point Cloud-based Deep Networks [123.5839352227726]
This paper proposes a novel label guided adversarial network (LG-GAN) for real-time flexible targeted point cloud attack.
To the best of our knowledge, this is the first generation-based 3D point cloud attack method.
arXiv Detail & Related papers (2020-11-01T17:17:10Z) - Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed that utilizes the Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z) - Multi-stage Jamming Attacks Detection using Deep Learning Combined with Kernelized Support Vector Machine in 5G Cloud Radio Access Networks [17.2528983535773]
This research focuses on deploying a multi-stage machine learning-based intrusion detection (ML-IDS) in 5G C-RAN.
It can detect and classify four types of jamming attacks: constant jamming, random jamming, deceptive jamming, and reactive jamming.
The final classification accuracy of attacks is 94.51% with a 7.84% false negative rate.
arXiv Detail & Related papers (2020-04-13T17:21:45Z) - TOG: Targeted Adversarial Objectness Gradient Attacks on Real-time Object Detection Systems [14.976840260248913]
This paper presents three Targeted adversarial Objectness Gradient attacks to cause object-vanishing, object-fabrication, and object-mislabeling attacks.
We also present a universal objectness gradient attack that exploits adversarial transferability for black-box attacks.
The results demonstrate serious adversarial vulnerabilities and the compelling need for developing robust object detection systems.
arXiv Detail & Related papers (2020-04-09T01:36:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.