Attack Pattern Mining to Discover Hidden Threats to Industrial Control Systems
- URL: http://arxiv.org/abs/2508.04561v1
- Date: Wed, 06 Aug 2025 15:47:19 GMT
- Title: Attack Pattern Mining to Discover Hidden Threats to Industrial Control Systems
- Authors: Muhammad Azmi Umer, Chuadhry Mujeeb Ahmed, Aditya Mathur, Muhammad Taha Jilani
- Abstract summary: This work focuses on validation of attack pattern mining in the context of Industrial Control System (ICS) security. We have proposed a data-driven technique to generate attack patterns for an ICS. The proposed technique has been used to generate over 100,000 attack patterns from data gathered from an operational water treatment plant.
- Score: 5.244448645685142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work focuses on validation of attack pattern mining in the context of Industrial Control System (ICS) security. A comprehensive security assessment of an ICS requires generating a large and varied set of attack patterns. For this purpose we have proposed a data-driven technique to generate attack patterns for an ICS. The proposed technique has been used to generate over 100,000 attack patterns from data gathered from an operational water treatment plant. In this work we present a detailed case study to validate the attack patterns.
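As a minimal illustration of the data-driven idea, attack patterns can be mined as frequent co-occurrences of discretized sensor/actuator states observed during anomalous plant runs. This sketch uses hypothetical process tags and thresholds (e.g. `LIT101`, `MV101`); they are not taken from the paper, and the paper's actual technique may differ in detail.

```python
from itertools import combinations
from collections import Counter

# Hypothetical discretized snapshots of plant state during anomalous runs:
# each record is the set of sensor/actuator conditions observed together.
records = [
    {"LIT101=high", "MV101=open", "P101=on"},
    {"LIT101=high", "MV101=open", "P102=off"},
    {"LIT101=high", "MV101=open", "P101=on"},
    {"FIT201=low", "P101=on", "MV101=open"},
]

def frequent_itemsets(records, min_support=0.5, max_len=3):
    """Return itemsets that appear in at least min_support of the records."""
    n = len(records)
    counts = Counter()
    for rec in records:
        for k in range(1, max_len + 1):
            for combo in combinations(sorted(rec), k):
                counts[combo] += 1
    return {fs: c / n for fs, c in counts.items() if c / n >= min_support}

patterns = frequent_itemsets(records)
# A frequent itemset such as ("LIT101=high", "MV101=open") is a candidate
# attack pattern: tank level high while the inlet valve stays open.
```

Each surviving itemset, together with its support, describes a co-occurring abnormal plant condition that can be validated against the physical process, which is the kind of pattern the case study examines.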
Related papers
- A Survey on Model Extraction Attacks and Defenses for Large Language Models [55.60375624503877]
Model extraction attacks pose significant security threats to deployed language models. This survey provides a comprehensive taxonomy of extraction attacks and defenses, categorizing attacks into functionality extraction, training data extraction, and prompt-targeted attacks. We examine defense mechanisms organized into model protection, data privacy protection, and prompt-targeted strategies, evaluating their effectiveness across different deployment scenarios.
arXiv Detail & Related papers (2025-06-26T22:02:01Z) - Benchmarking Misuse Mitigation Against Covert Adversaries [80.74502950627736]
Existing language model safety evaluations focus on overt attacks and low-stakes tasks. We develop Benchmarks for Stateful Defenses (BSD), a data generation pipeline that automates evaluations of covert attacks and corresponding defenses. Our evaluations indicate that decomposition attacks are effective misuse enablers, and highlight stateful defenses as a countermeasure.
arXiv Detail & Related papers (2025-06-06T17:33:33Z) - Adversarial Sample Generation for Anomaly Detection in Industrial Control Systems [2.6513941799808873]
We generate adversarial samples using the Jacobian Saliency Map Attack (JSMA). We validate the generalization and scalability of the adversarial samples to tackle a broad range of real attacks on Industrial Control Systems. The model trained with adversarial samples detected attacks with 95% accuracy on real-world attack data not used during training.
arXiv Detail & Related papers (2025-05-06T02:27:17Z) - AttackLLM: LLM-based Attack Pattern Generation for an Industrial Control System [3.0380814092788984]
Malicious examples are crucial for evaluating the robustness of machine learning algorithms under attack. Existing datasets are often limited by the domain expertise of practitioners. We propose a novel approach that combines data-centric and design-centric methodologies to generate attack patterns.
arXiv Detail & Related papers (2025-04-05T14:11:47Z) - Learning diverse attacks on large language models for robust red-teaming and safety tuning [126.32539952157083]
Red-teaming, or identifying prompts that elicit harmful responses, is a critical step in ensuring the safe deployment of large language models. We show that even with explicit regularization to favor novelty and diversity, existing approaches suffer from mode collapse or fail to generate effective attacks. We propose to use GFlowNet fine-tuning, followed by a secondary smoothing phase, to train the attacker model to generate diverse and effective attack prompts.
arXiv Detail & Related papers (2024-05-28T19:16:17Z) - usfAD Based Effective Unknown Attack Detection Focused IDS Framework [3.560574387648533]
Internet of Things (IoT) and Industrial Internet of Things (IIoT) have led to an increasing range of cyber threats.
For more than a decade, researchers have delved into supervised machine learning techniques to develop Intrusion Detection Systems (IDS).
An IDS trained and tested on known datasets fails to detect zero-day or unknown attacks.
We propose two strategies for semi-supervised learning-based IDS where training samples of attacks are not required.
arXiv Detail & Related papers (2024-03-17T11:49:57Z) - Model Stealing Attack against Recommender System [85.1927483219819]
Some adversarial attacks have achieved model stealing attacks against recommender systems.
In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks.
arXiv Detail & Related papers (2023-12-18T05:28:02Z) - A Comparative Study of Watering Hole Attack Detection Using Supervised Neural Network [0.0]
This study explores the nefarious tactic known as "watering hole attacks", using supervised neural networks to detect and prevent these attacks.
The neural network identifies patterns in website behavior and network traffic associated with such attacks.
In terms of prevention, the model successfully stops 95% of attacks, providing robust user protection.
arXiv Detail & Related papers (2023-11-25T13:30:03Z) - BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals the threats in this practical scenario that backdoor attacks can remain effective even after defenses.
We introduce the toolns attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z) - Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving [18.72382517467458]
We propose a novel adversarial backdoor attack against trajectory prediction models.
Our attack affects the victim at training time via naturalistic, hence stealthy, poisoned samples crafted using a novel two-step approach.
We show that the proposed attack is highly effective, as it can significantly hinder the performance of prediction models.
arXiv Detail & Related papers (2023-06-27T19:15:06Z) - Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems [80.3811072650087]
We show that it is possible to subtly modify claim-salient snippets in the evidence and generate diverse and claim-aligned evidence.
The attacks are also robust against post-hoc modifications of the claim.
These attacks can have harmful implications on the inspectable and human-in-the-loop usage scenarios.
arXiv Detail & Related papers (2022-09-07T13:39:24Z) - Attack Rules: An Adversarial Approach to Generate Attacks for Industrial Control Systems using Machine Learning [7.205662414865643]
We propose an association rule mining-based attack generation technique.
The proposed technique generated more than 300,000 attack patterns, the vast majority of which were new attack vectors not seen before.
arXiv Detail & Related papers (2021-07-11T20:20:07Z)
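The association rule mining approach mentioned in the entry above can be sketched as follows: mine high-confidence rules from normal operation, then treat a state that satisfies a rule's antecedent while violating its consequent as a candidate attack. All process tags, data, and the `min_conf` threshold here are hypothetical illustrations, not values from the paper.

```python
from itertools import combinations
from collections import Counter

# Hypothetical discretized snapshots of normal plant operation.
normal = [
    {"MV101=open", "FIT101=high"},
    {"MV101=open", "FIT101=high"},
    {"MV101=closed", "FIT101=low"},
    {"MV101=open", "FIT101=high"},
]

def rules(records, min_conf=0.9):
    """Mine single-antecedent rules A => B with confidence >= min_conf."""
    item_count, pair_count = Counter(), Counter()
    for rec in records:
        for item in rec:
            item_count[item] += 1
        for a, b in combinations(sorted(rec), 2):
            pair_count[(a, b)] += 1  # count the rule in both directions
            pair_count[(b, a)] += 1
    return {(a, b): pair_count[(a, b)] / item_count[a]
            for (a, b) in pair_count
            if pair_count[(a, b)] / item_count[a] >= min_conf}

def attack_patterns(mined_rules):
    """Violating a strong rule's consequent yields a candidate attack."""
    return [(a, f"NOT {b}") for (a, b) in mined_rules]

r = rules(normal)
# "MV101=open" => "FIT101=high" holds with confidence 1.0 under normal
# operation, so ("MV101=open", "NOT FIT101=high") is a candidate attack
# pattern: inlet valve open while the flow reading is forced low.
```

Each candidate pattern still needs validation against the physical process, which is the gap the main paper's case study addresses.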
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.