Model Predictive Control with adaptive resilience for Denial-of-Service Attacks mitigation on a Regulated Dam
- URL: http://arxiv.org/abs/2402.18516v1
- Date: Wed, 28 Feb 2024 17:47:27 GMT
- Title: Model Predictive Control with adaptive resilience for Denial-of-Service Attacks mitigation on a Regulated Dam
- Authors: Raffaele Giuseppe Cestari, Stefano Longari, Stefano Zanero, Simone Formentin
- Abstract summary: SCADA (Supervisory Control and Data Acquisition) systems have increasingly become the target of cyber attacks.
In a cyber-warfare context, we propose a Model Predictive Control architecture with adaptive resilience.
We demonstrate the resulting MPC strategy's effectiveness in two attack scenarios on a real system with actual data.
- Score: 5.32980262772932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, SCADA (Supervisory Control and Data Acquisition) systems have increasingly become the target of cyber attacks. SCADA systems are no longer isolated, as web-based applications expose strategic infrastructures to the outside world. In a cyber-warfare context, we propose a Model Predictive Control (MPC) architecture with adaptive resilience, capable of guaranteeing control performance under normal operating conditions and of driving the system towards resilience against DoS (controller-actuator) attacks when needed. Since the attackers' goal is typically to maximize system damage, we assume they solve an adversarial optimal control problem. An adaptive resilience factor is then designed as a function of the conditional intensity of a Hawkes process, a point process model of the occurrence of random events in time, trained on a moving window to estimate the return time of the next attack. We demonstrate the resulting MPC strategy's effectiveness in two attack scenarios on a real system with actual data, the regulated Olginate dam of Lake Como.
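To make the mechanism concrete, here is a minimal sketch of how a Hawkes-intensity-driven resilience factor could be computed. The exponential kernel, the moving window, the mapping from intensity to a factor in (0, 1], and all names and parameter values (`hawkes_intensity`, `resilience_factor`, `mu`, `alpha`, `beta`, `gamma`) are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def hawkes_intensity(t, attack_times, mu=0.05, alpha=0.8, beta=1.0):
    """Conditional intensity of an exponential-kernel Hawkes process:
    lambda(t) = mu + sum over past attacks t_i of alpha * exp(-beta * (t - t_i)).
    Kernel and parameters are illustrative, not the paper's."""
    past = attack_times[attack_times < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

def resilience_factor(t, attack_times, window=24.0, gamma=2.0):
    """Map the intensity, estimated on a moving window of recent attack
    times, to a weight in (0, 1]: close to 1 when the process is quiet
    (favor nominal performance), close to 0 when attacks are expected
    soon (favor resilience)."""
    recent = attack_times[(attack_times >= t - window) & (attack_times < t)]
    lam = hawkes_intensity(t, recent)
    return float(np.exp(-gamma * lam))

attacks = np.array([3.0, 7.5, 8.0])     # past DoS timestamps (hours), made up
rho = resilience_factor(10.0, attacks)  # near 1 if quiet, near 0 after bursts
```

In such a scheme, the factor would blend the two objectives inside the MPC cost, e.g. J = rho * J_performance + (1 - rho) * J_resilience, so the controller shifts smoothly between nominal tracking and attack-resilient behavior; the exact blending used in the paper may differ.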
Related papers
- Defense against Joint Poison and Evasion Attacks: A Case Study of DERMS [2.632261166782093]
We propose the first intrusion detection system (IDS) framework that is robust against joint poisoning and evasion attacks.
We verify the robustness of our method on the IEEE-13 bus feeder model against a diverse set of poisoning and evasion attack scenarios.
arXiv Detail & Related papers (2024-05-05T16:24:30Z)
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness (a generic sketch follows this entry).
Our model outperforms the state of the art in resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
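As a point of reference for the online adversarial training mentioned in the FaultGuard entry above, here is a generic sketch of one common realization: FGSM-perturbed minibatches mixed into the training loop. This is not FaultGuard's actual procedure; the function name and the `eps` budget are assumptions.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=0.05):
    """One online step: craft an FGSM perturbation of the incoming batch,
    then train on the clean and perturbed samples together."""
    x_pert = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x_pert), y), x_pert)
    x_adv = (x_pert + eps * grad.sign()).detach()  # FGSM: step along gradient sign

    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```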
- Adversarial Markov Games: On Adaptive Decision-Based Attacks and Defenses [21.759075171536388]
We show how both attacks and defenses can benefit from this game-theoretic view and from learning from each other through interaction.
We demonstrate that active defenses, which control how the system responds, are a necessary complement to model hardening when facing decision-based attacks.
We lay out effective strategies for ensuring the robustness of ML-based systems deployed in the real world.
arXiv Detail & Related papers (2023-12-20T21:24:52Z)
- Embodied Laser Attack: Leveraging Scene Priors to Achieve Agent-based Robust Non-contact Attacks [13.726534285661717]
This paper introduces the Embodied Laser Attack (ELA), a novel framework that dynamically tailors non-contact laser attacks.
For the perception module, ELA develops a local perspective transformation network based on the intrinsic prior knowledge of traffic scenes.
For the decision and control module, ELA trains an attack agent with data-driven reinforcement learning instead of adopting time-consuming algorithms.
arXiv Detail & Related papers (2023-12-15T06:16:17Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
However, FL is vulnerable to poisoning attacks that undermine model integrity through both untargeted performance degradation and targeted backdoors.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
Our defense, MESAS, is the first that is robust against strong adaptive adversaries and effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- DODEM: DOuble DEfense Mechanism Against Adversarial Attacks Towards Secure Industrial Internet of Things Analytics [8.697883716452385]
We propose a double defense mechanism to detect and mitigate adversarial attacks in I-IoT environments.
We first detect whether a given sample is adversarial using novelty detection algorithms.
If an attack is detected, adversarial retraining yields a more robust model; otherwise, standard training is applied (see the sketch after this entry).
arXiv Detail & Related papers (2023-01-23T22:10:40Z)
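A hedged sketch of the detect-then-mitigate flow described in the DODEM entry above. The choice of IsolationForest as the novelty detector and the `adversarial_retrain`/`standard_train` hooks are our assumptions for illustration, not the paper's actual components.

```python
from sklearn.ensemble import IsolationForest

class DoubleDefense:
    """First line of defense: a novelty detector screens incoming samples.
    Second line: flagged batches trigger adversarial retraining, while
    clean batches go through standard training."""

    def __init__(self, model, attack_free_data):
        self.model = model
        # Detector fitted on attack-free data; algorithm choice is illustrative.
        self.detector = IsolationForest(random_state=0).fit(attack_free_data)

    def process(self, x_batch, y_batch):
        flags = self.detector.predict(x_batch)  # -1 = novelty (suspected attack)
        if (flags == -1).any():
            self.model.adversarial_retrain(x_batch, y_batch)  # hypothetical hook
        else:
            self.model.standard_train(x_batch, y_batch)       # hypothetical hook
```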
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose adversarial training to increase the robustness of RL agents against attacks and to avoid infeasible operational decisions (a minimal sketch follows this entry).
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
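The entry above describes learning an attack policy via an adversary Markov Decision Process and then hardening the agent with adversarial training. A minimal gym-style sketch of such an alternating loop follows; the `act`/`learn` interfaces, the zero-sum reward, and the attack probability are all assumptions for illustration.

```python
import random

def adversarial_rl_training(env, agent, adversary, episodes=1000, attack_prob=0.3):
    """Alternately train a control agent and an adversary that intermittently
    hijacks the action channel; the adversary's reward is the negated system
    reward, so it learns to maximize damage (an adversary MDP)."""
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            action = agent.act(obs)
            attacked = random.random() < attack_prob
            if attacked:
                action = adversary.act(obs)  # adversary overrides the actuator
            next_obs, reward, done, _ = env.step(action)
            agent.learn(obs, action, reward, next_obs)
            if attacked:
                adversary.learn(obs, action, -reward, next_obs)  # zero-sum
            obs = next_obs
```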
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates guaranteeing that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input (sketched below).
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
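Policy smoothing adapts randomized smoothing to RL: Gaussian noise is injected into every observation, and the certificate bounds how much the expected return can drop under a norm-bounded input perturbation. Below is a minimal sketch of estimating the smoothed return; the noise scale, sample count, and gym-style interface are assumptions.

```python
import numpy as np

def smoothed_return(env, policy, sigma=0.1, n_rollouts=100):
    """Monte-Carlo estimate of the expected return of the noise-smoothed
    policy: every observation is perturbed with isotropic Gaussian noise
    before the policy acts, which is what enables certification."""
    returns = []
    for _ in range(n_rollouts):
        obs, total, done = env.reset(), 0.0, False
        while not done:
            noisy_obs = obs + sigma * np.random.randn(*np.shape(obs))
            obs, reward, done, _ = env.step(policy(noisy_obs))
            total += reward
        returns.append(total)
    return float(np.mean(returns))
```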
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths; the method is trained to align these features automatically.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- An RL-Based Adaptive Detection Strategy to Secure Cyber-Physical Systems [0.0]
Increased dependence on software-based control has escalated the vulnerabilities of cyber-physical systems.
We propose a Reinforcement Learning (RL) based framework that adaptively sets the parameters of such detectors based on experience learned from attack scenarios (a toy sketch follows this entry).
arXiv Detail & Related papers (2021-03-04T07:38:50Z)
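A toy tabular Q-learning sketch of the idea in the final entry, where the "action" is choosing a detection threshold from experience. The `detector_env` interface, hashable states, and a reward trading true detections against false alarms are all assumptions, not the paper's framework.

```python
import random
from collections import defaultdict

def tune_detector(detector_env, thresholds, episodes=500,
                  alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning over a discrete set of detector thresholds.
    detector_env.step(threshold) is an assumed interface returning
    (next_state, reward, done), with reward rewarding true detections
    and penalizing false alarms."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = detector_env.reset(), False
        while not done:
            if random.random() < eps:
                th = random.choice(thresholds)                     # explore
            else:
                th = max(thresholds, key=lambda a: Q[(state, a)])  # exploit
            next_state, reward, done = detector_env.step(th)
            best_next = max(Q[(next_state, a)] for a in thresholds)
            Q[(state, th)] += alpha * (reward + gamma * best_next - Q[(state, th)])
            state = next_state
    return Q
```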
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.