Impact of Data Poisoning Attacks on Feasibility and Optimality of Neural Power System Optimizers
- URL: http://arxiv.org/abs/2502.05727v1
- Date: Sun, 09 Feb 2025 00:06:10 GMT
- Title: Impact of Data Poisoning Attacks on Feasibility and Optimality of Neural Power System Optimizers
- Authors: Nora Agah, Meiyi Li, Javad Mohammadi
- Abstract summary: In this paper, we examine the impact of data poisoning attacks on ML-based optimization proxies that are used to solve the DC Optimal Power Flow problem.
Specifically, we compare the resilience of three different methods (a penalty-based method, a post-repair approach, and a direct mapping approach) against the adverse effects of poisoning attacks.
- Abstract: The increased integration of clean yet stochastic energy resources and the growing number of extreme weather events are narrowing the decision-making window of power grid operators. This time constraint is fueling a plethora of research on Machine Learning (ML) based optimization proxies. While finding a fast solution is appealing, the inherent vulnerabilities of learning-based methods are hindering their adoption. One of these vulnerabilities is data poisoning attacks, which add perturbations to ML training data, leading to incorrect decisions. The impact of poisoning attacks on learning-based power system optimizers has not been thoroughly studied, which creates a critical vulnerability. In this paper, we examine the impact of data poisoning attacks on ML-based optimization proxies that are used to solve the DC Optimal Power Flow problem. Specifically, we compare the resilience of three different methods (a penalty-based method, a post-repair approach, and a direct mapping approach) against the adverse effects of poisoning attacks. We use the optimality and feasibility of these proxies as performance metrics. The insights of this work will establish a foundation for enhancing the resilience of neural power system optimizers.
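To ground the comparison, below is a minimal sketch of how a penalty-based DC-OPF proxy might be trained; the network size, penalty weight, and synthetic data are illustrative assumptions, not the authors' implementation. A poisoning attack in this setting would perturb the (load, dispatch) training pairs before this loop runs.

```python
# Minimal sketch of a penalty-based DC-OPF proxy (hypothetical setup,
# not the paper's implementation). A network maps nodal loads to a
# generator dispatch; constraint violations are penalized in the loss.
import torch
import torch.nn as nn

n_bus, n_gen = 14, 5
proxy = nn.Sequential(nn.Linear(n_bus, 64), nn.ReLU(), nn.Linear(64, n_gen))
opt = torch.optim.Adam(proxy.parameters(), lr=1e-3)
rho = 10.0  # penalty weight on the power-balance residual

for _ in range(200):
    loads = torch.rand(32, n_bus)    # synthetic load samples
    p_star = torch.rand(32, n_gen)   # stand-in for solver-optimal dispatch
    p_hat = proxy(loads)
    # Soft power-balance constraint: total generation = total load.
    balance = (p_hat.sum(dim=1) - loads.sum(dim=1)) ** 2
    loss = ((p_hat - p_star) ** 2).mean() + rho * balance.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Roughly, a direct mapping approach would drop the penalty term, while a post-repair approach would instead restore feasibility of `p_hat` after inference.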
Related papers
- RESIST: Resilient Decentralized Learning Using Consensus Gradient Descent [11.22833419439317]
Empirical risk minimization (ERM) is a cornerstone of modern machine learning (ML).
This paper focuses on the man-in-the-middle (MITM) attack, which can cause models to deviate significantly from their intended ERM solutions.
We propose RESIST, an algorithm designed to be robust against adversarially compromised communication links.
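For intuition, a coordinate-wise trimmed mean is one generic way a decentralized learner can tolerate a bounded number of corrupted messages; this is an illustrative sketch of robust aggregation in general, not the RESIST algorithm itself.

```python
# Generic robust-aggregation sketch for decentralized learning under
# compromised links (illustrative only; not the RESIST algorithm).
import numpy as np

def trimmed_mean(neighbor_params: np.ndarray, trim: int) -> np.ndarray:
    """Coordinate-wise trimmed mean over neighbors' parameter vectors.

    neighbor_params has shape (n_neighbors, n_params); `trim` extreme
    values are discarded at each end of every coordinate.
    """
    sorted_p = np.sort(neighbor_params, axis=0)
    kept = sorted_p[trim:neighbor_params.shape[0] - trim]
    return kept.mean(axis=0)

# A MITM-corrupted message is ignored as long as the number of
# compromised links stays within the trim budget:
params = np.vstack([np.ones((4, 3)), 100 * np.ones((1, 3))])
print(trimmed_mean(params, trim=1))  # -> [1. 1. 1.]
```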
arXiv Detail & Related papers (2025-02-11T21:48:10Z) - Confident or Seek Stronger: Exploring Uncertainty-Based On-device LLM Routing From Benchmarking to Generalization [61.02719787737867]
Large language models (LLMs) are increasingly deployed and democratized on edge devices.
One promising solution is uncertainty-based SLM routing, which offloads high-stakes queries to stronger LLMs when the SLM produces low-confidence responses.
We conduct a comprehensive investigation into benchmarking and generalization of uncertainty-driven routing strategies from SLMs to LLMs over 1500+ settings.
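A minimal sketch of such a routing rule follows; `slm_predict` and `llm_predict` are hypothetical stand-ins, and max-probability confidence is one assumed uncertainty measure among the many the paper benchmarks.

```python
# Sketch of uncertainty-based routing from a small on-device model (SLM)
# to a stronger LLM. The predictors and threshold are hypothetical.
from typing import Callable, Tuple

def route(query: str,
          slm_predict: Callable[[str], Tuple[str, float]],
          llm_predict: Callable[[str], str],
          threshold: float = 0.8) -> str:
    answer, confidence = slm_predict(query)  # cheap on-device attempt
    if confidence >= threshold:
        return answer                        # confident: keep SLM answer
    return llm_predict(query)                # uncertain: offload to LLM
```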
arXiv Detail & Related papers (2025-02-06T18:59:11Z) - Privacy-Preserving Distributed Learning for Residential Short-Term Load Forecasting [11.185176107646956]
Power system load data can inadvertently reveal the daily routines of residential users, posing a risk to their property security.
We introduce a Markovian Switching-based distributed training framework, the convergence of which is substantiated through rigorous theoretical analysis.
Case studies employing real-world power system load data validate the efficacy of our proposed algorithm.
arXiv Detail & Related papers (2024-02-02T16:39:08Z) - Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z) - Enhancing Adversarial Robustness via Score-Based Optimization [22.87882885963586]
Adversarial attacks have the potential to mislead deep neural network classifiers by introducing slight perturbations.
We introduce a novel adversarial defense scheme named ScoreOpt, which optimizes adversarial samples at test time.
Our experimental results demonstrate that our approach outperforms existing adversarial defenses in terms of both robustness performance and inference speed.
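A generic test-time purification loop conveys the idea: take ascent steps along an approximate data score to pull an adversarial input back toward high-density regions before classification. Here `score_model` is a hypothetical stand-in for a learned score network, and the loop is a simplified sketch rather than the exact ScoreOpt procedure.

```python
# Simplified score-guided purification sketch (not the exact ScoreOpt
# optimization). score_model(x) approximates the data score,
# i.e. the gradient of log p(x) with respect to x.
import torch

@torch.no_grad()
def purify(x_adv: torch.Tensor, score_model, steps: int = 10,
           step_size: float = 0.01) -> torch.Tensor:
    x = x_adv.clone()
    for _ in range(steps):
        x = x + step_size * score_model(x)  # ascend toward higher density
    return x  # purified input is then fed to the downstream classifier
```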
arXiv Detail & Related papers (2023-07-10T03:59:42Z) - Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization [3.3181276611945263]
Machine Learning (ML) algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to deliberately degrade the algorithms' performance.
Optimal attacks can be formulated as bilevel optimization problems, which help to assess the robustness of learning algorithms in worst-case scenarios.
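In the standard bilevel formulation from the poisoning literature (generic notation, not transcribed from the paper), the attacker maximizes validation loss while the learner solves its regularized training problem:

```latex
\max_{D_p \in \Phi} \; \mathcal{L}\bigl(D_{\mathrm{val}}, \theta^\star\bigr)
\quad \text{s.t.} \quad
\theta^\star \in \arg\min_{\theta} \; \mathcal{L}\bigl(D_{\mathrm{tr}} \cup D_p, \theta\bigr) + \lambda\, \Omega(\theta)
```

Here D_p is the set of poisoning points constrained to a feasible set \Phi, and \lambda \Omega(\theta) is the regularization term whose influence on robustness the paper analyzes.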
arXiv Detail & Related papers (2023-06-02T15:21:05Z) - Robust Trajectory Prediction against Adversarial Attacks [84.10405251683713]
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving systems.
These methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions.
In this work, we identify two key ingredients to defend trajectory prediction models against adversarial attacks.
arXiv Detail & Related papers (2022-07-29T22:35:05Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms by applying adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
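As a reference point, a generic FGSM-style adversarial training step is sketched below; this is the standard recipe, with the paper's jet-tagging models and attack budgets not reproduced here.

```python
# Generic FGSM adversarial training step (standard recipe; hypothetical
# hyperparameters, not the paper's jet-tagging configuration).
import torch
import torch.nn.functional as F

def adversarial_step(model, x, y, optimizer, eps=0.01):
    # Craft a loss-increasing perturbation of the inputs (FGSM).
    x_req = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_req), y)
    grad = torch.autograd.grad(loss, x_req)[0]
    x_adv = (x + eps * grad.sign()).detach()

    # Train on the perturbed inputs so the model resists the attack.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```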
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z) - Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed that utilizes Bayesian Optimization.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
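A hedged sketch of the general recipe: tune a detector's hyperparameters with Gaussian-process Bayesian optimization. The random-forest model, search space, and placeholder data are assumptions for illustration; the paper's own experiments tune several classifiers on ISCX 2012.

```python
# Bayesian-optimization hyperparameter tuning sketch for an anomaly
# detector (illustrative; not the paper's exact models or dataset).
import numpy as np
from skopt import gp_minimize
from skopt.space import Integer, Real
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 10)        # placeholder traffic features
y = np.random.randint(0, 2, 200)   # placeholder attack labels

def objective(params):
    n_estimators, max_features = params
    clf = RandomForestClassifier(n_estimators=n_estimators,
                                 max_features=max_features, random_state=0)
    # Negate F1 because gp_minimize minimizes its objective.
    return -cross_val_score(clf, X, y, cv=3, scoring="f1").mean()

space = [Integer(10, 200, name="n_estimators"),
         Real(0.1, 1.0, name="max_features")]
result = gp_minimize(objective, space, n_calls=15, random_state=0)
print(result.x, -result.fun)       # best hyperparameters and best F1
```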
arXiv Detail & Related papers (2020-08-05T19:29:35Z) - Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on
Multiobjective Bilevel Optimisation [3.3181276611945263]
Machine Learning (ML) algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to deliberately degrade the algorithms' performance.
Optimal poisoning attacks, which can be formulated as bilevel problems, help to assess the robustness of learning algorithms in worst-case scenarios.
We show that this approach leads to an overly pessimistic view of the robustness of the algorithms.
arXiv Detail & Related papers (2020-02-28T19:46:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.