Residual-Evasive Attacks on ADMM in Distributed Optimization
- URL: http://arxiv.org/abs/2504.18570v1
- Date: Tue, 22 Apr 2025 09:12:27 GMT
- Title: Residual-Evasive Attacks on ADMM in Distributed Optimization
- Authors: Sabrina Bruckmeier, Huadong Mo, James Qin
- Abstract summary: This paper presents two attack strategies designed to evade detection in ADMM-based systems. We show that our attacks remain undetected by keeping the residual largely unchanged. A comparison of the two strategies, along with commonly used naive attacks, reveals trade-offs between simplicity, detectability, and effectiveness.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents two attack strategies designed to evade detection in ADMM-based systems by preventing significant changes to the residual during the attacked iteration. While many detection algorithms focus on identifying false data injection through residual changes, we show that our attacks remain undetected by keeping the residual largely unchanged. The first strategy uses a random starting point combined with Gram-Schmidt orthogonalization to ensure stealth, with potential for refinement by enhancing the orthogonal component to increase system disruption. The second strategy builds on the first, targeting financial gain by manipulating reactive power and pushing the system to its upper voltage limit, exploiting operational constraints. The effectiveness of the proposed attack strategies is demonstrated through case studies on the IEEE 14-bus system. A comparison of the two strategies, along with commonly used naive attacks, reveals trade-offs between simplicity, detectability, and effectiveness, providing insights into ADMM system vulnerabilities. These findings underscore the need for more robust monitoring algorithms to protect against advanced attack strategies.
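The core of the first strategy can be sketched in a few lines: project a random starting vector onto the orthogonal complement of the residual direction (one Gram-Schmidt step), so the injected perturbation leaves the residual essentially unchanged. This is an illustrative numpy sketch under simplifying assumptions (a single residual vector, unit-norm attack); the function name, dimensions, and values are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_evasive_attack(residual, scale=1.0, rng=rng):
    """Build a perturbation orthogonal to `residual` via one Gram-Schmidt step."""
    r = residual / np.linalg.norm(residual)   # unit residual direction
    a = rng.standard_normal(residual.shape)   # random starting point
    a = a - (a @ r) * r                       # remove the component along r
    return scale * a / np.linalg.norm(a)      # unit-norm orthogonal attack

residual = np.array([1.0, 2.0, -1.0, 0.5])
attack = residual_evasive_attack(residual)
# attack @ residual is zero up to floating-point error
```

Growing `scale` along this orthogonal direction increases the disruption while, to first order, leaving a residual-based detector's check untouched.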
Related papers
- Hierarchical Entropy Disruption for Ransomware Detection: A Computationally-Driven Framework [0.0]
Monitoring entropy variations offers an alternative approach to identifying unauthorized data modifications. A framework leveraging hierarchical entropy disruption was introduced to analyze deviations in entropy distributions. Evaluating the framework across multiple ransomware variants demonstrated its capability to achieve high detection accuracy.
arXiv Detail & Related papers (2025-02-12T23:29:06Z) - LLM-based Continuous Intrusion Detection Framework for Next-Gen Networks [0.7100520098029439]
The framework employs a transformer encoder architecture, which captures hidden patterns in a bidirectional manner to differentiate between malicious and legitimate traffic.
The system incrementally identifies unknown attack types by leveraging a Gaussian Mixture Model (GMM) to cluster features derived from high-dimensional BERT embeddings.
Even after integrating additional unknown attack clusters, the framework continues to perform at a high level, achieving 95.6% in both classification accuracy and recall.
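The unknown-attack clustering step can be illustrated with a toy density check: score samples under a Gaussian mixture and flag points whose likelihood falls below anything seen in training. The sketch below is numpy-only with fixed toy GMM parameters (the means, variance, and weights are invented; no EM fitting or BERT embeddings are involved).

```python
import numpy as np

rng = np.random.default_rng(1)

means = np.array([[0.0] * 8, [3.0] * 8])   # toy component means
sigma2 = 0.09                              # shared spherical variance
weights = np.array([0.5, 0.5])             # equal mixture weights

def gmm_loglik(x):
    """Log-likelihood of each row of x under the spherical two-component GMM."""
    d = x.shape[1]
    sq = ((x[:, None, :] - means[None]) ** 2).sum(-1)            # (n, k) squared distances
    comp = -0.5 * sq / sigma2 - 0.5 * d * np.log(2 * np.pi * sigma2)
    return np.logaddexp.reduce(comp + np.log(weights), axis=1)   # mixture log-density

known = np.vstack([rng.normal(0, 0.3, (200, 8)), rng.normal(3, 0.3, (200, 8))])
unknown = rng.normal(-4, 0.3, (10, 8))     # far from both components
threshold = gmm_loglik(known).min()        # crude in-distribution bound
flags = gmm_loglik(unknown) < threshold    # candidate unknown attack types
```

Points from a region the mixture has never modeled score far below the in-distribution threshold, which is the intuition behind incrementally spawning a new cluster for them.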
arXiv Detail & Related papers (2024-11-04T18:12:14Z) - Slot: Provenance-Driven APT Detection through Graph Reinforcement Learning [24.84110719035862]
Advanced Persistent Threats (APTs) represent sophisticated cyberattacks characterized by their ability to remain undetected for extended periods. We propose Slot, an advanced APT detection approach based on provenance graphs and graph reinforcement learning. We show Slot's outstanding accuracy, efficiency, adaptability, and robustness in APT detection, with most metrics surpassing state-of-the-art methods.
arXiv Detail & Related papers (2024-10-23T14:28:32Z) - FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state-of-the-art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z) - Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
arXiv Detail & Related papers (2024-01-03T13:58:35Z) - Variation Enhanced Attacks Against RRAM-based Neuromorphic Computing System [14.562718993542964]
We propose two types of hardware-aware attack methods with respect to different attack scenarios and objectives.
The first is adversarial attack, VADER, which perturbs the input samples to mislead the prediction of neural networks.
The second is fault injection attack, EFI, which perturbs the network parameter space such that a specified sample will be classified to a target label.
arXiv Detail & Related papers (2023-02-20T10:57:41Z) - Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
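G-PGA's surrogate guidance cannot be reconstructed from the summary alone, but the projected-gradient family it refines is easy to sketch on a toy objective (the quadratic loss and all parameter values below are invented for illustration):

```python
import numpy as np

def pgd_attack(x0, grad_fn, eps=0.5, step=0.1, iters=40):
    """Maximize a loss within an L-infinity ball of radius eps around x0."""
    x = x0.copy()
    for _ in range(iters):
        x = x + step * np.sign(grad_fn(x))   # gradient-sign ascent step
        x = np.clip(x, x0 - eps, x0 + eps)   # project back into the ball
    return x

x0 = np.zeros(3)
adv = pgd_attack(x0, grad_fn=lambda x: x - 2.0)   # gradient of 0.5*||x - 2||^2
```

G-PGA replaces the random restarts and step-size search that this vanilla loop typically needs with guidance from a surrogate.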
arXiv Detail & Related papers (2022-12-30T18:45:23Z) - Audio Anti-spoofing Using a Simple Attention Module and Joint Optimization Based on Additive Angular Margin Loss and Meta-learning [43.519717601587864]
This study introduces a simple attention module to infer 3-dim attention weights for the feature map in a convolutional layer.
We propose a joint optimization approach based on the weighted additive angular margin loss for binary classification.
Our proposed approach delivers a competitive result with a pooled EER of 0.99% and min t-DCF of 0.0289.
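The weighted additive angular margin loss is not spelled out in this summary; the standard AAM-softmax form it builds on can be sketched for the binary case as follows (the margin and scale values are illustrative defaults, not the paper's settings):

```python
import math

def aam_binary_loss(cos_target, cos_other, margin=0.2, scale=15.0):
    """Cross-entropy over scaled cosine logits with an additive angular
    margin applied to the target class (standard AAM-softmax, binary case)."""
    theta = math.acos(max(-1.0, min(1.0, cos_target)))
    logit_t = scale * math.cos(theta + margin)   # penalized target logit
    logit_o = scale * cos_other
    m = max(logit_t, logit_o)                    # stable log-sum-exp
    lse = m + math.log(math.exp(logit_t - m) + math.exp(logit_o - m))
    return -(logit_t - lse)                      # -log softmax(target)
```

The margin makes training stricter: a sample's loss only flattens out once its target-class cosine exceeds the other class by an angular gap of at least `margin`.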
arXiv Detail & Related papers (2022-11-17T21:25:29Z) - Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z) - Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z) - Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification [78.51092318750102]
This work proposes to defend ASV systems against adversarial attacks with a separate detection network.
A VGG-like binary classification detector is introduced and demonstrated to be effective on detecting adversarial samples.
arXiv Detail & Related papers (2020-06-11T04:31:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.