Optimizing Preventive and Reactive Defense Resource Allocation with Uncertain Sensor Signals
- URL: http://arxiv.org/abs/2508.02881v2
- Date: Thu, 07 Aug 2025 03:34:04 GMT
- Title: Optimizing Preventive and Reactive Defense Resource Allocation with Uncertain Sensor Signals
- Authors: Faezeh Shojaeighadikolaei, Shouhuai Xu, Keith Paarporn
- Abstract summary: We show that the optimal investment in preventive resources increases, and thus reactive resource investment decreases, with higher sensor quality. We also show that the defender's performance improvement, relative to a baseline of no sensors employed, is maximal when the attacker can only achieve low attack success probabilities.
- Score: 6.243678490046079
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cyber attacks continue to be a cause of concern despite advances in cyber defense techniques. Although cyber attacks cannot be fully prevented, standard decision-making frameworks typically focus on how to prevent them from succeeding, without considering the cost of cleaning up the damage incurred by successful attacks. This motivates us to investigate a new resource allocation problem formulated in this paper: the defender must decide how to split its investment between preventive defenses, which aim to harden nodes against attacks, and reactive defenses, which aim to quickly clean up compromised nodes. The problem is complicated by uncertainty in the observation, or sensor signal, of whether a node is truly compromised; this uncertainty is real because attack detectors are not perfect. We investigate how the quality of sensor signals impacts the defender's strategic investment in the two types of defense, and ultimately the level of security that can be achieved. In particular, we show that the optimal investment in preventive resources increases, and thus reactive resource investment decreases, with higher sensor quality. We also show that the defender's performance improvement, relative to a baseline of no sensors employed, is maximal when the attacker can only achieve low attack success probabilities.
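To make the trade-off concrete, below is a minimal numerical sketch of the budget-split problem. All functional forms (exponential hardening, a single detection probability q standing in for sensor quality, and the cost constants) are illustrative assumptions rather than the paper's model, so the direction of the trend in the output need not match the paper's comparative statics.

```python
import numpy as np

# Toy model (assumed forms, not the paper's): a defender splits a unit
# budget into a preventive share x and a reactive share 1 - x. A sensor
# flags a compromised node with probability q; flagged nodes are cleaned
# up, missed ones incur the full damage.

def expected_loss(x, q, base_attack_prob=0.4):
    """Expected loss for preventive share x in [0, 1] and sensor quality q."""
    p_success = base_attack_prob * np.exp(-3.0 * x)   # hardening effect (assumed exponential)
    cleanup_cost = 1.0 / (0.1 + (1.0 - x))            # cleanup is faster with more reactive budget
    return p_success * (q * cleanup_cost + (1.0 - q) * 10.0)

for q in (0.6, 0.8, 0.95):
    xs = np.linspace(0.0, 1.0, 201)
    best = xs[np.argmin([expected_loss(x, q) for x in xs])]
    print(f"sensor quality {q:.2f}: optimal preventive share ~ {best:.2f}")
```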
Related papers
- To Defend Against Cyber Attacks, We Must Teach AI Agents to Hack [14.333336222782856]
AI agents automate vulnerability discovery and exploitation across thousands of targets. Current developers focus on preventing misuse through data filtering, safety alignment, and output guardrails. We argue that AI-agent-driven cyber attacks are inevitable, requiring a fundamental shift in defensive strategy.
arXiv Detail & Related papers (2026-02-01T12:37:55Z) - The Attacker Moves Second: Stronger Adaptive Attacks Bypass Defenses Against LLM Jailbreaks and Prompt Injections [74.60337113759313]
Current defenses against jailbreaks and prompt injections are typically evaluated against a static set of harmful attack strings. We argue that this evaluation process is flawed. Instead, we should evaluate defenses against adaptive attackers who explicitly modify their attack strategy to counter a defense's design.
arXiv Detail & Related papers (2025-10-10T05:51:04Z) - Benchmarking Misuse Mitigation Against Covert Adversaries [80.74502950627736]
Existing language model safety evaluations focus on overt attacks and low-stakes tasks. We develop Benchmarks for Stateful Defenses (BSD), a data generation pipeline that automates evaluations of covert attacks and corresponding defenses. Our evaluations indicate that decomposition attacks are effective misuse enablers, and highlight stateful defenses as a countermeasure.
arXiv Detail & Related papers (2025-06-06T17:33:33Z) - A Quantal Response Analysis of Defender-Attacker Sequential Security Games [1.3022753212679383]
We explore a scenario involving two sites and a sequential game between a defender and an attacker.
The attacker's objective is to target the site that maximizes the expected loss for the defender, taking into account the defender's security investments.
We consider quantal behavioral bias, where humans make errors in selecting efficient (pure) strategies.
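The quantal response referenced here is the standard logit form: each action is chosen with probability proportional to its exponentiated, scaled utility, so the attacker usually, but not always, targets the higher-loss site. A minimal sketch (the rationality parameter lam is our notation, not the paper's):

```python
import math

def quantal_response(utilities, lam):
    """Logit choice probabilities: P(i) proportional to exp(lam * u_i).
    lam = 0 gives uniform (fully noisy) choice; lam -> infinity
    recovers the perfectly rational best response."""
    weights = [math.exp(lam * u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

losses = [5.0, 3.0]   # defender's expected loss at sites A and B (made-up numbers)
for lam in (0.0, 0.5, 5.0):
    print(lam, [round(p, 3) for p in quantal_response(losses, lam)])
```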
arXiv Detail & Related papers (2024-08-02T00:40:48Z) - Fast Preemption: Forward-Backward Cascade Learning for Efficient and Transferable Preemptive Adversarial Defense [13.252842556505174]
Fast Preemption is a novel preemptive adversarial defense that overcomes efficiency challenges while achieving state-of-the-art robustness and transferability. Executing in just three iterations, Fast Preemption outperforms existing training-time, test-time, and preemptive defenses.
arXiv Detail & Related papers (2024-07-22T10:23:44Z) - FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
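The summary names online adversarial training without detailing it; the following schematic shows the generic pattern on a toy logistic model (not the FaultGuard architecture): perturb each batch against the current parameters in an FGSM-style step, then update on the perturbed batch.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))            # stand-in training data
y = (X[:, 0] > 0).astype(float)          # toy labels
w = np.zeros(8)                          # logistic-regression weights

for _ in range(100):
    p = 1.0 / (1.0 + np.exp(-X @ w))            # current predictions
    grad_x = (p - y)[:, None] * w               # dLoss/dX under the current model
    X_adv = X + 0.1 * np.sign(grad_x)           # craft perturbed inputs online
    p_adv = 1.0 / (1.0 + np.exp(-X_adv @ w))
    w -= 0.1 * X_adv.T @ (p_adv - y) / len(y)   # update on the adversarial batch

print("train accuracy:", ((X @ w > 0) == (y > 0.5)).mean())
```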
arXiv Detail & Related papers (2024-03-26T08:51:23Z) - OASIS: Offsetting Active Reconstruction Attacks in Federated Learning [14.644814818768172]
Federated Learning (FL) has garnered significant attention for its potential to protect user privacy.
Recent research has demonstrated that FL protocols can be easily compromised by active reconstruction attacks.
We propose a defense mechanism based on image augmentation that effectively counteracts active reconstruction attacks.
arXiv Detail & Related papers (2023-11-23T00:05:17Z) - Stealthy Backdoor Attack via Confidence-driven Sampling [49.72680157684523]
Backdoor attacks aim to surreptitiously insert malicious triggers into DNN models, granting unauthorized control during testing scenarios. Existing methods lack robustness against defense strategies and predominantly focus on enhancing trigger stealthiness while randomly selecting poisoned samples. We introduce a straightforward yet highly effective sampling methodology that leverages confidence scores. Specifically, it selects samples with lower confidence scores, significantly increasing the challenge for defenders in identifying and countering these attacks.
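As described, the selection rule reduces to ranking training samples by the model's confidence in the true label and poisoning the least confident ones. A minimal sketch of that step on synthetic stand-in data (our reading of the summary, not the authors' code):

```python
import numpy as np

def select_low_confidence(probs, labels, poison_budget):
    """probs: (N, C) softmax outputs; labels: (N,) true class indices.
    Returns indices of the poison_budget least-confident samples."""
    confidence = probs[np.arange(len(labels)), labels]
    return np.argsort(confidence)[:poison_budget]

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=100)   # fake softmax outputs, 10 classes
labels = rng.integers(0, 10, size=100)
print(select_low_confidence(probs, labels, poison_budget=5))
```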
arXiv Detail & Related papers (2023-10-08T18:57:36Z) - Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA)
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step-size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
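For context, the base attack that G-PGA modifies is projected gradient descent (PGD); a plain PGD loop is sketched below. The surrogate-guidance step that distinguishes G-PGA is not described in the summary and is therefore omitted.

```python
import numpy as np

def pgd(x0, grad_fn, eps=0.03, step=0.01, iters=40):
    """Plain L-inf PGD: ascend the loss, then project back into the
    epsilon-ball around x0 and the valid pixel range."""
    x = x0.copy()
    for _ in range(iters):
        x = x + step * np.sign(grad_fn(x))
        x = np.clip(x, x0 - eps, x0 + eps)
        x = np.clip(x, 0.0, 1.0)
    return x

# grad_fn would return dLoss/dx for the target model; a dummy gradient here.
x_adv = pgd(np.full((8, 8), 0.5), grad_fn=lambda x: np.ones_like(x))
print(np.abs(x_adv - 0.5).max())   # bounded by eps
```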
arXiv Detail & Related papers (2022-12-30T18:45:23Z) - GUARD: Graph Universal Adversarial Defense [54.81496179947696]
We present a simple yet effective method, named Graph Universal Adversarial Defense (GUARD)
GUARD protects each individual node from attacks with a universal defensive patch, which is generated once and can be applied to any node in a graph.
GUARD significantly improves robustness for several established GCNs against multiple adversarial attacks and outperforms state-of-the-art defense methods by large margins.
arXiv Detail & Related papers (2022-04-20T22:18:12Z) - RobustSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition [37.387265457439476]
We propose a novel learning framework, RobustSense, to defend against common adversarial attacks.
Our method works well on wireless human activity recognition and person identification systems.
arXiv Detail & Related papers (2022-04-04T15:06:03Z) - Mitigating Gradient-based Adversarial Attacks via Denoising and Compression [7.305019142196582]
Gradient-based adversarial attacks on deep neural networks pose a serious threat.
They can be deployed by adding imperceptible perturbations to the test data of any network.
Denoising and dimensionality reduction are two distinct methods that have been investigated to combat such attacks.
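Both families amount to simple input transforms applied before classification. A generic sketch of each, with a Gaussian filter for denoising and a PCA round-trip for dimensionality reduction (illustrative choices, not the paper's exact pipeline):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA

def denoise(images, sigma=1.0):
    """Smooth away high-frequency perturbations with a Gaussian filter."""
    return np.stack([gaussian_filter(img, sigma) for img in images])

def pca_compress(images, n_components=32):
    """Project onto a low-dimensional PCA basis and back, discarding the
    high-frequency components adversarial perturbations tend to occupy."""
    flat = images.reshape(len(images), -1)
    pca = PCA(n_components=n_components).fit(flat)
    return pca.inverse_transform(pca.transform(flat)).reshape(images.shape)

imgs = np.random.rand(100, 16, 16)   # stand-in batch of test images
print(denoise(imgs).shape, pca_compress(imgs).shape)
```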
arXiv Detail & Related papers (2021-04-03T22:57:01Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)