Optimal Welfare in Noncooperative Network Formation under Attack
- URL: http://arxiv.org/abs/2511.10845v1
- Date: Thu, 13 Nov 2025 23:12:10 GMT
- Title: Optimal Welfare in Noncooperative Network Formation under Attack
- Authors: Natan Doubez, Pascal Lenzner, Marcus Wunderlich
- Abstract summary: Communication networks are essential for our economy and our everyday lives. These networks are not controlled by a single authority, but instead consist of many independently administrated entities. We show that networks created by selfish agents can resist attacks of a large class of potential attackers.
- Score: 5.279509789811735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Communication networks are essential for our economy and our everyday lives. This makes them lucrative targets for attacks. Today, we see an ongoing battle between criminals that try to disrupt our key communication networks and security professionals that try to mitigate these attacks. However, today's networks, like the Internet or peer-to-peer networks among smart devices, are not controlled by a single authority, but instead consist of many independently administrated entities that are interconnected. Thus, both the decisions of how to interconnect and how to secure against potential attacks are taken in a decentralized way by selfish agents. This strategic setting, with agents that want to interconnect and potential attackers that want to disrupt the network, was captured via an influential game-theoretic model by Goyal, Jabbari, Kearns, Khanna, and Morgenstern (WINE 2016). We revisit this model and show improved tight bounds on the achieved robustness of networks created by selfish agents. As our main result, we show that such networks can resist attacks of a large class of potential attackers, i.e., these networks maintain asymptotically optimal welfare post attack. This improves several bounds and resolves an open problem. Along the way, we show the counter-intuitive result, that attackers that aim at minimizing the social welfare post attack do not actually inflict the greatest possible damage.
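The welfare notion in this line of work is reachability-based: after the attack, each surviving agent derives utility from the number of agents it can still reach, so social welfare is the sum of squared sizes of the surviving connected components. Below is a minimal illustrative sketch of that quantity, not the authors' exact model: it ignores edge-building costs, immunization choices, and attack spread, and the function names (`components`, `post_attack_welfare`) are hypothetical.

```python
from collections import deque

def components(nodes, edges):
    """Return the connected components of an undirected graph via BFS."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    comp.add(w)
                    queue.append(w)
        comps.append(comp)
    return comps

def post_attack_welfare(nodes, edges, attacked):
    """Sum, over surviving agents, of the size of their component
    once the attacked nodes (and their incident edges) are removed."""
    survivors = [v for v in nodes if v not in attacked]
    kept = [(u, v) for u, v in edges
            if u not in attacked and v not in attacked]
    return sum(len(c) ** 2 for c in components(survivors, kept))

# Example: a 6-cycle loses one node; the remaining 5 nodes form a
# path and stay connected, so welfare is 5 * 5 = 25.
cycle = [(i, (i + 1) % 6) for i in range(6)]
print(post_attack_welfare(range(6), cycle, {0}))  # 25
```

Under this measure, an attacker minimizing post-attack welfare prefers attacks that fragment the network into many small components, which is exactly the quantity the paper's robustness bounds control.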
Related papers
- Capability-Based Scaling Laws for LLM Red-Teaming [71.89259138609965]
Traditional prompt-engineering approaches may prove ineffective once red-teaming turns into a weak-to-strong problem. We evaluate more than 500 attacker-target pairs using LLM-based jailbreak attacks that mimic human red-teamers. We derive a jailbreaking scaling law that predicts attack success for a fixed target based on the attacker-target capability gap.
arXiv Detail & Related papers (2025-05-26T16:05:41Z)
- Multi-Objective Reinforcement Learning for Automated Resilient Cyber Defence [0.0]
Cyber-attacks pose a security threat to military command and control networks; Intelligence, Surveillance, and Reconnaissance (ISR) systems; and civilian critical national infrastructure. The use of artificial intelligence and autonomous agents in these attacks increases the scale, range, and complexity of this threat and the subsequent disruption it causes. Autonomous Cyber Defence (ACD) agents aim to mitigate this threat by responding at machine speed and at the scale required to address the problem.
arXiv Detail & Related papers (2024-11-26T16:51:52Z)
- Impact of Conflicting Transactions in Blockchain: Detecting and Mitigating Potential Attacks [0.2982610402087727]
Conflicting transactions within blockchain networks pose performance challenges and introduce security vulnerabilities. We propose a set of countermeasures for mitigating these attacks. Our findings emphasize the critical importance of actively managing conflicting transactions to reinforce blockchain security and performance.
arXiv Detail & Related papers (2024-07-30T17:16:54Z)
- TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems [15.982408142401072]
Deep neural networks are vulnerable to attacks from adversarial inputs and, more recently, to Trojans that misguide or hijack the model's decisions.
A TnT is universal because any input image captured with a TnT in the scene will: i) misguide a network (untargeted attack); or ii) force the network to make a malicious decision.
We show a generalization of the attack to create patches achieving higher attack success rates than existing state-of-the-art methods.
arXiv Detail & Related papers (2021-11-19T01:35:10Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem [12.88476464580968]
Graph neural networks (GNNs) have attracted increasing interest.
There is an urgent need for understanding the robustness of GNNs under adversarial attacks.
arXiv Detail & Related papers (2021-06-21T00:47:44Z)
- Combating Adversaries with Anti-Adversaries [118.70141983415445]
In particular, our layer generates an input perturbation in the opposite direction of the adversarial one.
We verify the effectiveness of our approach by combining our layer with both nominally and robustly trained models.
Our anti-adversary layer significantly enhances model robustness while coming at no cost on clean accuracy.
arXiv Detail & Related papers (2021-03-26T09:36:59Z)
- Adversarial Attacks On Multi-Agent Communication [80.4392160849506]
Modern autonomous systems will soon be deployed at scale, opening up the possibility for cooperative multi-agent systems.
Such advantages rely heavily on communication channels, which have been shown to be vulnerable to security breaches.
In this paper, we explore such adversarial attacks in a novel multi-agent setting where agents communicate by sharing learned intermediate representations.
arXiv Detail & Related papers (2021-01-17T00:35:26Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age, impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.