Is Stochastic Mirror Descent Vulnerable to Adversarial Delay Attacks? A
Traffic Assignment Resilience Study
- URL: http://arxiv.org/abs/2304.01161v1
- Date: Mon, 3 Apr 2023 17:28:24 GMT
- Title: Is Stochastic Mirror Descent Vulnerable to Adversarial Delay Attacks? A
Traffic Assignment Resilience Study
- Authors: Yunian Pan, Tao Li, and Quanyan Zhu
- Abstract summary: We show that learning-based INS infrastructures can achieve a Wardrop Non-Equilibrium Solution even when experiencing a certain period of disruption in the information structure.
These findings provide valuable insights for designing defense mechanisms against possible jamming attacks across different layers of the transportation ecosystem.
- Score: 20.11993437283895
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intelligent Navigation Systems (INS) are exposed to a growing
number of informational attack vectors, which often intercept the
communication channels between the INS and the transportation network during
the data collection process. To measure the resilience of INS, we use the
concept of a Wardrop Non-Equilibrium Solution (WANES), which is characterized
by the probabilistic outcome of learning within a bounded number of
interactions. By using concentration arguments, we show that any bounded
feedback-delay attack degrades system performance by at most order
$\tilde{\mathcal{O}}(\sqrt{d^3 T^{-1}})$ along the traffic-flow trajectory
within the Delayed Mirror Descent (DMD) online-learning framework, and this
bound holds under only mild assumptions.
Our result implies that learning-based INS infrastructures can achieve a
Wardrop Non-Equilibrium Solution even when experiencing a period of disruption in the
information structure. These findings provide valuable insights for designing
defense mechanisms against possible jamming attacks across different layers of
the transportation ecosystem.
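To make the DMD mechanism concrete, the sketch below runs entropic mirror descent over route flows on the probability simplex while latency feedback arrives with a bounded delay, then reports the time-averaged flow as a candidate Wardrop (non-)equilibrium point. This is a minimal illustration, not the paper's exact protocol: the function `delayed_mirror_descent`, the linear latency model, the delay schedule, and all parameter values are assumptions made for this example.

```python
import numpy as np

def delayed_mirror_descent(latency, d, T, eta, delays):
    """Entropic mirror descent over d parallel routes with delayed feedback.

    Hypothetical sketch (not the paper's exact DMD formulation).
    latency(x) -> vector of route latencies at flow x; for a nonatomic
    routing game this is the gradient of the Beckmann potential.
    delays[t]  -> rounds by which feedback generated at round t is held back.
    """
    x = np.full(d, 1.0 / d)          # uniform initial flow on the simplex
    pending = {}                     # arrival round -> list of delayed gradients
    history = []
    for t in range(T):
        history.append(x.copy())
        g = latency(x)                                   # feedback produced now ...
        pending.setdefault(t + delays[t], []).append(g)  # ... but delivered later
        for g_late in pending.pop(t, []):                # gradients arriving at t
            x = x * np.exp(-eta * g_late)                # entropic (multiplicative) step
            x /= x.sum()                                 # project back to the simplex
    return np.mean(history, axis=0)  # time-averaged flow: the candidate WANES

# Toy example: three routes with linear latencies l_i(x_i) = a_i * x_i + b_i.
a = np.array([1.0, 2.0, 0.5])
b = np.array([0.0, 0.5, 1.0])
rng = np.random.default_rng(0)
delays = rng.integers(0, 5, size=5000)   # adversarial but bounded delays (<= 4)
x_bar = delayed_mirror_descent(lambda x: a * x + b, d=3, T=5000,
                               eta=0.05, delays=delays)
print(x_bar, a * x_bar + b)              # used routes end up with near-equal latency
```

The intuition behind the bound quoted above: a bounded delay only postpones each gradient step rather than discarding it, so the cumulative regret, and hence the equilibrium gap of the averaged flow, grows by at most a delay-dependent additive term, consistent with the $\tilde{\mathcal{O}}(\sqrt{d^3 T^{-1}})$ degradation along the flow trajectory.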
Related papers
- Mechanistic Analysis of Circuit Preservation in Federated Learning [0.3823356975862005]
Federated Learning (FL) enables collaborative training of models on decentralized data, but its performance degrades significantly under Non-IID data conditions. This paper investigates the canonical FedAvg algorithm through the lens of Mechanistic Interpretability (MI) to diagnose this failure mode.
arXiv Detail & Related papers (2025-12-28T19:03:14Z)
- Toward Real-World IoT Security: Concept Drift-Resilient IoT Botnet Detection via Latent Space Representation Learning and Alignment [2.5782420501870296]
This paper proposes a scalable framework for adaptive IoT threat detection. An alignment model maps incoming traffic to the learned historical latent space prior to classification. To capture inter-instance relationships among attack samples, the low-dimensional latent representations are transformed into a graph-structured format.
arXiv Detail & Related papers (2025-12-27T06:13:19Z)
- RAG-targeted Adversarial Attack on LLM-based Threat Detection and Mitigation Framework [0.19116784879310025]
The rapid expansion of the Internet of Things (IoT) is reshaping communication and operational practices across industries, but it also broadens the attack surface and increases susceptibility to security breaches. Artificial Intelligence has become a valuable solution in securing IoT networks, with Large Language Models (LLMs) enabling automated attack behavior analysis and mitigation suggestion. We attack an LLM-based IoT attack analysis and mitigation framework to test its adversarial robustness.
arXiv Detail & Related papers (2025-11-09T03:50:17Z)
- Alignment Tipping Process: How Self-Evolution Pushes LLM Agents Off the Rails [103.05296856071931]
We identify the Alignment Tipping Process (ATP), a critical post-deployment risk unique to self-evolving Large Language Model (LLM) agents. ATP arises when continual interaction drives agents to abandon alignment constraints established during training in favor of reinforced, self-interested strategies. Our experiments show that alignment benefits erode rapidly under self-evolution, with initially aligned models converging toward unaligned states.
arXiv Detail & Related papers (2025-10-06T14:48:39Z)
- Exploiting Edge Features for Transferable Adversarial Attacks in Distributed Machine Learning [54.26807397329468]
This work explores a previously overlooked vulnerability in distributed deep learning systems. An adversary who intercepts the intermediate features transmitted between the distributed components can still pose a serious threat. We propose an exploitation strategy specifically designed for distributed settings.
arXiv Detail & Related papers (2025-07-09T20:09:00Z)
- Toward Realistic Adversarial Attacks in IDS: A Novel Feasibility Metric for Transferability [0.0]
Transferability-based adversarial attacks exploit the ability of adversarial examples, crafted to deceive a specific source Intrusion Detection System (IDS) model, to also deceive a target IDS model.
These attacks exploit common vulnerabilities in machine learning models to bypass security measures and compromise systems.
This paper analyzes the core factors that contribute to transferability, including feature alignment, model architectural similarity, and overlap in the data distributions that each IDS examines.
arXiv Detail & Related papers (2025-04-11T12:15:03Z)
- Celtibero: Robust Layered Aggregation for Federated Learning [0.0]
We introduce Celtibero, a novel defense mechanism that integrates layered aggregation to enhance robustness against adversarial manipulation.
We demonstrate that Celtibero consistently achieves high main task accuracy (MTA) while maintaining minimal attack success rates (ASR) across a range of untargeted and targeted poisoning attacks.
arXiv Detail & Related papers (2024-08-26T12:54:00Z)
- CANEDERLI: On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems [17.351539765989433]
The growing integration of vehicles with external networks has led to a surge in attacks targeting their internal Controller Area Network (CAN) bus.
As a countermeasure, various Intrusion Detection Systems (IDSs) have been suggested in the literature to prevent and mitigate these threats.
Most of these systems rely on data-driven approaches such as Machine Learning (ML) and Deep Learning (DL) models.
In this paper, we present CANEDERLI, a novel framework for securing CAN-based IDSs.
arXiv Detail & Related papers (2024-04-06T14:54:11Z)
- Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning? [58.942118128503104]
Causal confusion is a phenomenon where an agent learns a policy that reflects imperfect spurious correlations in the data.
This phenomenon is particularly pronounced in domains such as robotics.
In this paper, we study causal confusion in offline reinforcement learning.
arXiv Detail & Related papers (2023-12-28T17:54:56Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model, with the added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against membership inference attacks (MIAs).
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- Physical Passive Patch Adversarial Attacks on Visual Odometry Systems [6.391337032993737]
We study patch adversarial attacks on visual odometry-based autonomous navigation systems.
We show for the first time that the error margin of a visual odometry model can be significantly increased by deploying patch adversarial attacks in the scene.
arXiv Detail & Related papers (2022-07-11T14:41:06Z)
- GFCL: A GRU-based Federated Continual Learning Framework against Adversarial Attacks in IoV [3.3758186776249923]
Deep Reinforcement Learning (DRL) is one of the widely used ML designs in Internet of Vehicles (IoV) applications.
Standard ML security techniques are not effective in DRL, where the algorithm learns to solve sequential decision-making problems through continuous interaction with the environment.
We propose a Gated Recurrent Unit (GRU)-based federated continual learning (GFCL) anomaly detection framework.
arXiv Detail & Related papers (2022-04-23T06:56:37Z)
- Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked when studying robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdoor attacks in federated learning through comprehensive experiments using synthetic data and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z)
- Mitigating the Impact of Adversarial Attacks in Very Deep Networks [10.555822166916705]
Deep Neural Network (DNN) models have security-related vulnerabilities.
Data-poisoning-enabled perturbation attacks are complex adversarial attacks that inject false data into models.
We propose an attack-agnostic defense method for mitigating their influence.
arXiv Detail & Related papers (2020-12-08T21:25:44Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Transferable Perturbations of Deep Feature Distributions [102.94094966908916]
This work presents a new adversarial attack based on the modeling and exploitation of class-wise and layer-wise deep feature distributions.
We achieve state-of-the-art targeted black-box transfer-based attack results for undefended ImageNet models.
arXiv Detail & Related papers (2020-04-27T00:32:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all of the information it contains) and is not responsible for any consequences of its use.