Manipulating OpenFlow Link Discovery Packet Forwarding for Topology Poisoning
- URL: http://arxiv.org/abs/2408.16940v2
- Date: Sat, 12 Oct 2024 19:04:21 GMT
- Authors: Mingming Chen, Thomas La Porta, Teryl Taylor, Frederico Araujo, Trent Jaeger
- Abstract summary: We introduce Marionette, a new topology poisoning technique that manipulates OpenFlow link forwarding to alter topology information.
Our approach exposes an overlooked yet widespread attack vector.
Marionette successfully attacks five open-source controllers and nine OpenFlow-based discovery protocols.
- Score: 7.162877379128359
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Software-defined networking (SDN) is a centralized, dynamic, and programmable network management technology that enables flexible traffic control and scalability. SDN facilitates network administration through a centralized view of the underlying physical topology; tampering with this topology view can result in catastrophic damage to network management and security. To underscore this issue, we introduce Marionette, a new topology poisoning technique that manipulates OpenFlow link discovery packet forwarding to alter topology information. Our approach exposes an overlooked yet widespread attack vector, distinguishing itself from traditional link fabrication attacks that tamper, spoof, or relay discovery packets at the data plane. Unlike localized attacks observed in existing methods, our technique introduces a globalized topology poisoning attack that leverages control privileges. Marionette implements a reinforcement learning algorithm to compute a poisoned topology target, and injects flow entries to achieve a long-lived stealthy attack. Our evaluation shows that Marionette successfully attacks five open-source controllers and nine OpenFlow-based discovery protocols. Marionette overcomes the state-of-the-art topology poisoning defenses, showcasing a new class of topology poisoning that initiates on the control plane. This security vulnerability was ethically disclosed to OpenDaylight, and CVE-2024-37018 has been assigned.
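The core mechanism the abstract describes — injecting flow entries that re-forward OpenFlow link discovery (LLDP) packets so the controller infers links that do not exist — can be illustrated with a short sketch. This is an illustrative reconstruction, not the paper's implementation: the function and field names are hypothetical stand-ins that mirror OpenFlow 1.3 concepts (match on EtherType 0x88CC, apply an output action), and a real attack would install such an entry through a controller's flow-programming API.

```python
# Illustrative sketch (assumptions labeled): a controller-privileged app
# expresses an OpenFlow-style flow entry that hijacks LLDP discovery
# frames (EtherType 0x88CC) arriving on one port and forwards them out
# an attacker-chosen port, so the discovery protocol infers a fake link.
LLDP_ETHERTYPE = 0x88CC

def lldp_redirect_flow(in_port, fake_out_port, priority=65535):
    """Build a flow entry (plain dict, hypothetical schema) that re-routes
    discovery packets from `in_port` to `fake_out_port` instead of the
    normal send-to-controller path."""
    return {
        "priority": priority,  # outrank the controller's discovery rules
        "match": {"in_port": in_port, "eth_type": LLDP_ETHERTYPE},
        "instructions": [{"apply_actions": [{"output": fake_out_port}]}],
    }

# LLDP frames entering switch port 1 are silently pushed out port 3, so
# the neighbor behind port 3 appears directly linked to the LLDP sender.
flow = lldp_redirect_flow(in_port=1, fake_out_port=3)
```

Because the entry is installed from the control plane with high priority, it outranks the benign discovery rules — which is what distinguishes this from data-plane link fabrication attacks that tamper with or relay the discovery packets themselves.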
Related papers
- PCAP-Backdoor: Backdoor Poisoning Generator for Network Traffic in CPS/IoT Environments [0.6629765271909503]
We introduce PCAP-Backdoor, a novel technique that facilitates backdoor poisoning attacks on PCAP datasets.
Experiments on real-world Cyber-Physical Systems (CPS) and Internet of Things (IoT) network traffic datasets demonstrate that attackers can effectively backdoor a model by poisoning as little as 1% or less of the entire training dataset.
arXiv Detail & Related papers (2025-01-26T15:49:34Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
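The frequency-domain idea in this summary — transforming model updates before aggregation so poisoned updates become separable — can be sketched as follows. This is a minimal sketch under stated assumptions (a naive DCT-II and a low-frequency energy heuristic); FreqFed's actual transform and aggregation rule may differ.

```python
# Minimal sketch of frequency-domain inspection of model updates.
# Assumption: benign updates are "smooth" (energy concentrated in low
# frequency bands), while noisy/poisoned updates spread energy widely.
import math

def dct2(x):
    """Naive DCT-II of a 1-D list of floats (O(n^2), for illustration)."""
    n = len(x)
    return [sum(x[j] * math.cos(math.pi * k * (2 * j + 1) / (2 * n))
                for j in range(n)) for k in range(n)]

def low_freq_energy_ratio(update, keep=0.25):
    """Fraction of spectral energy in the lowest `keep` share of bands."""
    spec = dct2(update)
    cut = max(1, int(len(spec) * keep))
    total = sum(c * c for c in spec) or 1.0
    return sum(c * c for c in spec[:cut]) / total

# A slowly varying (benign-like) update vs. a maximally oscillating one.
benign = [math.sin(0.1 * i) for i in range(64)]
noisy = [(-1) ** i for i in range(64)]
```

Here the aggregator could compare `low_freq_energy_ratio` across client updates and down-weight outliers before averaging; the point of the transform is that the distinguishing signal is easier to see in the spectrum than in the raw weights.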
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- A Novel Supervised Deep Learning Solution to Detect Distributed Denial of Service (DDoS) Attacks on Edge Systems Using Convolutional Neural Networks (CNN) [0.41436032949434404]
This project presents a novel deep learning-based approach for detecting DDoS attacks in network traffic.
The algorithm employed in this study exploits the properties of Convolutional Neural Networks (CNN) and common deep learning algorithms.
The results of this study demonstrate the effectiveness of the proposed algorithm in detecting DDoS attacks, achieving an accuracy of 0.9883 on 2000 unseen flows in network traffic.
arXiv Detail & Related papers (2023-09-11T17:37:35Z)
- Efficient Network Representation for GNN-based Intrusion Detection [2.321323878201932]
The last decades have seen a growth in the number of cyber-attacks with severe economic and privacy damages.
We propose a novel network representation as a graph of flows that aims to provide relevant topological information for the intrusion detection task.
We present a Graph Neural Network (GNN) based framework responsible for exploiting the proposed graph structure.
arXiv Detail & Related papers (2023-09-11T16:10:12Z)
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- Poisoning Network Flow Classifiers [10.055241826257083]
This paper focuses on poisoning attacks, specifically backdoor attacks, against network traffic flow classifiers.
We investigate the challenging scenario of clean-label poisoning where the adversary's capabilities are constrained to tampering only with the training data.
We describe a trigger crafting strategy that leverages model interpretability techniques to generate trigger patterns that are effective even at very low poisoning rates.
arXiv Detail & Related papers (2023-06-02T16:24:15Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of the DL-based wireless system against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool used for identifying unauthorized and malicious behavior by observing the network traffic.
We propose an end-to-end early intrusion detection system to prevent network attacks before they could cause any more damage to the system under attack.
arXiv Detail & Related papers (2022-01-27T16:35:37Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.