Is Crunching Public Data the Right Approach to Detect BGP Hijacks?
- URL: http://arxiv.org/abs/2507.20434v1
- Date: Sun, 27 Jul 2025 22:35:21 GMT
- Title: Is Crunching Public Data the Right Approach to Detect BGP Hijacks?
- Authors: Alessandro Giaconia, Muoi Tran, Laurent Vanbever, Stefano Vissicchio
- Abstract summary: Border Gateway Protocol (BGP) remains a fragile pillar of Internet routing. Recent approaches like DFOH and BEAM apply machine learning (ML) to analyze data from globally distributed BGP monitors. This paper shows that state-of-the-art hijack detection systems like DFOH and BEAM are vulnerable to data poisoning.
- Score: 46.60173408970299
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Border Gateway Protocol (BGP) remains a fragile pillar of Internet routing. BGP hijacks still occur daily. While full deployment of Route Origin Validation (ROV) is ongoing, attackers have already adapted, launching post-ROV attacks such as forged-origin hijacks. To detect these, recent approaches like DFOH [Holterbach et al., USENIX NSDI '24] and BEAM [Chen et al., USENIX Security '24] apply machine learning (ML) to analyze data from globally distributed BGP monitors, assuming anomalies will stand out against historical patterns. However, this assumption overlooks a key threat: BGP monitors themselves can be misled by adversaries injecting bogus routes. This paper shows that state-of-the-art hijack detection systems like DFOH and BEAM are vulnerable to data poisoning. Using large-scale BGP simulations, we show that attackers can evade detection with just a handful of crafted announcements beyond the actual hijack. These announcements are indeed sufficient to corrupt the knowledge base used by ML-based defenses and distort the metrics they rely on. Our results highlight a worrying weakness of relying solely on public BGP data.
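To make the forged-origin hijack mentioned in the abstract concrete, the sketch below builds a bogus announcement that keeps the victim's AS as the path origin, which is why ROV alone does not flag it. This is a minimal illustration, not code from the paper; the ASNs, prefix, and dictionary layout are all hypothetical.

```python
# Minimal sketch of a forged-origin hijack announcement.
# All ASNs are from the private range and the prefix is a
# documentation prefix; none of these values come from the paper.

VICTIM_ASN = 65010       # legitimate origin of the prefix (example)
ATTACKER_ASN = 65099     # hijacking AS (example)
PREFIX = "203.0.113.0/24"

def forge_origin_announcement(attacker_asn, victim_asn, prefix):
    """Build a bogus BGP announcement whose AS path still ends at
    the victim's AS, fabricating an attacker-victim adjacency."""
    return {
        "prefix": prefix,
        # ROV validates only the rightmost (origin) AS against the
        # ROA, so keeping the victim as origin passes origin
        # validation; detectors must instead judge whether the
        # attacker-victim link is plausible.
        "as_path": [attacker_asn, victim_asn],
    }

announcement = forge_origin_announcement(ATTACKER_ASN, VICTIM_ASN, PREFIX)
print(announcement["as_path"])  # origin remains the victim's ASN
```

Link-plausibility checks are exactly what systems like DFOH learn from public monitor data, which is why the paper's poisoning attack targets that knowledge base with a few extra crafted announcements rather than the hijack announcement itself.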
Related papers
- Exploiting Inaccurate Branch History in Side-Channel Attacks [54.218160467764086]
This paper examines how resource sharing and contention affect two widely implemented but underdocumented features: Bias-Free Branch Prediction and Branch History Speculation. We show that these features can inadvertently modify the Branch History Buffer (BHB) update behavior and create new primitives that trigger malicious mis-speculations. We present three novel attack primitives: two Spectre attacks, namely Spectre-BSE and Spectre-BHS, and a cross-privilege control flow side-channel attack called BiasScope.
arXiv Detail & Related papers (2025-06-08T19:46:43Z) - BEAR: BGP Event Analysis and Reporting [10.153790653358625]
Border Gateway Protocol (BGP) anomalies can divert traffic through unauthorized or inefficient paths, jeopardizing network reliability and security. The BGP Event Analysis and Reporting (BEAR) framework generates comprehensive reports explaining detected BGP anomaly events. BEAR achieves 100% accuracy, outperforming Chain-of-Thought and in-context learning baselines.
arXiv Detail & Related papers (2025-06-04T23:34:36Z) - CANTXSec: A Deterministic Intrusion Detection and Prevention System for CAN Bus Monitoring ECU Activations [53.036288487863786]
We propose CANTXSec, the first deterministic Intrusion Detection and Prevention system based on physical ECU activations. It detects and prevents classical attacks in the CAN bus, while detecting advanced attacks that have been less investigated in the literature. We prove the effectiveness of our solution on a physical testbed, where we achieve 100% detection accuracy in both classes of attacks while preventing 100% of FIAs.
arXiv Detail & Related papers (2025-05-14T13:37:07Z) - Protocol-agnostic and Data-free Backdoor Attacks on Pre-trained Models in RF Fingerprinting [11.586892344905113]
Unsupervised pre-trained models (PTMs) offer better generalization and do not require labeled datasets. In this paper, we thoroughly investigate data-free backdoor attacks on such PTMs in RF fingerprinting. By mapping triggers and PORs through backdoor training, we can implant backdoor behaviors into the PTMs.
arXiv Detail & Related papers (2025-05-01T21:55:43Z) - MADE: Graph Backdoor Defense with Masked Unlearning [24.97718571096943]
Graph Neural Networks (GNNs) have garnered significant attention from researchers due to their outstanding performance in handling graph-related tasks. Recent research has demonstrated that GNNs are vulnerable to backdoor attacks, implemented by injecting triggers into the training datasets. This vulnerability poses significant security risks for applications of GNNs in sensitive domains, such as drug discovery.
arXiv Detail & Related papers (2024-11-26T22:50:53Z) - Global BGP Attacks that Evade Route Monitoring [6.108950672801419]
Deployment of Border Gateway Protocol (BGP) security measures is still in progress.
BGP monitoring continues to play a critical role in protecting the Internet from routing attacks.
We develop a novel attack that can hide itself from all state-of-the-art BGP monitoring systems.
arXiv Detail & Related papers (2024-08-19T00:29:42Z) - Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses [50.53476890313741]
We propose an effective, stealthy, and persistent backdoor attack on FedGL.
We develop a certified defense for any backdoored FedGL model against the trigger with any shape at any location.
Our attack results show our attack can obtain > 90% backdoor accuracy in almost all datasets.
arXiv Detail & Related papers (2024-07-12T02:43:44Z) - Towards a Near-real-time Protocol Tunneling Detector based on Machine Learning Techniques [0.0]
We present a protocol tunneling detector prototype which inspects, in near real time, a company's network traffic using machine learning techniques.
The detector monitors unencrypted network flows and extracts features to detect possible occurring attacks and anomalies.
Results show 97.1% overall accuracy and an F1-score of 95.6%.
arXiv Detail & Related papers (2023-09-22T09:08:43Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor attacks are an emerging training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - Weight Poisoning Attacks on Pre-trained Models [103.19413805873585]
We show that it is possible to construct "weight poisoning" attacks where pre-trained weights are injected with vulnerabilities that expose "backdoors" after fine-tuning.
Our experiments on sentiment classification, toxicity detection, and spam detection show that this attack is widely applicable and poses a serious threat.
arXiv Detail & Related papers (2020-04-14T16:51:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.