Wild Networks: Exposure of 5G Network Infrastructures to Adversarial
Examples
- URL: http://arxiv.org/abs/2207.01531v1
- Date: Mon, 4 Jul 2022 15:52:54 GMT
- Title: Wild Networks: Exposure of 5G Network Infrastructures to Adversarial
Examples
- Authors: Giovanni Apruzzese, Rodion Vladimirov, Aliya Tastemirova, Pavel Laskov
- Abstract summary: 5G networks must support billions of heterogeneous devices while guaranteeing optimal Quality of Service (QoS)
The 5G context is exposed to another type of adversarial ML attack that cannot be formalized with existing threat models.
We propose a novel adversarial ML threat model that is particularly suited to 5G scenarios.
Our attacks affect both the training and the inference stages, can degrade the performance of state-of-the-art ML systems, and have a lower entry barrier than previous attacks.
- Score: 1.491109220586182
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fifth Generation (5G) networks must support billions of heterogeneous devices
while guaranteeing optimal Quality of Service (QoS). Such requirements are
impossible to meet with human effort alone, and Machine Learning (ML)
represents a core asset in 5G. ML, however, is known to be vulnerable to
adversarial examples; moreover, as our paper will show, the 5G context is
exposed to yet another type of adversarial ML attack that cannot be
formalized with existing threat models. Proactive assessment of such risks is
also challenging due to the lack of ML-powered 5G equipment available for
adversarial ML research.
To tackle these problems, we propose a novel adversarial ML threat model that
is particularly suited to 5G scenarios, and is agnostic to the precise function
solved by ML. In contrast to existing ML threat models, our attacks do not
require any compromise of the target 5G system while still being viable due to
the QoS guarantees and the open nature of 5G networks. Furthermore, we propose
an original framework for realistic ML security assessments based on public
data. We proactively evaluate our threat model on 6 applications of ML
envisioned in 5G. Our attacks affect both the training and the inference
stages, can degrade the performance of state-of-the-art ML systems, and have a
lower entry barrier than previous attacks.
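The paper's 5G-specific attacks are not reproduced here, but the general mechanics of the inference-stage (evasion) side of such a threat model can be sketched. The example below is a hypothetical illustration in plain NumPy: a toy linear classifier stands in for an ML-driven 5G function (e.g., flow classification), and an FGSM-style input perturbation within a small budget degrades its accuracy. All data, names, and parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an ML-driven 5G function: 200 flows, 8 features,
# labeled by a hidden linear rule (benign = 0, high-priority = 1).
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)

# Train a logistic-regression classifier by gradient descent.
w = np.zeros(8)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)

def predict(M):
    return (M @ w > 0).astype(float)

clean_acc = (predict(X) == y).mean()

# FGSM-style evasion: move each input against the sign of the loss
# gradient w.r.t. the input, within a per-feature budget eps.
eps = 0.5
p = 1 / (1 + np.exp(-(X @ w)))
grad_x = np.outer(p - y, w)          # d(logistic loss)/d(x) = (p - y) * w
X_adv = X + eps * np.sign(grad_x)

adv_acc = (predict(X_adv) == y).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

The point of the sketch is only that a small, bounded input perturbation suffices to flip many decisions of an otherwise accurate model; the paper's contribution is showing that, in 5G, such perturbations can be delivered without compromising the target system at all.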
Related papers
- Multi-Faceted Attack: Exposing Cross-Model Vulnerabilities in Defense-Equipped Vision-Language Models [54.61181161508336]
We introduce Multi-Faceted Attack (MFA), a framework that exposes general safety vulnerabilities in leading defense-equipped Vision-Language Models (VLMs). The core component of MFA is the Attention-Transfer Attack (ATA), which hides harmful instructions inside a meta task with competing objectives. MFA achieves a 58.5% success rate and consistently outperforms existing methods.
arXiv Detail & Related papers (2025-11-20T07:12:54Z)
- Integrity Under Siege: A Rogue gNodeB's Manipulation of 5G Network Slice Allocation [2.90110037823427]
5G networks, with network slicing as a cornerstone technology, promise customized, high-performance services, but also introduce novel attack surfaces beyond traditional threats. This article investigates a critical and underexplored integrity vulnerability: the manipulation of network slice allocation to compromise Quality of Service (QoS) and resource integrity. We show how a rogue gNodeB acting as a Man-in-the-Middle can exploit protocol weaknesses to forge slice requests and hijack a User Equipment's connection.
arXiv Detail & Related papers (2025-11-05T09:26:39Z)
- From Description to Detection: LLM based Extendable O-RAN Compliant Blind DoS Detection in 5G and Beyond [10.627289027347274]
Vulnerabilities in control-plane protocols pose significant security threats, such as Blind Denial of Service (DoS) attacks. We propose a novel anomaly detection framework that leverages the capabilities of Large Language Models (LLMs) in zero-shot mode. We show that detection quality relies on the semantic completeness of the description rather than its phrasing or length.
arXiv Detail & Related papers (2025-10-08T00:13:02Z)
- LLMs' Suitability for Network Security: A Case Study of STRIDE Threat Modeling [1.1970409518725493]
We examine the suitability of Large Language Models (LLMs) in network security. We use four prompting techniques with five LLMs to perform STRIDE classification of 5G threats. We point out key findings and detailed insights, along with an explanation of the possible underlying factors.
arXiv Detail & Related papers (2025-05-07T03:37:49Z)
- An LLM-based Self-Evolving Security Framework for 6G Space-Air-Ground Integrated Networks [49.605335601285496]
6G space-air-ground integrated networks (SAGINs) offer ubiquitous coverage for various mobile applications. We propose a novel security framework for SAGINs based on Large Language Models (LLMs). Our framework produces highly accurate security strategies that remain robust against a variety of unknown attacks.
arXiv Detail & Related papers (2025-05-06T04:14:13Z)
- Jailbreaking as a Reward Misspecification Problem [80.52431374743998]
We propose a novel perspective that attributes this vulnerability to reward misspecification during the alignment process.
We introduce a metric ReGap to quantify the extent of reward misspecification and demonstrate its effectiveness.
We present ReMiss, a system for automated red teaming that generates adversarial prompts in a reward-misspecified space.
arXiv Detail & Related papers (2024-06-20T15:12:27Z)
- Enhancing O-RAN Security: Evasion Attacks and Robust Defenses for Graph Reinforcement Learning-based Connection Management [5.791956438741676]
We study various attacks and defenses on machine learning (ML) models in Open Radio Access Networks (O-RAN). Comprehensive modeling of the security threats and demonstration of adversarial attacks and defenses are still in their nascent stages. We develop and demonstrate robust training-based defenses against challenging physical/jamming-based attacks, showing a 15% improvement in coverage rates compared to employing no defense over a range of noise budgets.
arXiv Detail & Related papers (2024-05-06T22:27:24Z)
- Penetration Testing of 5G Core Network Web Technologies [53.89039878885825]
We present the first security assessment of the 5G core from a web security perspective.
We use the STRIDE threat modeling approach to define a complete list of possible threat vectors and associated attacks.
Our analysis shows that all these cores are vulnerable to at least two of our identified attack vectors.
arXiv Detail & Related papers (2024-03-04T09:27:11Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid. Adversarial distortion injected into the power signal will greatly affect the system's normal control and operation. It is imperative to conduct vulnerability assessments for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- On Evaluating Adversarial Robustness of Large Vision-Language Models [64.66104342002882]
We evaluate the robustness of large vision-language models (VLMs) in the most realistic and high-risk setting.
In particular, we first craft targeted adversarial examples against pretrained models such as CLIP and BLIP.
Black-box queries on these VLMs can further improve the effectiveness of targeted evasion.
arXiv Detail & Related papers (2023-05-26T13:49:44Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Examining Machine Learning for 5G and Beyond through an Adversarial Lens [3.2410256314561092]
We present a cautionary perspective on the use of AI/ML in the 5G context by highlighting the adversarial dimension spanning multiple types of ML.
We also discuss approaches to mitigate this adversarial ML risk, offer guidelines for evaluating the robustness of ML models, and call attention to issues surrounding ML-oriented research in 5G more generally.
arXiv Detail & Related papers (2020-09-05T06:30:26Z)
- Artificial Intelligence and Machine Learning in 5G Network Security: Opportunities, advantages, and future research trends [5.431496585727341]
As 5G networks' primary selling point has been higher data rates and speed, it will be difficult to tackle a wide range of threats.
This article presents AI and ML driven applications for 5G network security.
arXiv Detail & Related papers (2020-07-09T01:02:13Z)
- Integrated Methodology to Cognitive Network Slice Management in Virtualized 5G Networks [3.8743565255416983]
5G networks are envisioned to be fully autonomous in accordance to the ETSI-defined Zero touch network and Service Management (ZSM) concept.
Purpose-specific Machine Learning (ML) models can be used to manage and control physical as well as virtual network resources in a way that is fully compliant with slice Service Level Agreements (SLAs).
arXiv Detail & Related papers (2020-05-11T01:51:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.