Adversarial Machine Learning Threat Analysis in Open Radio Access Networks
- URL: http://arxiv.org/abs/2201.06093v1
- Date: Sun, 16 Jan 2022 17:01:38 GMT
- Title: Adversarial Machine Learning Threat Analysis in Open Radio Access Networks
- Authors: Ron Bitton, Dan Avraham, Eitan Klevansky, Dudu Mimran, Oleg Brodt, Heiko Lehmann, Yuval Elovici, and Asaf Shabtai
- Abstract summary: The Open Radio Access Network (O-RAN) is a new, open, adaptive, and intelligent RAN architecture.
In this paper, we present a systematic adversarial machine learning threat analysis for the O-RAN.
- Score: 37.23982660941893
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Open Radio Access Network (O-RAN) is a new, open, adaptive, and
intelligent RAN architecture. Motivated by the success of artificial
intelligence in other domains, O-RAN strives to leverage machine learning (ML)
to automatically and efficiently manage network resources in diverse use cases
such as traffic steering, quality of experience prediction, and anomaly
detection. Unfortunately, ML-based systems are not free of vulnerabilities;
specifically, they suffer from a special type of logical vulnerability that
stems from the inherent limitations of the learning algorithms. To exploit
these vulnerabilities, an adversary can use an attack technique referred to as
adversarial machine learning (AML). Attacks of this type have already been
demonstrated in recent research. In this paper, we present a systematic AML
threat analysis for the O-RAN. We start by reviewing relevant ML use cases and
analyzing the different ML workflow deployment scenarios in O-RAN. Then, we
define the threat model, identifying potential adversaries, enumerating their
adversarial capabilities, and analyzing their main goals. Finally, we explore
the various AML threats in the O-RAN, review a large number of attacks that
can be performed to materialize these threats, and demonstrate an AML attack
on a traffic steering model.
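
To make the demonstrated attack concrete, here is a minimal, hypothetical sketch of an FGSM-style evasion attack on a toy traffic steering classifier. The architecture, KPI features, and perturbation budget are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: FGSM-style evasion against a toy traffic steering
# classifier. Architecture and features are illustrative assumptions,
# not the model evaluated in the paper.
import torch
import torch.nn as nn

# Toy classifier: maps 8 KPI features (e.g., throughput, RSRP, cell load)
# to one of 3 steering decisions (e.g., stay, hand over to cell A or B).
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

def fgsm(x, label, eps=0.05):
    """One-step FGSM: nudge x along the loss gradient's sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

x = torch.randn(1, 8)                    # one (normalized) KPI measurement
y = model(x).argmax(dim=1)               # current steering decision
x_adv = fgsm(x, y)                       # small perturbation of the KPIs
print(y.item(), model(x_adv).argmax(dim=1).item())  # decision may flip
```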
Related papers
- Visually Analyze SHAP Plots to Diagnose Misclassifications in ML-based Intrusion Detection [0.3199881502576702]
Intrusion detection systems (IDS) can mitigate threats by providing alerts.
To detect these threats, various machine learning (ML) and deep learning (DL) models have been proposed.
In this paper, we propose an explainable artificial intelligence (XAI)-based visual analysis approach using overlapping SHAP plots.
arXiv Detail & Related papers (2024-11-04T23:08:34Z)
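
As a rough illustration of the SHAP-based diagnosis described in the entry above, the sketch below compares SHAP summaries for correctly and incorrectly classified samples. The dataset, model, and plotting choices are stand-in assumptions, not the authors' pipeline (which overlays the plots in a single figure).

```python
# Hypothetical sketch: compare SHAP summaries for correctly and
# incorrectly classified samples to diagnose IDS misclassifications.
# Model and data are stand-ins, not the paper's setup.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                    # toy flow features
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # toy attack label

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict(X)
explainer = shap.TreeExplainer(clf)
sv = explainer.shap_values(X)
# Older shap returns a list per class; newer returns (n, features, classes).
shap_vals = sv[1] if isinstance(sv, list) else sv[:, :, 1]

# Where the attribution patterns of the two populations diverge hints at
# which features drive the misclassifications.
shap.summary_plot(shap_vals[pred == y], X[pred == y], show=False)
shap.summary_plot(shap_vals[pred != y], X[pred != y], show=True)
```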
- Attention Tracker: Detecting Prompt Injection Attacks in LLMs [62.247841717696765]
Large Language Models (LLMs) have revolutionized various domains but remain vulnerable to prompt injection attacks.
We introduce the concept of the distraction effect, where specific attention heads shift focus from the original instruction to the injected instruction.
We propose Attention Tracker, a training-free detection method that tracks attention patterns on the instruction to detect prompt injection attacks.
arXiv Detail & Related papers (2024-11-01T04:05:59Z)
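
Below is a minimal sketch of the distraction-effect idea from the entry above, using synthetic attention matrices; the instruction span, scoring rule, and threshold are assumptions rather than the paper's exact method.

```python
# Hypothetical sketch: score the "distraction effect" from attention maps.
# If attention mass shifts away from the original instruction span toward
# later (possibly injected) tokens, flag a likely prompt injection.
# Spans, head selection, and threshold are illustrative assumptions.
import numpy as np

def instruction_attention(attn, instr_slice):
    """attn: (heads, seq, seq) attention weights; rows are query positions.
    Returns the average attention mass the last query position places on
    the instruction tokens, averaged over heads."""
    last_row = attn[:, -1, :]                      # (heads, seq)
    return last_row[:, instr_slice].sum(axis=1).mean()

rng = np.random.default_rng(0)
attn = rng.random((12, 40, 40))
attn /= attn.sum(axis=-1, keepdims=True)           # rows sum to 1

score = instruction_attention(attn, slice(0, 10))  # tokens 0..9 = instruction
THRESHOLD = 0.25                                   # assumed; needs calibration
print("possible prompt injection" if score < THRESHOLD else "looks clean")
```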
- Detecting and Understanding Vulnerabilities in Language Models via Mechanistic Interpretability [44.99833362998488]
Large Language Models (LLMs) have shown impressive performance across a wide range of tasks.
At the same time, LLMs are known to be vulnerable to adversarial attacks, where an imperceptible change to the input can mislead the model's output.
We propose a method, based on Mechanistic Interpretability (MI) techniques, to guide the detection and understanding of such vulnerabilities.
arXiv Detail & Related papers (2024-07-29T09:55:34Z) - Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in internet-of-things (IoT)-based smart grids.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessments for ML-based smart grid applications (MLsgAPPs) in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
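
To illustrate how distortion injected into power measurements can corrupt operation, as the smart grid entry above notes, here is a hypothetical sketch of a classic false-data-injection pattern that shifts a least-squares state estimate without changing the bad-data residual; the measurement matrix and sizes are toy assumptions.

```python
# Hypothetical sketch: a false-data-injection-style perturbation that
# changes a DC state estimate while leaving the bad-data residual intact.
# The measurement matrix H and dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 3))              # toy measurement model: z = H x + noise
x_true = np.array([1.0, -0.5, 0.3])
z = H @ x_true + 0.01 * rng.normal(size=8)

def estimate(z):                         # least-squares state estimation
    return np.linalg.lstsq(H, z, rcond=None)[0]

def residual(z):                         # bad-data detector statistic
    return np.linalg.norm(z - H @ estimate(z))

c = np.array([0.2, 0.0, 0.0])            # attacker's desired state shift
z_attacked = z + H @ c                   # attack vector a = H c lies in range(H)

print(residual(z), residual(z_attacked))     # residual is unchanged
print(estimate(z_attacked) - estimate(z))    # state estimate shifted by ~c
```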
- SoK: Realistic Adversarial Attacks and Defenses for Intelligent Network Intrusion Detection [0.0]
This paper consolidates and summarizes the state-of-the-art adversarial learning approaches that can generate realistic examples.
It defines the fundamental properties that are required for an adversarial example to be realistic.
It provides guidelines for researchers to ensure that their future experiments are adequate for a real communication network.
arXiv Detail & Related papers (2023-08-13T17:23:36Z)
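
As a toy illustration of the realism properties discussed in the SoK entry above, this hypothetical sketch projects a perturbed network flow record back into a valid feature domain; the feature set, bounds, and consistency rule are assumptions.

```python
# Hypothetical sketch: keep adversarial flow features "realistic" by
# projecting them back into the valid domain (value ranges, integrality,
# inter-feature consistency). Features and bounds are assumptions.
import numpy as np

BOUNDS = {                        # (min, max, integer?)
    "duration_s": (0.0, 3600.0, False),
    "packets":    (1.0, 1e6,    True),
    "bytes":      (40.0, 1e9,   True),
}
ORDER = list(BOUNDS)

def project_realistic(x):
    """x: perturbed feature vector in ORDER; returns a domain-valid vector."""
    x = x.copy()
    for i, name in enumerate(ORDER):
        lo, hi, is_int = BOUNDS[name]
        x[i] = np.clip(x[i], lo, hi)
        if is_int:
            x[i] = np.rint(x[i])
    # Consistency: a flow cannot carry fewer bytes than 40 per packet
    # (a rough minimum-header assumption).
    x[2] = max(x[2], 40.0 * x[1])
    return x

adv = np.array([12.3, 4.6, 100.0])   # perturbed (duration, packets, bytes)
print(project_realistic(adv))        # -> [12.3, 5.0, 200.0]
```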
- Threat Assessment in Machine Learning based Systems [12.031113181911627]
We conduct an empirical study of threats reported against Machine Learning-based systems.
The study is based on 89 real-world ML attack scenarios from MITRE's ATLAS database, the AI Incident Database, and the literature.
Results show that convolutional neural networks were one of the most targeted models among the attack scenarios.
arXiv Detail & Related papers (2022-06-30T20:19:50Z)
- Zero-shot learning approach to adaptive Cybersecurity using Explainable AI [0.5076419064097734]
We present a novel approach to handle the alarm flooding problem faced by cybersecurity systems such as security information and event management (SIEM) and intrusion detection systems (IDS).
We apply a zero-shot learning method to machine learning (ML) by leveraging explanations for predictions of anomalies generated by an ML model.
In this approach, without any prior knowledge of an attack, we try to identify it, decipher the features that contribute to its classification, and bucketize it into a specific category.
arXiv Detail & Related papers (2021-06-21T06:29:13Z)
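
To sketch the bucketization step described in the zero-shot entry above, the following hypothetical example clusters explanation vectors (e.g., SHAP values) of flagged anomalies so that alarms with similar rationales land in the same bucket; the features, detector, and cluster count are assumptions.

```python
# Hypothetical sketch: group alarms with no known attack label by
# clustering the explanation vectors of an anomaly detector's outputs.
# Features, detector, and cluster count are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for per-alarm feature attributions (e.g., SHAP values):
# each row explains why one alarm was raised.
explanations = np.vstack([
    rng.normal(loc=[3, 0, 0, 0], scale=0.3, size=(40, 4)),  # e.g., scan-like
    rng.normal(loc=[0, 0, 2, 2], scale=0.3, size=(40, 4)),  # e.g., flood-like
])

buckets = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(explanations)
# Alarms in the same bucket share an explanation pattern, so an analyst
# can triage one representative per bucket instead of every alarm.
print(np.bincount(buckets))   # -> roughly [40, 40]
```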
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
New models and training techniques have been proposed to reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of the ML algorithm from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Adversarial Attacks on Machine Learning Systems for High-Frequency Trading [55.30403936506338]
We study valuation models for algorithmic trading from the perspective of adversarial machine learning.
We introduce new attacks specific to this domain with size constraints that minimize attack costs.
We discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models.
arXiv Detail & Related papers (2020-02-21T22:04:35Z)
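
As an illustration of the size-constrained attacks mentioned in the entry above, here is a minimal, hypothetical sketch of a projected gradient attack on a toy valuation model, with an L-infinity budget standing in for attack cost; the model, features, and budget are assumptions.

```python
# Hypothetical sketch: projected gradient attack on a toy valuation model,
# with an L-infinity budget standing in for the cost of moving the order
# book. Model, features, and budget are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 1))
model.eval()

def attack(x, budget=0.01, steps=20, lr=0.005):
    """Push the predicted valuation up while keeping ||delta||_inf <= budget."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = -model(x + delta).mean()       # maximize predicted value
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # gradient step
            delta.clamp_(-budget, budget)     # enforce the size constraint
        delta.grad.zero_()
    return (x + delta).detach()

x = torch.randn(1, 16)                        # toy order-book features
print(model(x).item(), model(attack(x)).item())
```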