Examining Machine Learning for 5G and Beyond through an Adversarial Lens
- URL: http://arxiv.org/abs/2009.02473v1
- Date: Sat, 5 Sep 2020 06:30:26 GMT
- Title: Examining Machine Learning for 5G and Beyond through an Adversarial Lens
- Authors: Muhammad Usama, Rupendra Nath Mitra, Inaam Ilahi, Junaid Qadir, and
Mahesh K. Marina
- Abstract summary: We present a cautionary perspective on the use of AI/ML in the 5G context by highlighting the adversarial dimension spanning multiple types of ML.
We also discuss approaches to mitigate this adversarial ML risk, offer guidelines for evaluating the robustness of ML models, and call attention to issues surrounding ML oriented research in 5G more generally.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spurred by the recent advances in deep learning to harness rich information
hidden in large volumes of data and to tackle problems that are hard to
model/solve (e.g., resource allocation problems), there is currently tremendous
excitement in the mobile networks domain around the transformative potential of
data-driven AI/ML based network automation, control and analytics for 5G and
beyond. In this article, we present a cautionary perspective on the use of
AI/ML in the 5G context by highlighting the adversarial dimension spanning
multiple types of ML (supervised/unsupervised/RL) and support this through
three case studies. We also discuss approaches to mitigate this adversarial ML
risk, offer guidelines for evaluating the robustness of ML models, and call
attention to issues surrounding ML oriented research in 5G more generally.
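The adversarial dimension the abstract highlights can be illustrated with a minimal evasion-attack sketch using the fast gradient sign method (FGSM) against a toy logistic-regression classifier. The weights and feature vector below are hypothetical stand-ins for a learned network-traffic model, not values from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights of a toy binary traffic classifier.
w = np.array([0.8, -1.2, 0.5, 2.0])
b = -0.3

def predict(x):
    """Probability that input x belongs to the positive class."""
    return sigmoid(w @ x + b)

def fgsm(x, y, eps=0.2):
    """Craft an adversarial input with the fast gradient sign method.
    For binary cross-entropy loss, the gradient w.r.t. x is (p - y) * w."""
    p = predict(x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.5, -0.2, 0.3])  # benign feature vector, true label 1
y = 1.0
x_adv = fgsm(x, y)
print(predict(x), predict(x_adv))  # confidence drops after perturbation
```

With these toy weights, a small bounded perturbation (eps=0.2 per feature) is enough to push the classifier's confidence below the decision threshold, which is the essence of the evasion risk the article examines for 5G ML pipelines.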
Related papers
- Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report v1.5
  This technical report presents an updated and granular assessment of five critical dimensions: cyber offense, persuasion and manipulation, strategic deception, uncontrolled AI R&D, and self-replication. This work reflects our current understanding of AI frontier risks and urges collective action to mitigate these challenges.
  arXiv Detail & Related papers (2026-02-16T04:30:06Z)
- From Description to Detection: LLM based Extendable O-RAN Compliant Blind DoS Detection in 5G and Beyond
  Vulnerabilities in control-plane protocols pose significant security threats, such as Blind Denial of Service (DoS) attacks. We propose a novel anomaly detection framework that leverages the capabilities of Large Language Models (LLMs) in zero-shot mode. We show that detection quality relies on the semantic completeness of the description rather than its phrasing or length.
  arXiv Detail & Related papers (2025-10-08T00:13:02Z)
- AI/ML Life Cycle Management for Interoperable AI Native RAN
  Artificial intelligence (AI) and machine learning (ML) models are rapidly permeating the 5G Radio Access Network (RAN). These developments lay the foundation for AI-native transceivers as a key enabler for 6G.
  arXiv Detail & Related papers (2025-07-24T16:04:59Z)
- Does Machine Unlearning Truly Remove Model Knowledge? A Framework for Auditing Unlearning in LLMs
  We introduce a comprehensive auditing framework for unlearning evaluation comprising three benchmark datasets, six unlearning algorithms, and five prompt-based auditing methods. We evaluate the effectiveness and robustness of different unlearning strategies.
  arXiv Detail & Related papers (2025-05-29T09:19:07Z)
- LLMs' Suitability for Network Security: A Case Study of STRIDE Threat Modeling
  We examine the suitability of Large Language Models (LLMs) in network security. We use four prompting techniques with five LLMs to perform STRIDE classification of 5G threats. We point out key findings and detailed insights, along with an explanation of the possible underlying factors.
  arXiv Detail & Related papers (2025-05-07T03:37:49Z)
- Survey on AI-Generated Media Detection: From Non-MLLM to MLLM
  Methods for detecting AI-generated media have evolved rapidly. General-purpose detectors based on MLLMs integrate authenticity verification, explainability, and localization capabilities. Ethical and security considerations have emerged as critical global concerns.
  arXiv Detail & Related papers (2025-02-07T12:18:20Z)
- Breaking Focus: Contextual Distraction Curse in Large Language Models
  We investigate a critical vulnerability in Large Language Models (LLMs). This phenomenon arises when models fail to maintain consistent performance on questions modified with semantically coherent but irrelevant context. We propose an efficient tree-based search methodology to automatically generate CDV examples.
  arXiv Detail & Related papers (2025-02-03T18:43:36Z)
- MMAU: A Holistic Benchmark of Agent Capabilities Across Diverse Domains
  The Massive Multitask Agent Understanding (MMAU) benchmark features comprehensive offline tasks that eliminate the need for complex environment setups. It evaluates models across five domains: tool use, Directed Acyclic Graph (DAG) QA, data science and machine learning coding, contest-level programming, and mathematics. With a total of 20 meticulously designed tasks encompassing over 3K distinct prompts, MMAU provides a comprehensive framework for evaluating the strengths and limitations of LLM agents.
  arXiv Detail & Related papers (2024-07-18T00:58:41Z)
- A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models
  Large Language Models (LLMs) have demonstrated revolutionary abilities in language understanding and generation. Retrieval-Augmented Generation (RAG) can offer reliable and up-to-date external knowledge. RA-LLMs have emerged to harness external and authoritative knowledge bases rather than relying on the model's internal knowledge.
  arXiv Detail & Related papers (2024-05-10T02:48:45Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review
  Machine learning (ML) sees increasing use in the Internet-of-Things (IoT)-based smart grid. Adversarial distortion injected into the power signal will greatly affect the system's normal control and operation. It is imperative to conduct vulnerability assessments for MLsgAPPs applied in the context of safety-critical power systems.
  arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Learn to Unlearn: A Survey on Machine Unlearning
  This article presents a review of recent machine unlearning techniques, verification mechanisms, and potential attacks. We highlight emerging challenges and prospective research directions. We aim for this paper to provide valuable resources for integrating privacy, equity, and resilience into ML systems.
  arXiv Detail & Related papers (2023-05-12T14:28:02Z)
- Review on the Feasibility of Adversarial Evasion Attacks and Defenses for Network Intrusion Detection Systems
  Recent research raises many concerns in the cybersecurity field. An increasing number of researchers are studying the feasibility of such attacks on security systems based on machine learning algorithms.
  arXiv Detail & Related papers (2023-03-13T11:00:05Z)
- Wild Networks: Exposure of 5G Network Infrastructures to Adversarial Examples
  5G networks must support billions of heterogeneous devices while guaranteeing optimal Quality of Service (QoS). The 5G context is exposed to another type of adversarial ML attack that cannot be formalized with existing threat models. We propose a novel adversarial ML threat model that is particularly suited to 5G scenarios. Our attacks affect both the training and the inference stages, can degrade the performance of state-of-the-art ML systems, and have a lower entry barrier than previous attacks.
  arXiv Detail & Related papers (2022-07-04T15:52:54Z)
- A Survey on Machine Learning-based Misbehavior Detection Systems for 5G and Beyond Vehicular Networks
  Integrating V2X with 5G has enabled ultra-low latency and high-reliability V2X communications. Attacks have become more aggressive, and attackers have become more strategic. Many V2X Misbehavior Detection Systems (MDSs) have adopted this paradigm. Yet, analyzing these systems is a research gap, and developing effective ML-based MDSs is still an open issue.
  arXiv Detail & Related papers (2022-01-25T17:48:57Z)
- Practical Machine Learning Safety: A Survey and Primer
  Open-world deployment of machine learning algorithms in safety-critical applications such as autonomous vehicles must address a variety of ML vulnerabilities, including new models and training techniques to reduce generalization error, achieve domain adaptation, and detect outliers and adversarial attacks. Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of the ML algorithm from different aspects.
  arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety
  The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings. In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged. Our paper addresses both machine learning experts and safety engineers.
  arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Understanding the Usability Challenges of Machine Learning In High-Stakes Decision Making
  Machine learning (ML) is being applied to a diverse and ever-growing set of domains. In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions. We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
  arXiv Detail & Related papers (2021-03-02T22:50:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.