Traffic Modeling for Network Security and Privacy: Challenges Ahead
- URL: http://arxiv.org/abs/2503.22161v1
- Date: Fri, 28 Mar 2025 05:54:17 GMT
- Title: Traffic Modeling for Network Security and Privacy: Challenges Ahead
- Authors: Dinil Mon Divakaran
- Abstract summary: Traffic analysis using machine learning and deep learning models has made significant progress over the past decades. Recent models address various tasks in network security and privacy, including detection of anomalies and attacks, countering censorship, etc.
- Score: 1.0351405540566638
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Traffic analysis using machine learning and deep learning models has made significant progress over the past decades. These models address various tasks in network security and privacy, including detection of anomalies and attacks, countering censorship, etc. They also reveal privacy risks to users, as demonstrated by research on LLM token inference as well as fingerprinting (and counter-fingerprinting) of the websites users visit, IoT devices, and different applications. However, challenges remain in securing our networks from threats and attacks. After briefly reviewing the tasks and recent ML models in network security and privacy, we discuss the challenges that lie ahead.
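To make the setting concrete, below is a minimal sketch of the kind of flow-feature classifier this line of work builds on. The feature names, the synthetic data, and the scikit-learn model choice are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of an ML traffic classifier of the kind the abstract
# surveys. The flow features and labels are synthetic stand-ins; real
# pipelines extract them from packet captures (packet sizes,
# inter-arrival times, directions, etc.).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-flow features: [mean packet size, std packet size,
# mean inter-arrival time, flow duration, packet count].
X = rng.normal(size=(1000, 5))
y = rng.integers(0, 2, size=1000)  # 0 = benign, 1 = anomalous (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```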
Related papers
- A Review of the Duality of Adversarial Learning in Network Intrusion: Attacks and Countermeasures [0.0]
Adversarial attacks, particularly those targeting vulnerabilities in deep learning models, present a nuanced and substantial threat to cybersecurity.
Our study delves into adversarial learning threats such as Data Poisoning, Test Time Evasion, and Reverse Engineering.
Our research lays the groundwork for strengthening defense mechanisms to address the potential breaches in network security and privacy posed by adversarial attacks.
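As a concrete illustration of test-time evasion, one of the threat classes listed above, here is a minimal FGSM-style sketch against a linear detector; the weights, the input, and the step size are synthetic assumptions.

```python
# A hedged sketch of test-time evasion: an FGSM-style perturbation
# against a linear model standing in for a deployed detector.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([0.8, -1.2, 0.5])   # hypothetical trained weights
x = np.array([1.0, 0.3, -0.7])   # a "malicious" input, true label y = 1
y = 1.0

# Gradient of binary cross-entropy w.r.t. the input is (p - y) * w.
p = sigmoid(w @ x)
grad_x = (p - y) * w

# FGSM step: move the input in the direction that increases the loss,
# pushing the detector toward the wrong (benign) decision.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)
print("score before:", p, "after:", sigmoid(w @ x_adv))
```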
arXiv Detail & Related papers (2024-12-18T14:21:46Z)
- Deep Learning Model Security: Threats and Defenses [25.074630770554105]
Deep learning has transformed AI applications but faces critical security challenges.
This survey examines these vulnerabilities, detailing their mechanisms and impact on model integrity and confidentiality.
The survey concludes with future directions, emphasizing automated defenses, zero-trust architectures, and the security challenges of large AI models.
arXiv Detail & Related papers (2024-12-12T06:04:20Z)
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, the model inversion attack (MIA), has emerged; it aims to extract sensitive features of the private data used for training.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
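For intuition, a minimal sketch of the gradient-based flavor of MIA follows: starting from noise, an input is optimized until a frozen victim model assigns it high confidence for a target class. The tiny victim network and hyperparameters are stand-ins, not any specific surveyed method.

```python
import torch
import torch.nn as nn

# Frozen stand-in for a victim classifier whose training data is private.
victim = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()
for p in victim.parameters():
    p.requires_grad_(False)

target_class = 2
x = torch.randn(1, 16, requires_grad=True)  # start the reconstruction from noise
opt = torch.optim.Adam([x], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    # Maximize the target-class logit; the small L2 term keeps x plausible.
    loss = -victim(x)[0, target_class] + 0.01 * x.norm()
    loss.backward()
    opt.step()

print("class-representative reconstruction:", x.detach())
```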
arXiv Detail & Related papers (2024-11-15T08:09:28Z)
- New Emerged Security and Privacy of Pre-trained Model: a Survey and Outlook [54.24701201956833]
Security and privacy issues have undermined users' confidence in pre-trained models.
Current literature lacks a clear taxonomy of emerging attacks and defenses for pre-trained models.
The survey proposes a taxonomy that categorizes attacks and defenses into No-Change, Input-Change, and Model-Change approaches.
arXiv Detail & Related papers (2024-11-12T10:15:33Z)
- A Survey on the Application of Generative Adversarial Networks in Cybersecurity: Prospective, Direction and Open Research Scopes [1.3631461603291568]
Generative Adversarial Networks (GANs) have emerged as powerful solutions for addressing the constantly changing security issues.
This survey examines the significance of deep learning models, specifically GANs, in strengthening cybersecurity defenses, focusing on how GANs can serve as influential tools across security domains.
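As a rough illustration of the role GANs can play here, the sketch below trains a toy generator to mimic synthetic network-flow feature vectors (e.g., for data augmentation); the architecture, dimensions, and "real" distribution are all assumptions for the example.

```python
# A compact GAN sketch: a generator learns to mimic (synthetic)
# network-flow feature vectors; the discriminator tells real from fake.
import torch
import torch.nn as nn

FEAT, NOISE = 5, 8
G = nn.Sequential(nn.Linear(NOISE, 32), nn.ReLU(), nn.Linear(32, FEAT))
D = nn.Sequential(nn.Linear(FEAT, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(500):
    real = torch.randn(64, FEAT) * 0.5 + 2.0   # stand-in "real" flows
    fake = G(torch.randn(64, NOISE))

    # Discriminator: push real -> 1 and fake -> 0.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator into predicting 1 on fakes.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print("synthetic flow features:\n", G(torch.randn(3, NOISE)).detach())
```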
arXiv Detail & Related papers (2024-07-11T19:51:48Z)
- Large Language Models for Cyber Security: A Systematic Literature Review [14.924782327303765]
We conduct a comprehensive review of the literature on the application of Large Language Models in cybersecurity (LLM4Security).
We observe that LLMs are being applied to a wide range of cybersecurity tasks, including vulnerability detection, malware analysis, network intrusion detection, and phishing detection.
We also identify several promising techniques for adapting LLMs to specific cybersecurity domains, such as fine-tuning, transfer learning, and domain-specific pre-training.
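A hedged sketch of the fine-tuning route follows, using Hugging Face Transformers for a toy phishing-detection task; the checkpoint name, the labels, and the two-example dataset are illustrative choices, not taken from the review.

```python
# Fine-tuning a small encoder model for phishing-URL detection.
# Everything here is a toy stand-in for a real labeled corpus.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # illustrative checkpoint choice
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name,
                                                           num_labels=2)

texts = ["http://paypa1-login.example.com/verify",
         "https://github.com/user/repo"]
labels = [1, 0]  # 1 = phishing, 0 = benign (toy examples)
enc = tok(texts, truncation=True, padding=True)

class ToyDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ToyDataset(),
)
trainer.train()
```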
arXiv Detail & Related papers (2024-05-08T02:09:17Z)
- Privacy in Large Language Models: Attacks, Defenses and Future Directions [84.73301039987128]
We analyze the current privacy attacks targeting large language models (LLMs) and categorize them according to the adversary's assumed capabilities.
We present a detailed overview of prominent defense strategies that have been developed to counter these privacy attacks.
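One commonly categorized attack in this space is loss-based membership inference; the sketch below flags texts on which a causal LM has unusually low perplexity as likely training members. The GPT-2 checkpoint and the threshold are illustrative assumptions.

```python
# Loss-based membership inference sketch: low perplexity suggests the
# model may have seen the text during training. The threshold would
# need calibration against reference data in practice.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

THRESHOLD = 40.0  # hypothetical calibration value
for text in ["The quick brown fox jumps over the lazy dog.",
             "xq zvrk plomn qwerty asdf uiop"]:
    ppl = perplexity(text)
    print(f"ppl={ppl:8.1f}  member-guess={ppl < THRESHOLD}  | {text}")
```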
arXiv Detail & Related papers (2023-10-16T13:23:54Z)
- Hiding Visual Information via Obfuscating Adversarial Perturbations [47.315523613407244]
We propose an adversarial visual information hiding method to protect the visual privacy of data.
Specifically, the method generates obfuscating adversarial perturbations to obscure the visual information of the data.
Experimental results on recognition and classification tasks demonstrate that the proposed method can effectively hide visual information.
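The general mechanism can be sketched as bounded gradient ascent on a recognition model's loss, as below; the tiny CNN, the step sizes, and the PGD-style loop are assumptions for illustration, not the paper's exact method.

```python
# Obfuscating-perturbation sketch: iteratively add a bounded
# perturbation that maximizes a recognition model's loss, so automated
# models can no longer extract the visual information.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in for a real recognition model.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(8, 10)).eval()

image = torch.rand(1, 3, 32, 32)  # stand-in image
label = torch.tensor([3])         # its true class
eps, alpha, steps = 8 / 255, 2 / 255, 10

x = image.clone()
for _ in range(steps):
    x = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Gradient-ascent step, projected back into the eps-ball around the image.
    x = x + alpha * x.grad.sign()
    x = image + (x - image).clamp(-eps, eps)
    x = x.clamp(0, 1)

print("prediction on original:", model(image).argmax().item(),
      "| on obfuscated:", model(x).argmax().item())
```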
arXiv Detail & Related papers (2022-09-30T08:23:26Z)
- A Comprehensive Survey on the Cyber-Security of Smart Grids: Cyber-Attacks, Detection, Countermeasure Techniques, and Future Directions [0.5735035463793008]
We provide a classification of attacks based on the Open System Interconnection (OSI) model.
We discuss in more detail the cyber-attacks that can target the different layers of smart grid communication networks.
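To show what an OSI-based classification can look like in practice, here is a small illustrative mapping; the per-layer examples are common textbook attacks, not the survey's exact taxonomy.

```python
# Illustrative OSI-layer -> example-attacks mapping for smart grid
# communication networks; entries are generic examples, not the
# survey's own classification.
OSI_ATTACKS: dict[str, list[str]] = {
    "Physical":     ["jamming", "cable tapping"],
    "Data Link":    ["MAC spoofing", "ARP poisoning"],
    "Network":      ["IP spoofing", "route manipulation"],
    "Transport":    ["SYN flooding", "TCP session hijacking"],
    "Session":      ["session hijacking"],
    "Presentation": ["SSL stripping"],
    "Application":  ["phishing", "malware delivery", "false data injection"],
}

for layer, attacks in OSI_ATTACKS.items():
    print(f"{layer:>12}: {', '.join(attacks)}")
```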
arXiv Detail & Related papers (2022-06-22T14:55:06Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
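For intuition about the decision rule these attacks target, the sketch below checks whether a perturbed attacker embedding crosses a verification threshold for the victim; the embeddings, the nudge toward the victim, and the threshold are all synthetic placeholders (the shift is simulated directly in embedding space rather than via image perturbations).

```python
# Face-verification decision rule: impersonation succeeds when the
# attacker's (perturbed) embedding falls within the match threshold
# of the victim's enrolled embedding.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
victim = rng.normal(size=128)    # victim's enrolled embedding (synthetic)
attacker = rng.normal(size=128)  # attacker's natural embedding (synthetic)

# Simulate the effect of an imperceptible perturbation: nudge the
# attacker's embedding most of the way toward the victim's.
perturbed = attacker + 0.9 * (victim - attacker)

THRESHOLD = 0.6  # hypothetical verification threshold
print("natural match:  ", cosine(victim, attacker) > THRESHOLD)
print("perturbed match:", cosine(victim, perturbed) > THRESHOLD)
```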
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.