Adversarial Pre-Padding: Generating Evasive Network Traffic Against Transformer-Based Classifiers
- URL: http://arxiv.org/abs/2510.25810v1
- Date: Wed, 29 Oct 2025 09:45:27 GMT
- Title: Adversarial Pre-Padding: Generating Evasive Network Traffic Against Transformer-Based Classifiers
- Authors: Quanliang Jing, Xinxin Fan, Yanyan Liu, Jingping Bi
- Abstract summary: We propose a novel and effective adversarial traffic-generating approach. Our approach has two key innovations: (i) a pre-padding strategy is proposed to modify packets, which effectively overcomes the limitations of existing research against transformer-based models for network traffic classification; and (ii) a reinforcement learning model is employed to optimize network traffic perturbations.
- Score: 7.283896930735145
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: To date, traffic obfuscation techniques have been widely adopted to protect network data privacy and security by obscuring the true patterns of traffic. Nevertheless, as pre-trained models emerge, especially transformer-based classifiers, existing traffic obfuscation methods become increasingly vulnerable, as witnessed by current studies reporting traffic classification accuracy of 99% or higher. To counter such high-performance transformer-based classification models, in this paper we propose a novel and effective adversarial traffic-generating approach (AdvTraffic; the code and data are available at http://xxx). Our approach has two key innovations: (i) a pre-padding strategy is proposed to modify packets, which effectively overcomes the limitations of existing research against transformer-based models for network traffic classification; and (ii) a reinforcement learning model is employed to optimize network traffic perturbations, aiming to maximize adversarial effectiveness against transformer-based classification models. To the best of our knowledge, this is the first attempt to apply adversarial perturbation techniques to defend against transformer-based traffic classifiers. Furthermore, our method can be easily deployed in practical network environments. Finally, multi-faceted experiments are conducted across several real-world datasets, and the experimental results demonstrate that our proposed method can effectively undermine transformer-based classifiers, significantly reducing classification accuracy from 99% to as low as 25.68%.
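The two ingredients named in the abstract (pre-padding plus a reinforcement-learning search over perturbations) can be pictured with a small, self-contained sketch. The snippet below is not the authors' AdvTraffic implementation: `classify_proba` is a stand-in for a black-box transformer classifier, and the REINFORCE-style search over padding lengths is only one plausible way to instantiate the RL component.

```python
# Minimal sketch of the pre-padding idea, not the authors' implementation.
# `classify_proba` stands in for any black-box transformer-based traffic
# classifier; its interface and behaviour are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def pre_pad(packet: bytes, pad: bytes) -> bytes:
    """Insert adversarial padding in front of the original payload."""
    return pad + packet

def classify_proba(packet: bytes) -> float:
    """Placeholder: probability the classifier assigns to the true class."""
    # In practice this would query the victim transformer model.
    return 1.0 / (1.0 + 0.01 * len(packet))  # dummy score, monotone in length

def rl_optimize_padding(packet: bytes, max_len: int = 64, episodes: int = 200) -> bytes:
    """Toy REINFORCE-style search over padding lengths.

    The action is the padding length; the reward is the drop in the
    classifier's confidence on the true class.
    """
    prefs = np.zeros(max_len + 1)          # preference per padding length
    base = classify_proba(packet)
    lr = 0.5
    for _ in range(episodes):
        probs = np.exp(prefs - prefs.max())
        probs /= probs.sum()
        a = rng.choice(max_len + 1, p=probs)
        pad = bytes(rng.integers(0, 256, size=a, dtype=np.uint8))
        reward = base - classify_proba(pre_pad(packet, pad))
        # Policy-gradient update: reinforce actions that reduce confidence.
        grad = -probs
        grad[a] += 1.0
        prefs += lr * reward * grad
    best = int(np.argmax(prefs))
    return bytes(rng.integers(0, 256, size=best, dtype=np.uint8))

if __name__ == "__main__":
    pkt = bytes(range(64))
    pad = rl_optimize_padding(pkt)
    print(len(pad), classify_proba(pkt), classify_proba(pre_pad(pkt, pad)))
```

In the real attack the reward would come from querying the victim classifier on the padded traffic, and the action space would cover padding content as well as length.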
Related papers
- TAO-Net: Two-stage Adaptive OOD Classification Network for Fine-grained Encrypted Traffic Classification [22.045881030012197]
We propose a Two-stage Adaptive OOD classification Network (TAO-Net) that achieves accurate classification for both In-Distribution (ID) and Out-of-Distribution (OOD) encrypted traffic. Experiments on three datasets demonstrate that TAO-Net achieves 96.81-97.70% macro-precision and 96.77-97.68% macro-F1.
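The summary does not spell out how the two stages interact, so the following is only a generic two-stage ID/OOD gate for comparison: stage 1 rejects flows whose maximum softmax confidence falls below a threshold, stage 2 assigns a fine-grained class to the remaining in-distribution flows. The threshold value and softmax gating are assumptions, not TAO-Net's mechanism.

```python
# Generic two-stage ID/OOD gate (illustrative baseline, not TAO-Net).
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def two_stage_predict(logits: np.ndarray, threshold: float = 0.7):
    probs = softmax(logits)
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    # Stage 1: OOD detection by confidence; stage 2: fine-grained ID labels.
    return np.where(conf >= threshold, labels, -1)  # -1 marks OOD traffic

print(two_stage_predict(np.array([[4.0, 0.1, 0.2], [0.4, 0.5, 0.45]])))
```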
arXiv Detail & Related papers (2025-12-11T19:53:39Z) - Convolutions are Competitive with Transformers for Encrypted Traffic Classification with Pre-training [12.150588010496165]
We investigate whether convolutions, with linear complexity and implicit positional encoding, are competitive with Transformers in encrypted traffic classification with pre-training. We propose NetConv, a novel pre-trained convolution model for encrypted traffic classification. We show that NetConv improves average classification performance by 6.88% and model throughput by 7.41X over existing pre-trained models.
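A minimal sketch of the "convolutions instead of attention" idea on raw packet bytes is given below; the embedding front end, layer sizes, and pooling are illustrative assumptions rather than NetConv's actual pre-trained architecture.

```python
# Illustrative convolutional traffic classifier over raw packet bytes.
import torch
import torch.nn as nn

class ByteConvClassifier(nn.Module):
    def __init__(self, num_classes: int = 10, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(256, dim)          # one vector per byte value
        self.convs = nn.Sequential(
            nn.Conv1d(dim, dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.head = nn.Linear(dim, num_classes)

    def forward(self, byte_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(byte_ids).transpose(1, 2)     # (B, dim, L)
        x = self.convs(x).mean(dim=2)                # global average pooling
        return self.head(x)

model = ByteConvClassifier()
logits = model(torch.randint(0, 256, (2, 128)))      # two flows of 128 bytes
print(logits.shape)                                   # torch.Size([2, 10])
```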
arXiv Detail & Related papers (2025-08-04T02:42:44Z) - Language of Network: A Generative Pre-trained Model for Encrypted Traffic Comprehension [16.795038178588324]
Deep learning is currently the predominant approach for encrypted traffic classification through feature analysis. We present GBC, a generative model based on pre-training for encrypted traffic comprehension. It achieves superior results in both traffic classification and generation tasks, resulting in a 5% improvement in F1 score compared to state-of-the-art methods for classification tasks.
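Generative pre-training on traffic can be sketched as next-byte prediction over flow bytes. The tiny GRU backbone below is an assumption chosen for brevity and is not GBC's architecture; it only shows the shape of the objective.

```python
# Sketch of a generative (next-byte) pre-training objective on traffic bytes.
import torch
import torch.nn as nn

class TinyTrafficLM(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(256, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, 256)

    def forward(self, bytes_in: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(self.embed(bytes_in))
        return self.head(h)                 # logits over the next byte

model = TinyTrafficLM()
seq = torch.randint(0, 256, (4, 64))        # 4 flows, 64-byte windows
logits = model(seq[:, :-1])                 # predict byte t+1 from bytes <= t
loss = nn.functional.cross_entropy(logits.reshape(-1, 256), seq[:, 1:].reshape(-1))
print(loss.item())
```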
arXiv Detail & Related papers (2025-05-26T04:04:29Z) - Less is More: Simplifying Network Traffic Classification Leveraging RFCs [3.8623569699070353]
We present NetMatrix, a minimalistic representation of network traffic that eliminates noisy attributes and focuses on meaningful features. Compared to selected baselines, experimental evaluations demonstrate that LiM reduces resource consumption by orders of magnitude. This study underscores the effectiveness of simplicity in traffic representation and machine learning model selection, paving the way towards resource-efficient network traffic classification.
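A minimalistic flow representation of the kind the summary describes might look like the sketch below: a handful of summary statistics per flow instead of raw payloads. The specific attributes NetMatrix keeps are not listed in the summary, so the fields here are illustrative only.

```python
# Hedged sketch of a compact flow feature vector (illustrative fields only).
import numpy as np

def minimal_flow_features(pkt_sizes, directions, iat_s):
    """pkt_sizes: bytes per packet; directions: +1 up / -1 down; iat_s: inter-arrival times (s)."""
    sizes = np.asarray(pkt_sizes, dtype=float)
    dirs = np.asarray(directions, dtype=float)
    iat = np.asarray(iat_s, dtype=float)
    return np.array([
        sizes.mean(), sizes.std(),            # packet-size statistics
        (dirs > 0).mean(),                    # fraction of upstream packets
        iat.mean(), iat.std(),                # timing statistics
        len(sizes),                           # flow length in packets
    ])

print(minimal_flow_features([60, 1500, 52], [1, -1, 1], [0.0, 0.01, 0.2]))
```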
arXiv Detail & Related papers (2025-02-01T22:55:14Z) - MIETT: Multi-Instance Encrypted Traffic Transformer for Encrypted Traffic Classification [59.96233305733875]
Classifying traffic is essential for detecting security threats and optimizing network management. We propose a Multi-Instance Encrypted Traffic Transformer (MIETT) to capture both token-level and packet-level relationships. MIETT achieves strong results across five datasets, demonstrating its effectiveness in classifying encrypted traffic and understanding complex network behaviors.
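The token-level/packet-level split can be sketched as two stacked encoders: one attends over bytes within each packet, the other over per-packet embeddings within a flow. Layer sizes and pooling below are assumptions; this is not MIETT's actual architecture.

```python
# Rough two-level attention sketch: token-level then packet-level encoders.
import torch
import torch.nn as nn

class TwoLevelTrafficEncoder(nn.Module):
    def __init__(self, dim: int = 64, num_classes: int = 10):
        super().__init__()
        self.byte_embed = nn.Embedding(256, dim)
        self.token_enc = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.packet_enc = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, flows: torch.Tensor) -> torch.Tensor:   # flows: (B, P packets, L bytes)
        B, P, L = flows.shape
        x = self.byte_embed(flows).view(B * P, L, -1)
        x = self.token_enc(x).mean(dim=1)          # token-level attention per packet
        x = x.view(B, P, -1)
        x = self.packet_enc(x).mean(dim=1)         # packet-level attention per flow
        return self.head(x)

model = TwoLevelTrafficEncoder()
print(model(torch.randint(0, 256, (2, 5, 32))).shape)   # torch.Size([2, 10])
```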
arXiv Detail & Related papers (2024-12-19T12:52:53Z) - Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [81.93945602120453]
We introduce an approach that is both general and parameter-efficient for face forgery detection. We design a forgery-style mixture formulation that augments the diversity of forgery source domains. We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z) - Towards a Transformer-Based Pre-trained Model for IoT Traffic Classification [0.6060461053918144]
State-of-the-art classification methods are based on deep learning.
In real-life situations, where there is a scarce amount of IoT traffic data, such models would not perform well.
We propose the IoT Traffic Classification Transformer (ITCT), a transformer-based model pre-trained on a large labeled IoT traffic dataset.
Experiments demonstrated that the ITCT model significantly outperforms existing models, achieving an overall accuracy of 82%.
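The pre-train-then-fine-tune recipe implied by the summary can be sketched as freezing a pre-trained encoder and training only a small classification head on the scarce labelled IoT data. The encoder, data loader, and hyperparameters below are placeholders, not the ITCT release.

```python
# Illustrative fine-tuning recipe: reuse a pre-trained traffic encoder,
# train only the classification head. Encoder and loader are placeholders.
import torch
import torch.nn as nn

def finetune_head(encoder: nn.Module, head: nn.Module, loader, epochs: int = 3) -> nn.Module:
    for p in encoder.parameters():
        p.requires_grad = False                       # keep pre-trained weights fixed
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for tokens, labels in loader:                 # tokens: (B, L) byte ids
            with torch.no_grad():
                feats = encoder(tokens).mean(dim=1)   # pooled flow embedding, assumes (B, L, D) output
            loss = loss_fn(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```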
arXiv Detail & Related papers (2024-07-26T19:13:11Z) - Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box adversarial attack that aims to craft adversarial perturbations on the surrogate model and then apply such perturbations to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on a widely used dataset demonstrate the effectiveness of our attack method, with a 12.85% higher transfer-attack success rate than state-of-the-art methods.
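The basic transfer setting that LLTA builds on is: craft a perturbation on a white-box surrogate, then apply it unchanged to the unseen victim. The FGSM step below sketches only that baseline setting; LLTA's data/model-augmentation meta-learning is not reproduced here.

```python
# Baseline transfer-attack setup: FGSM on a surrogate, evaluated on a victim.
import torch
import torch.nn as nn

def fgsm_on_surrogate(surrogate: nn.Module, x: torch.Tensor, y: torch.Tensor,
                      eps: float = 0.03) -> torch.Tensor:
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(surrogate(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()   # perturbation crafted on the surrogate

# Usage sketch: x_adv = fgsm_on_surrogate(surrogate, x, y); the victim model is
# then evaluated on x_adv to measure the transfer success rate.
```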
arXiv Detail & Related papers (2021-12-10T07:24:21Z) - Modelling Adversarial Noise for Adversarial Defense [96.56200586800219]
Adversarial defenses typically focus on exploiting adversarial examples to remove adversarial noise or to train an adversarially robust target model.
Motivated by the observation that the relationship between adversarial data and natural data can help infer clean data from adversarial data and thus obtain the final correct prediction,
we study how to model adversarial noise by learning the transition relationship in the label space, so that adversarial labels can be used to improve adversarial accuracy.
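The label-space transition idea can be illustrated with a small example: if T[i, j] approximates the probability that the model predicts class j on an adversarial input whose true class is i, the adversarial prediction can be re-read through T to recover the clean label. The matrix below is hand-made; the paper learns this relationship, by a procedure the summary does not detail.

```python
# Toy label-transition correction (hand-made T, uniform prior over true classes).
import numpy as np

def most_likely_clean_label(pred_j: int, T: np.ndarray) -> int:
    # Posterior over the true label given that the model predicted j on
    # adversarial data is proportional to T[:, j] under a uniform prior.
    return int(np.argmax(T[:, pred_j]))

T = np.array([[0.1, 0.8, 0.1],   # true class 0 is usually flipped to class 1
              [0.1, 0.3, 0.6],
              [0.2, 0.1, 0.7]])
print(most_likely_clean_label(1, T))   # -> 0
```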
arXiv Detail & Related papers (2021-09-21T01:13:26Z) - Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are most effective.
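The augmentation loop itself is simple once an invertible map between data and latent space exists: encode, perturb the latent code, decode back. The toy element-wise affine "flow" below only stands in for a trained normalizing flow so the example runs.

```python
# Latent-space augmentation sketch with a toy invertible map.
import numpy as np

class ToyAffineFlow:
    """Invertible per-dimension affine map z = (x - shift) / scale."""
    def __init__(self, scale, shift):
        self.scale, self.shift = np.asarray(scale), np.asarray(shift)
    def forward(self, x):
        return (x - self.shift) / self.scale
    def inverse(self, z):
        return z * self.scale + self.shift

def latent_augment(flow, x, eps: float = 0.1, seed: int = 0):
    rng = np.random.default_rng(seed)
    z = flow.forward(x)
    z_perturbed = z + eps * rng.standard_normal(z.shape)  # perturb in latent space
    return flow.inverse(z_perturbed)                      # map back to data space

flow = ToyAffineFlow(scale=[2.0, 0.5], shift=[1.0, -1.0])
print(latent_augment(flow, np.array([3.0, 0.0])))
```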
arXiv Detail & Related papers (2021-08-18T03:20:00Z) - DoS and DDoS Mitigation Using Variational Autoencoders [15.23225419183423]
We explore the potential of Variational Autoencoders to serve as a component within an intelligent security solution.
Two methods based on the ability of Variational Autoencoders to learn latent representations from network traffic flows are proposed.
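One common way a VAE component can flag DoS traffic is to score flows by reconstruction error under the learned latent model. The sketch below shows that scoring path only; the two methods actually proposed in the paper are not reproduced, and the layer sizes are assumptions.

```python
# Minimal VAE over flow features, with reconstruction error as anomaly score.
import torch
import torch.nn as nn

class FlowVAE(nn.Module):
    def __init__(self, in_dim: int = 16, latent: int = 4):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * latent)   # outputs mean and log-variance
        self.dec = nn.Linear(latent, in_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def anomaly_score(model: FlowVAE, x: torch.Tensor) -> torch.Tensor:
    recon, _, _ = model(x)
    return ((recon - x) ** 2).mean(dim=-1)          # high error => suspicious flow

model = FlowVAE()
print(anomaly_score(model, torch.randn(3, 16)))
```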
arXiv Detail & Related papers (2021-05-14T15:38:40Z) - Robustness Verification for Transformers [165.25112192811764]
We develop the first robustness verification algorithm for Transformers.
The certified robustness bounds computed by our method are significantly tighter than those by naive Interval Bound Propagation.
These bounds also shed light on interpreting Transformers as they consistently reflect the importance of different words in sentiment analysis.
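For contrast, the naive Interval Bound Propagation baseline mentioned above can be written down in a few lines for a single linear+ReLU layer: propagate element-wise input bounds to output bounds. The paper's verifier for full Transformer layers needs tighter, attention-aware bounds that are not shown here.

```python
# Naive IBP for one linear + ReLU layer (the baseline, not the paper's method).
import numpy as np

def ibp_linear(lo, hi, W, b):
    """Interval bounds through y = W x + b for x in the box [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    y_lo = W_pos @ lo + W_neg @ hi + b
    y_hi = W_pos @ hi + W_neg @ lo + b
    return y_lo, y_hi

def ibp_relu(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.0, 0.1])
lo, hi = ibp_relu(*ibp_linear(np.array([-0.1, 0.2]), np.array([0.1, 0.4]), W, b))
print(lo, hi)   # certified output range for all inputs inside the box
```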
arXiv Detail & Related papers (2020-02-16T17:16:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.