One Train for Two Tasks: An Encrypted Traffic Classification Framework
Using Supervised Contrastive Learning
- URL: http://arxiv.org/abs/2402.07501v1
- Date: Mon, 12 Feb 2024 09:10:09 GMT
- Title: One Train for Two Tasks: An Encrypted Traffic Classification Framework
Using Supervised Contrastive Learning
- Authors: Haozhen Zhang, Xi Xiao, Le Yu, Qing Li, Zhen Ling, Ye Zhang
- Abstract summary: We propose an effective model named Contrastive Learning Enhanced Temporal Fusion Encoder (CLE-TFE).
In particular, we utilize supervised contrastive learning to enhance the packet-level and flow-level representations.
We also propose cross-level multi-task learning, which simultaneously accomplishes the packet-level and flow-level classification tasks in the same model with a single training run.
- Score: 18.63871240173137
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As network security receives widespread attention, encrypted traffic
classification has become the current research focus. However, existing methods
conduct traffic classification without sufficiently considering the common
characteristics between data samples, leading to suboptimal performance.
Moreover, they train the packet-level and flow-level classification tasks
independently, which is redundant because the packet representations learned in
the packet-level task can be exploited by the flow-level task. Therefore, in
this paper, we propose an effective model named Contrastive Learning Enhanced
Temporal Fusion Encoder (CLE-TFE). In particular, we utilize supervised
contrastive learning to enhance the packet-level and flow-level representations
and perform graph data augmentation on the byte-level traffic graph so that the
fine-grained semantic-invariant characteristics between bytes can be captured
through contrastive learning. We also propose cross-level multi-task learning,
which simultaneously accomplishes the packet-level and flow-level
classification tasks in the same model with a single training run. Experiments
show that CLE-TFE achieves the best overall performance on both tasks, while
its computational overhead (i.e., floating point operations, FLOPs) is only
about 1/14 of that of pre-trained models such as ET-BERT. We release the code at
https://github.com/ViktorAxelsen/CLE-TFE
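
The abstract above combines three ingredients: graph data augmentation on the byte-level traffic graph, a supervised contrastive objective on packet-level and flow-level representations, and one joint objective that trains both classification tasks at once. The sketch below is a rough, hedged illustration of those ideas only; it is not the authors' released implementation, and the function names, the edge-dropping augmentation, the loss weight `beta`, and all tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F


def drop_edges(edge_index, drop_prob=0.1):
    """Hypothetical byte-graph augmentation: randomly drop a fraction of edges.

    edge_index: (2, E) tensor listing graph edges. The paper's actual
    augmentation may differ; this only stands in for "graph data augmentation".
    """
    keep = torch.rand(edge_index.size(1)) >= drop_prob
    return edge_index[:, keep]


def sup_con_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss in the style of Khosla et al. (2020).

    features: (N, D) L2-normalized representations (packet- or flow-level).
    labels:   (N,) integer class labels; samples sharing a label are positives.
    """
    sim = features @ features.T / temperature                     # (N, N) similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))               # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)    # log-softmax per anchor
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # mean log-probability over each anchor's positives (zeros where not positive)
    mean_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    has_pos = pos_mask.any(dim=1)                                 # skip anchors with no positives
    return -mean_log_prob_pos[has_pos].mean()


def cross_level_multitask_loss(pkt_repr, pkt_logits, pkt_labels,
                               flow_repr, flow_logits, flow_labels,
                               beta=0.1):
    """One joint objective covering both tasks; the weighting is a placeholder."""
    ce = F.cross_entropy(pkt_logits, pkt_labels) + F.cross_entropy(flow_logits, flow_labels)
    con = sup_con_loss(F.normalize(pkt_repr, dim=1), pkt_labels) \
        + sup_con_loss(F.normalize(flow_repr, dim=1), flow_labels)
    return ce + beta * con
```

Here `pkt_repr` and `flow_repr` would come from the model's packet-level and flow-level encoders, and one backward pass on the combined loss updates both heads, which is the "one training for two tasks" point of the abstract; the actual CLE-TFE encoders, augmentation, and weighting should be taken from the released code.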
Related papers
- Investigating Self-Supervised Methods for Label-Efficient Learning [27.029542823306866]
We study different self-supervised pretext tasks, namely contrastive learning, clustering, and masked image modelling, for their low-shot capabilities.
We introduce a framework involving both masked image modelling and clustering as pretext tasks, which performs better across all low-shot downstream tasks.
When testing the model on full-scale datasets, we show performance gains in multi-class classification, multi-label classification, and semantic segmentation.
arXiv Detail & Related papers (2024-06-25T10:56:03Z)
- Enhancing Visual Continual Learning with Language-Guided Supervision [76.38481740848434]
Continual learning aims to empower models to learn new tasks without forgetting previously acquired knowledge.
We argue that the scarce semantic information conveyed by the one-hot labels hampers the effective knowledge transfer across tasks.
Specifically, we use pre-trained language models (PLMs) to generate semantic targets for each class, which are frozen and serve as supervision signals.
arXiv Detail & Related papers (2024-03-24T12:41:58Z)
- Auxiliary Tasks Enhanced Dual-affinity Learning for Weakly Supervised Semantic Segmentation [79.05949524349005]
We propose AuxSegNet+, a weakly supervised auxiliary learning framework to explore the rich information from saliency maps.
We also propose a cross-task affinity learning mechanism to learn pixel-level affinities from the saliency and segmentation feature maps.
arXiv Detail & Related papers (2024-03-02T10:03:21Z)
- Enhancing Self-Supervised Learning for Remote Sensing with Elevation Data: A Case Study with Scarce And High Level Semantic Labels [1.534667887016089]
This work proposes a hybrid unsupervised and supervised learning method to pre-train models applied in Earth observation downstream tasks.
We combine a contrastive approach to pre-train models with a pixel-wise regression pre-text task to predict coarse elevation maps.
arXiv Detail & Related papers (2023-04-13T23:01:11Z)
- Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised Semantic Segmentation [88.49669148290306]
We propose a novel weakly supervised multi-task framework called AuxSegNet to leverage saliency detection and multi-label image classification as auxiliary tasks.
Inspired by their similar structured semantics, we also propose to learn a cross-task global pixel-level affinity map from the saliency and segmentation representations.
The learned cross-task affinity can be used to refine saliency predictions and propagate CAM maps to provide improved pseudo labels for both tasks.
arXiv Detail & Related papers (2021-07-25T11:39:58Z)
- Visual Transformer for Task-aware Active Learning [49.903358393660724]
We present a novel pipeline for pool-based Active Learning.
Our method exploits accessible unlabelled examples during training to estimate their correlation with the labelled examples.
Visual Transformer models non-local visual concept dependency between labelled and unlabelled examples.
arXiv Detail & Related papers (2021-06-07T17:13:59Z)
- Learning to Relate Depth and Semantics for Unsupervised Domain Adaptation [87.1188556802942]
We present an approach for encoding visual task relationships to improve model performance in an Unsupervised Domain Adaptation (UDA) setting.
We propose a novel Cross-Task Relation Layer (CTRL), which encodes task dependencies between the semantic and depth predictions.
Furthermore, we propose an Iterative Self-Learning (ISL) training scheme, which exploits semantic pseudo-labels to provide extra supervision on the target domain.
arXiv Detail & Related papers (2021-05-17T13:42:09Z)
- Revisiting LSTM Networks for Semi-Supervised Text Classification via Mixed Objective Function [106.69643619725652]
We develop a training strategy that allows even a simple BiLSTM model, when trained with cross-entropy loss, to achieve competitive results.
We report state-of-the-art results for the text classification task on several benchmark datasets.
arXiv Detail & Related papers (2020-09-08T21:55:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.