Replication: Contrastive Learning and Data Augmentation in Traffic
Classification Using a Flowpic Input Representation
- URL: http://arxiv.org/abs/2309.09733v2
- Date: Sat, 14 Oct 2023 09:35:09 GMT
- Title: Replication: Contrastive Learning and Data Augmentation in Traffic
Classification Using a Flowpic Input Representation
- Authors: Alessandro Finamore, Chao Wang, Jonatan Krolikowski, Jose M. Navarro,
Fuxing Chen, Dario Rossi
- Abstract summary: We reproduce [16] on the same datasets and replicate its most salient aspect (the importance of data augmentation) on three additional public datasets.
While we confirm most of the original results, we also found a 20% accuracy drop on some of the investigated scenarios due to a data shift in the original dataset.
- Score: 47.95762911696397
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the last few years we have witnessed a renewed interest in Traffic
Classification (TC) driven by the rise of Deep Learning (DL). Yet, the vast
majority of the TC literature lacks code artifacts, performance assessments across
datasets, and reference comparisons against Machine Learning (ML) methods. Among
those works, a recent study from IMC22 [16] is worthy of attention since it
adopts recent DL methodologies (namely, few-shot learning, self-supervision via
contrastive learning, and data augmentation) that are appealing for networking as they
enable learning from a few samples and transferring across datasets. The main
result of [16] on the UCDAVIS19, ISCX-VPN and ISCX-Tor datasets is that, with
such DL methodologies, 100 input samples are enough to achieve very high
accuracy using an input representation called "flowpic" (i.e., a per-flow 2D
histogram of packet size evolution over time). In this paper (i) we
reproduce [16] on the same datasets and (ii) we replicate its most salient
aspect (the importance of data augmentation) on three additional public
datasets (MIRAGE19, MIRAGE22 and UTMOBILENET21). While we confirm most of the
original results, we also found a 20% accuracy drop on some of the investigated
scenarios, due to a data shift in the original dataset that we uncovered.
Additionally, our study validates that the data augmentation strategies studied
in [16] perform well on other datasets too. In the spirit of reproducibility
and replicability, we make all artifacts (code and data) available to the
research community at https://tcbenchstack.github.io/tcbench/
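The abstract describes a flowpic as a per-flow 2D histogram binning packets by arrival time and packet size. A minimal sketch of that idea is below, assuming NumPy; the function name, the 32x32 resolution, and the time/size caps are illustrative choices, not taken from the paper's code (the original work uses its own resolutions and preprocessing).

```python
import numpy as np

def flowpic(timestamps, sizes, dim=32, max_time=15.0, max_size=1500):
    """Build a dim x dim "flowpic"-style 2D histogram for one flow.

    timestamps: per-packet arrival times in seconds (relative to flow start)
    sizes:      per-packet sizes in bytes
    Rows index packet-size bins, columns index time bins; each cell counts
    how many packets fall into that (size, time) bucket.
    """
    t = np.clip(np.asarray(timestamps, dtype=float), 0.0, max_time)
    s = np.clip(np.asarray(sizes, dtype=float), 0.0, max_size)
    hist, _, _ = np.histogram2d(
        s, t,
        bins=dim,
        range=[[0.0, max_size], [0.0, max_time]],
    )
    return hist

# Toy flow with 4 packets: every packet lands in exactly one cell,
# so the histogram's total count equals the number of packets.
pic = flowpic([0.1, 0.2, 5.0, 10.0], [60, 1500, 400, 400])
```

The resulting `dim x dim` array can then be fed to an image-style classifier, which is what makes augmentations borrowed from computer vision applicable to this representation.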