Replication: Contrastive Learning and Data Augmentation in Traffic
Classification Using a Flowpic Input Representation
- URL: http://arxiv.org/abs/2309.09733v2
- Date: Sat, 14 Oct 2023 09:35:09 GMT
- Title: Replication: Contrastive Learning and Data Augmentation in Traffic
Classification Using a Flowpic Input Representation
- Authors: Alessandro Finamore, Chao Wang, Jonatan Krolikowski, Jose M. Navarro,
Fuxing Chen, Dario Rossi
- Abstract summary: We reproduce [16] on the same datasets and replicate its most salient aspect (the importance of data augmentation) on three additional public datasets.
While we confirm most of the original results, we also found a 20% accuracy drop on some of the investigated scenarios due to a data shift in the original dataset.
- Score: 47.95762911696397
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the last few years we have witnessed renewed interest in Traffic
Classification (TC), driven by the rise of Deep Learning (DL). Yet, the vast
majority of the TC literature lacks code artifacts, performance assessments across
datasets, and reference comparisons against Machine Learning (ML) methods. Among
those works, a recent study from IMC22 [16] is worthy of attention since it
adopts recent DL methodologies (namely, few-shot learning, self-supervision via
contrastive learning, and data augmentation) appealing for networking as they
enable learning from few samples and transferring across datasets. The main
result of [16] on the UCDAVIS19, ISCX-VPN and ISCX-Tor datasets is that, with
such DL methodologies, 100 input samples are enough to achieve very high
accuracy using an input representation called "flowpic" (i.e., a per-flow 2D
histogram of packet sizes over time). In this paper (i) we
reproduce [16] on the same datasets and (ii) we replicate its most salient
aspect (the importance of data augmentation) on three additional public
datasets (MIRAGE19, MIRAGE22 and UTMOBILENET21). While we confirm most of the
original results, we also found a 20% accuracy drop on some of the investigated
scenarios due to a data shift in the original dataset that we uncovered.
Additionally, our study validates that the data augmentation strategies studied
in [16] perform well on other datasets too. In the spirit of reproducibility
and replicability we make all artifacts (code and data) available to the
research community at https://tcbenchstack.github.io/tcbench/
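To make the "flowpic" input representation concrete, the following is a minimal illustrative sketch of how such a per-flow 2D histogram of packet sizes over time could be built, together with a toy time-shift augmentation. This is an assumption-laden sketch, not the reference implementation from [16]: the bin count, maximum packet size, normalization, and the specific augmentation are all illustrative choices.

```python
import numpy as np

def flowpic(pkt_times, pkt_sizes, bins=32, max_size=1500, duration=None):
    """Build a "flowpic": a 2D histogram of packet sizes over time for one flow.

    Illustrative sketch only; bin count, max packet size, and time
    normalization are assumptions, not the exact setup of [16].
    """
    pkt_times = np.asarray(pkt_times, dtype=float)
    pkt_sizes = np.asarray(pkt_sizes, dtype=float)
    # Normalize arrival times to the flow's observation window.
    t0 = pkt_times.min()
    if duration is None:
        duration = max(pkt_times.max() - t0, 1e-9)
    # One axis is time, the other is packet size; each cell counts packets.
    hist, _, _ = np.histogram2d(
        pkt_times - t0,
        np.clip(pkt_sizes, 0, max_size),
        bins=bins,
        range=[[0, duration], [0, max_size]],
    )
    return hist  # shape: (bins, bins)

def time_shift(img, shift):
    """Toy augmentation: shift the time axis by `shift` bins, zero-padding.

    One of many possible image-style augmentations; the transforms actually
    studied in [16] may differ.
    """
    out = np.zeros_like(img)
    if shift >= 0:
        out[shift:] = img[:img.shape[0] - shift]
    else:
        out[:shift] = img[-shift:]
    return out
```

For example, a flow with four packets yields a sparse image whose cell counts sum to four; augmentations then perturb that image rather than the raw packet trace.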
Related papers
- [Re] Network Deconvolution [3.2149341556907256]
"Network deconvolution" is used to remove pixel-wise and channel-wise correlations before data is fed into each layer.
We successfully reproduce the results reported in Tables 1 and 2 of the original paper.
arXiv Detail & Related papers (2024-10-02T02:48:13Z)
- DiffusionEngine: Diffusion Model is Scalable Data Engine for Object Detection [41.436817746749384]
Diffusion Model is a scalable data engine for object detection.
DiffusionEngine (DE) provides high-quality detection-oriented training pairs in a single stage.
arXiv Detail & Related papers (2023-09-07T17:55:01Z)
- Scaling Data Generation in Vision-and-Language Navigation [116.95534559103788]
We propose an effective paradigm for generating large-scale data for learning.
We apply 1200+ photo-realistic environments from the HM3D and Gibson datasets and synthesize 4.9 million instruction-trajectory pairs.
Thanks to our large-scale dataset, the performance of an existing agent can be pushed up by +11% absolute over the previous SoTA, to a new best single-run success rate of 80% on the R2R test split, by simple imitation learning.
arXiv Detail & Related papers (2023-07-28T16:03:28Z)
- Exploring Data Redundancy in Real-world Image Classification through Data Selection [20.389636181891515]
Deep learning models often require large amounts of data for training, leading to increased costs.
We present two data valuation metrics based on Synaptic Intelligence and gradient norms, respectively, to study redundancy in real-world image data.
Online and offline data selection algorithms are then proposed via clustering and grouping based on the examined data values.
arXiv Detail & Related papers (2023-06-25T03:31:05Z)
- DataComp: In search of the next generation of multimodal datasets [179.79323076587255]
DataComp is a testbed for dataset experiments centered around a new candidate pool of 12.8 billion image-text pairs from Common Crawl.
Our benchmark consists of multiple compute scales spanning four orders of magnitude.
In particular, our best baseline, DataComp-1B, enables training a CLIP ViT-L/14 from scratch to 79.2% zero-shot accuracy on ImageNet.
arXiv Detail & Related papers (2023-04-27T11:37:18Z)
- A New Benchmark: On the Utility of Synthetic Data with Blender for Bare Supervised Learning and Downstream Domain Adaptation [42.2398858786125]
Deep learning in computer vision has achieved great success with the price of large-scale labeled training data.
The uncontrollable data collection process produces non-IID training and test data, where undesired duplication may exist.
To circumvent them, an alternative is to generate synthetic data via 3D rendering with domain randomization.
arXiv Detail & Related papers (2023-03-16T09:03:52Z)
- TRoVE: Transforming Road Scene Datasets into Photorealistic Virtual Environments [84.6017003787244]
This work proposes a synthetic data generation pipeline to address the difficulties and domain-gaps present in simulated datasets.
We show that using annotations and visual cues from existing datasets, we can facilitate automated multi-modal data generation.
arXiv Detail & Related papers (2022-08-16T20:46:08Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To keep training on the created dataset tractable, we propose a dataset distillation strategy that compresses it into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
- A Close Look at Deep Learning with Small Data [0.0]
We show that model complexity is a critical factor when only a few samples per class are available.
We also show that even standard data augmentation can boost recognition performance by large margins.
arXiv Detail & Related papers (2020-03-28T17:11:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.