Many or Few Samples? Comparing Transfer, Contrastive and Meta-Learning in Encrypted Traffic Classification
- URL: http://arxiv.org/abs/2305.12432v2
- Date: Sat, 3 Jun 2023 07:56:53 GMT
- Title: Many or Few Samples? Comparing Transfer, Contrastive and Meta-Learning in Encrypted Traffic Classification
- Authors: Idio Guarino, Chao Wang, Alessandro Finamore, Antonio Pescapé, Dario Rossi
- Abstract summary: We compare transfer learning, meta-learning and contrastive learning against reference tree-based Machine Learning (ML) and monolithic Deep Learning (DL) models.
We show that (i) larger datasets yield more general representations and (ii) contrastive learning is the best methodology.
While tree-based ML cannot handle large tasks but fits small tasks well, DL methods, by reusing learned representations, are approaching tree-based performance also on small tasks.
- Score: 68.19713459228369
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The popularity of Deep Learning (DL), coupled with the reduction in network
traffic visibility due to the increased adoption of HTTPS, QUIC and DNS-SEC,
has re-ignited interest in Traffic Classification (TC). However, to tame the
dependency on large task-specific labeled datasets, we need better ways to
learn representations that are valid across tasks. In this work we investigate
this problem by comparing transfer learning, meta-learning and contrastive
learning against reference tree-based Machine Learning (ML) and monolithic DL
models (16 methods in total). Using two publicly available datasets, namely
MIRAGE19 (40 classes) and AppClassNet (500 classes), we show that (i) large
datasets yield more general representations, (ii) contrastive learning is the
best methodology, (iii) meta-learning is the worst one, and (iv) while
tree-based ML cannot handle large tasks but fits small tasks well, DL methods,
by reusing learned representations, reach tree-based performance also on small
tasks.
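As a rough illustration of the contrastive approach the paper finds most effective, the sketch below shows one SimCLR-style pretraining step on flow inputs (here, packet-size sequences). The encoder, augmentation, and input format are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal SimCLR-style contrastive pretraining step on traffic flows.
# Hypothetical sketch: the encoder, augmentation, and input format are
# illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowEncoder(nn.Module):
    """Toy 1D-CNN mapping a packet-size sequence to an embedding."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, emb_dim),
        )

    def forward(self, x):              # x: (batch, seq_len)
        return self.net(x.unsqueeze(1))

def augment(x):
    """Naive augmentation: jitter normalized packet sizes with Gaussian noise."""
    return x + 0.01 * torch.randn_like(x)

def nt_xent(z1, z2, tau=0.1):
    """NT-Xent loss over a batch of positive pairs (z1[i], z2[i])."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)        # (2N, d)
    sim = z @ z.t() / tau                              # pairwise similarities
    mask = torch.eye(sim.size(0), dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))         # drop self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

encoder = FlowEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
flows = torch.rand(128, 20)            # dummy batch: 128 flows x 20 packet sizes
loss = nt_xent(encoder(augment(flows)), encoder(augment(flows)))
opt.zero_grad()
loss.backward()
opt.step()
```

The pretrained encoder would then be frozen or fine-tuned on the (small) labeled downstream task, which is the representation-reuse setting the abstract refers to.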
Related papers
- Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach [56.55633052479446]
Web-scale visual entity recognition presents significant challenges due to the lack of clean, large-scale training data.
We propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation.
Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks.
arXiv Detail & Related papers (2024-10-31T06:55:24Z)
- Generic Multi-modal Representation Learning for Network Traffic Analysis [6.372999570085887]
Network traffic analysis is fundamental for network management, troubleshooting, and security.
We propose a flexible Multi-modal Autoencoder (MAE) pipeline that can solve different use cases.
We argue that the MAE architecture is generic and can be used to learn representations useful in multiple scenarios.
arXiv Detail & Related papers (2024-05-04T12:24:29Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks with little, or non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Bad Students Make Great Teachers: Active Learning Accelerates Large-Scale Visual Understanding [9.112203072394648]
Power-law scaling indicates that large-scale training with uniform sampling is prohibitively slow.
Active learning methods aim to increase data efficiency by prioritizing learning on the most relevant examples.
arXiv Detail & Related papers (2023-12-08T19:26:13Z)
- Language models are weak learners [71.33837923104808]
We show that prompt-based large language models can operate effectively as weak learners.
We incorporate these models into a boosting approach, which can leverage the knowledge within the model to outperform traditional tree-based boosting.
Results illustrate the potential for prompt-based LLMs to function not just as few-shot learners themselves, but as components of larger machine learning pipelines.
arXiv Detail & Related papers (2023-06-25T02:39:19Z)
- Learning to Learn with Indispensable Connections [6.040904021861969]
We propose a novel meta-learning method called Meta-LTH that includes indispensable (necessary) connections.
Our method improves classification accuracy by approximately 2% (20-way 1-shot task setting) on the Omniglot dataset.
arXiv Detail & Related papers (2023-04-06T04:53:13Z)
- Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference [74.80730361332711]
Few-shot learning is an important and topical problem in computer vision.
We show that a simple transformer-based pipeline yields surprisingly good performance on standard benchmarks.
arXiv Detail & Related papers (2022-04-15T02:55:58Z)
- Fast Yet Effective Machine Unlearning [6.884272840652062]
We introduce a novel machine unlearning framework with error-maximizing noise generation and impair-repair based weight manipulation.
We show excellent unlearning while substantially retaining the overall model accuracy.
This work is an important step towards fast and easy implementation of unlearning in deep networks.
arXiv Detail & Related papers (2021-11-17T07:29:24Z)
- Few-Shot Learning for Image Classification of Common Flora [0.0]
We showcase results from testing various state-of-the-art transfer-learning weights and architectures against comparable state-of-the-art meta-learning approaches for image classification using Model-Agnostic Meta-Learning (MAML); a minimal MAML episode sketch is included after this list.
Our results show that both practices provide adequate performance when the dataset is sufficiently large, but both struggle to maintain sufficient performance once data sparsity is introduced.
arXiv Detail & Related papers (2021-05-07T03:54:51Z)
- Provable Meta-Learning of Linear Representations [114.656572506859]
We provide fast, sample-efficient algorithms to address the dual challenges of learning a common set of features from multiple, related tasks, and transferring this knowledge to new, unseen tasks.
We also provide information-theoretic lower bounds on the sample complexity of learning these linear features.
arXiv Detail & Related papers (2020-02-26T18:21:34Z)
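For the meta-learning baselines referenced above (e.g., MAML), here is a minimal sketch of one first-order MAML-style episode on an N-way K-shot task. The tiny classifier, random task sampler, and hyper-parameters are placeholders for illustration, not any paper's actual configuration.

```python
# Minimal first-order MAML-style update on one N-way K-shot episode.
# Hypothetical sketch: model, task data, and hyper-parameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_way, k_shot, feat_dim = 5, 1, 32
model = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, n_way))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def sample_task():
    """Placeholder task sampler: random support/query sets for one episode."""
    support_x = torch.randn(n_way * k_shot, feat_dim)
    support_y = torch.arange(n_way).repeat_interleave(k_shot)
    query_x = torch.randn(n_way * 15, feat_dim)
    query_y = torch.arange(n_way).repeat_interleave(15)
    return support_x, support_y, query_x, query_y

def forward(x, ws):
    """Functional forward pass of the two-layer classifier with weights ws."""
    w1, b1, w2, b2 = ws
    return F.linear(F.relu(F.linear(x, w1, b1)), w2, b2)

support_x, support_y, query_x, query_y = sample_task()

# Inner loop: adapt a copy of the weights on the support set (first-order).
inner_lr = 0.4
fast_weights = [p.clone() for p in model.parameters()]
inner_loss = F.cross_entropy(forward(support_x, fast_weights), support_y)
grads = torch.autograd.grad(inner_loss, fast_weights)
fast_weights = [w - inner_lr * g for w, g in zip(fast_weights, grads)]

# Outer loop: evaluate adapted weights on the query set, update meta-parameters.
meta_loss = F.cross_entropy(forward(query_x, fast_weights), query_y)
meta_opt.zero_grad()
meta_loss.backward()
meta_opt.step()
```

The first-order variant is used here to keep the sketch short; full MAML would also differentiate through the inner update (create_graph=True in torch.autograd.grad).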