Transferability of Representations Learned using Supervised Contrastive
Learning Trained on a Multi-Domain Dataset
- URL: http://arxiv.org/abs/2309.15486v1
- Date: Wed, 27 Sep 2023 08:34:36 GMT
- Title: Transferability of Representations Learned using Supervised Contrastive
Learning Trained on a Multi-Domain Dataset
- Authors: Alvin De Jun Tan, Clement Tan, Chai Kiat Yeo
- Abstract summary: Contrastive learning has been shown to learn better-quality representations than models trained using cross-entropy loss.
This paper explores the transferability of representations learned using contrastive learning when trained on a multi-domain dataset.
- Score: 5.389242063238166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contrastive learning has been shown to learn better-quality representations
than models trained using cross-entropy loss. These representations also transfer better to
downstream datasets from different domains. However, little work has been done
to explore the transferability of representations learned using contrastive
learning when trained on a multi-domain dataset. In this paper, we conduct a study
using the Supervised Contrastive Learning framework to learn representations from the
multi-domain DomainNet dataset and then evaluate how well the learned representations
transfer to other downstream datasets. The fixed-feature linear evaluation protocol is
used to evaluate transferability on 7 downstream datasets chosen from different domains.
The results are compared against a baseline model trained using the widely used
cross-entropy loss. Empirically, the Supervised Contrastive Learning model performed
6.05% better on average than the baseline model across the 7 downstream datasets.
The findings suggest that Supervised Contrastive Learning models can
potentially learn more robust representations that transfer better across
domains than cross-entropy models when trained on a multi-domain dataset.
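To make the training objective concrete, the following is a minimal PyTorch sketch of a supervised contrastive (SupCon) loss for a batch with one view per sample. The function name, temperature value, and single-view simplification are illustrative assumptions, not the authors' exact configuration (SupCon is typically applied over two augmented views per image).

    import torch
    import torch.nn.functional as F

    def supcon_loss(features, labels, temperature=0.1):
        # features: (N, D) projections from the encoder + projection head.
        # labels:   (N,) integer class labels.
        features = F.normalize(features, dim=1)
        sim = features @ features.T / temperature
        # Exclude each sample's similarity with itself.
        self_mask = torch.eye(len(labels), dtype=torch.bool, device=features.device)
        sim = sim.masked_fill(self_mask, float("-inf"))
        # Positives: other samples in the batch sharing the same label.
        pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
        pos_counts = pos_mask.sum(dim=1)
        valid = pos_counts > 0  # anchors with at least one positive
        return -(pos_log_prob[valid] / pos_counts[valid]).mean()

Similarly, a hedged sketch of the fixed-feature linear evaluation protocol used for the downstream comparison: the pre-trained encoder is frozen and only a linear classifier is trained on each downstream dataset. The optimiser, learning rate, and epoch count below are placeholder assumptions.

    def linear_eval(encoder, feat_dim, num_classes, train_loader, epochs=30):
        # Freeze the pre-trained encoder; only the linear classifier is trained.
        encoder.eval()
        for p in encoder.parameters():
            p.requires_grad_(False)
        classifier = torch.nn.Linear(feat_dim, num_classes)
        optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)
        for _ in range(epochs):
            for images, targets in train_loader:
                with torch.no_grad():
                    feats = encoder(images)          # fixed features
                loss = F.cross_entropy(classifier(feats), targets)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        return classifier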
Related papers
- On the Transferability of Learning Models for Semantic Segmentation for
Remote Sensing Data [12.500746892824338]
Recent deep learning-based methods outperform traditional learning methods on remote sensing (RS) semantic segmentation/classification tasks.
Yet, there is no comprehensive analysis of their transferability, i.e., to what extent a model trained on a source domain can be readily applied to a target domain.
This paper investigates the raw transferability of traditional and deep learning (DL) models, as well as the effectiveness of domain adaptation (DA) approaches.
arXiv Detail & Related papers (2023-10-16T15:13:36Z)
- Self-Supervised In-Domain Representation Learning for Remote Sensing Image Scene Classification [1.0152838128195465]
Transferring ImageNet pre-trained weights to various remote sensing tasks has produced acceptable results.
Recent research has demonstrated that self-supervised learning methods capture visual features that are more discriminative and transferable.
Motivated by these facts, we pre-train in-domain representations of remote sensing imagery using contrastive self-supervised learning.
arXiv Detail & Related papers (2023-02-03T15:03:07Z)
- Beyond Transfer Learning: Co-finetuning for Action Localisation [64.07196901012153]
We propose co-finetuning -- simultaneously training a single model on multiple "upstream" and "downstream" tasks.
We demonstrate that co-finetuning outperforms traditional transfer learning when using the same total amount of data.
We also show how we can easily extend our approach to multiple "upstream" datasets to further improve performance.
arXiv Detail & Related papers (2022-07-08T10:25:47Z)
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- Intra-domain and cross-domain transfer learning for time series data -- How transferable are the features? [0.0]
This study aims to assess how transferable the features are between different domains of time series data.
The effects of transfer learning are observed in terms of predictive performance of the models and their convergence rate during training.
arXiv Detail & Related papers (2022-01-12T12:55:21Z)
- How Well Do Sparse Imagenet Models Transfer? [75.98123173154605]
Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" datasets.
In this work, we perform an in-depth investigation of this phenomenon in the context of convolutional neural networks (CNNs) trained on the ImageNet dataset.
We show that sparse models can match or even outperform the transfer performance of dense models, even at high sparsities.
arXiv Detail & Related papers (2021-11-26T11:58:51Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- A Broad Study on the Transferability of Visual Representations with Contrastive Learning [15.667240680328922]
We study the transferability of learned representations of contrastive approaches for linear evaluation, full-network transfer, and few-shot recognition.
The results show that the contrastive approaches learn representations that are easily transferable to a different downstream task.
Our analysis reveals that the representations learned from the contrastive approaches contain more low/mid-level semantics than cross-entropy models.
arXiv Detail & Related papers (2021-03-24T22:55:04Z)
- Transformer Based Multi-Source Domain Adaptation [53.24606510691877]
In practical machine learning settings, the data on which a model must make predictions often come from a different distribution than the data it was trained on.
Here, we investigate the problem of unsupervised multi-source domain adaptation, where a model is trained on labelled data from multiple source domains and must make predictions on a domain for which no labelled data has been seen.
We show that the predictions of large pretrained transformer-based domain experts are highly homogeneous, making it challenging to learn effective functions for mixing their predictions.
arXiv Detail & Related papers (2020-09-16T16:56:23Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework preserves the relations between samples well.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
- The Utility of Feature Reuse: Transfer Learning in Data-Starved Regimes [6.419457653976053]
We describe a transfer learning use case for a domain with a data-starved regime.
We evaluate the effectiveness of convolutional feature extraction and fine-tuning.
We conclude that transfer learning enhances the performance of CNN architectures in data-starved regimes.
arXiv Detail & Related papers (2020-02-29T18:48:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.