Detecting Bias in Transfer Learning Approaches for Text Classification
- URL: http://arxiv.org/abs/2102.02114v1
- Date: Wed, 3 Feb 2021 15:48:21 GMT
- Title: Detecting Bias in Transfer Learning Approaches for Text Classification
- Authors: Irene Li
- Abstract summary: In a supervised learning setting, labels are always needed for the classification task.
In this work, we evaluate some existing transfer learning approaches on detecting the bias of imbalanced classes.
- Score: 3.968023038444605
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Classification is an essential and fundamental task in machine learning,
playing a cardinal role in the field of natural language processing (NLP) and
computer vision (CV). In a supervised learning setting, labels are always
needed for the classification task. Deep neural models in particular require a
large amount of high-quality labeled data for training. However, when a new
domain emerges, acquiring labels is usually difficult or expensive. Transfer
learning offers an option: transferring knowledge from a source domain to a
target domain. A challenge is that the two domains can differ in their feature
distribution or in their class distribution, owing to the nature of the
samples. In this work, we evaluate several existing transfer learning
approaches, both traditional and deep models, on detecting the bias of
imbalanced classes. In addition, we propose an approach to bridge the gap
caused by the domain class-imbalance issue.
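The paper's own bridging approach is not detailed in this summary. As a minimal, framework-agnostic sketch of one common way to counter class imbalance when fine-tuning on a target domain, per-class inverse-frequency weights can be applied to the training loss (the normalization used here is the standard "balanced" heuristic, not necessarily the paper's method):

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights: rare classes get larger weights.

    weight_c = n / (k * count_c), where n is the number of samples and k the
    number of classes, so each class contributes equally to the weighted loss.
    """
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

def weighted_loss(losses, labels, weights):
    """Average per-sample losses (e.g. cross-entropy) scaled by class weight."""
    return sum(weights[y] * l for l, y in zip(losses, labels)) / len(losses)
```

For example, with 90 positive and 10 negative samples, the negative class receives weight 100 / (2 * 10) = 5.0 versus 100 / (2 * 90) ≈ 0.56 for the positive class, so the minority class is not drowned out during fine-tuning.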
Related papers
- CDFSL-V: Cross-Domain Few-Shot Learning for Videos [58.37446811360741]
Few-shot video action recognition is an effective approach to recognizing new categories with only a few labeled examples.
Existing methods in video action recognition rely on large labeled datasets from the same domain.
We propose a novel cross-domain few-shot video action recognition method that leverages self-supervised learning and curriculum learning.
arXiv Detail & Related papers (2023-09-07T19:44:27Z)
- Class-Balanced Pixel-Level Self-Labeling for Domain Adaptive Semantic Segmentation [31.50802009879241]
Domain adaptive semantic segmentation aims to learn a model with the supervision of source domain data, and produce dense predictions on the unlabeled target domain.
One popular solution to this challenging task is self-training, which selects high-scoring predictions on target samples as pseudo labels for training.
We propose to directly explore the intrinsic pixel distributions of target domain data, instead of heavily relying on the source domain.
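Self-training of this kind keeps only confident target-domain predictions as pseudo labels. A minimal sketch of the thresholded selection step (the threshold value and selection rule here are illustrative, not taken from the paper):

```python
def select_pseudo_labels(probs, threshold=0.9):
    """Select confident predictions as pseudo labels for self-training.

    probs: list of per-sample class-probability lists (each row sums to 1).
    Returns (sample_index, predicted_class) pairs whose top probability
    meets the confidence threshold; low-confidence samples are discarded.
    """
    selected = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            selected.append((i, p.index(conf)))
    return selected
```

A fixed global threshold tends to favor easy (often majority) classes, which is precisely the bias that class-balanced variants of self-training try to correct.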
arXiv Detail & Related papers (2022-03-18T04:56:20Z)
- From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation targets knowledge acquisition and dissemination from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that large-scale deep pre-trained models provide rich knowledge for tackling diverse small-scale downstream tasks.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical-class-space assumption, requiring only that the source class space subsume the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z)
- Few-Shot Classification in Unseen Domains by Episodic Meta-Learning Across Visual Domains [36.98387822136687]
Few-shot classification aims to carry out classification given only a few labeled examples for the categories of interest.
In this paper, we present a unique learning framework for domain-generalized few-shot classification.
By advancing meta-learning strategies, our learning framework exploits data across multiple source domains to capture domain-invariant features.
arXiv Detail & Related papers (2021-12-27T06:54:11Z)
- Domain Adaptive Semantic Segmentation without Source Data [50.18389578589789]
We investigate domain adaptive semantic segmentation without source data, which assumes that the model is pre-trained on the source domain.
We propose an effective framework for this challenging problem with two components: positive learning and negative learning.
Our framework can be easily implemented and incorporated with other methods to further enhance the performance.
arXiv Detail & Related papers (2021-10-13T04:12:27Z)
- Unsupervised Cross-Domain Prerequisite Chain Learning using Variational Graph Autoencoders [2.735701323590668]
We propose unsupervised cross-domain concept prerequisite chain learning using an optimized variational graph autoencoder.
Our model learns to transfer concept prerequisite relations from an information-rich domain to an information-poor domain.
Also, we expand an existing dataset by introducing two new domains: CV and Bioinformatics.
arXiv Detail & Related papers (2021-05-07T21:02:41Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Unsupervised Transfer Learning with Self-Supervised Remedy [60.315835711438936]
Generalising deep networks to novel domains without manual labels is a challenge for deep learning.
Pre-learned knowledge does not transfer well without making strong assumptions about the learned and the novel domains.
In this work, we aim to learn a discriminative latent space of the unlabelled target data in a novel domain by knowledge transfer from labelled related domains.
arXiv Detail & Related papers (2020-06-08T16:42:17Z)
- Few-Shot Learning as Domain Adaptation: Algorithm and Analysis [120.75020271706978]
Few-shot learning uses prior knowledge learned from the seen classes to recognize the unseen classes.
The distribution shift caused by the difference between seen and unseen classes can be considered a special case of domain shift.
We propose a prototypical domain adaptation network with attention (DAPNA) to explicitly tackle such a domain shift problem in a meta-learning framework.
arXiv Detail & Related papers (2020-02-06T01:04:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.