Meta-Learning Adversarial Domain Adaptation Network for Few-Shot Text
Classification
- URL: http://arxiv.org/abs/2107.12262v1
- Date: Mon, 26 Jul 2021 15:09:40 GMT
- Title: Meta-Learning Adversarial Domain Adaptation Network for Few-Shot Text
Classification
- Authors: ChengCheng Han, Zeqiu Fan, Dongxiang Zhang, Minghui Qiu, Ming Gao,
Aoying Zhou
- Abstract summary: We propose a novel meta-learning framework integrated with an adversarial domain adaptation network.
Our method demonstrates clear superiority over state-of-the-art models on all datasets.
In particular, the accuracy of 1-shot and 5-shot classification on the 20 Newsgroups dataset is boosted from 52.1% to 59.6% and from 68.3% to 77.8%, respectively.
- Score: 31.167424308211995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-learning has emerged as a trending technique for tackling few-shot
text classification and has achieved state-of-the-art performance. However, existing
solutions rely heavily on exploiting lexical features and their distributional
signatures in the training data, while neglecting to strengthen the model's
ability to adapt to new tasks. In this paper, we propose a novel meta-learning
framework integrated with an adversarial domain adaptation network, aiming to
improve the model's adaptive ability and generate high-quality text embeddings
for new classes. Extensive experiments on four benchmark datasets show that our
method clearly outperforms state-of-the-art models on all of them. In
particular, 1-shot and 5-shot classification accuracy on the 20 Newsgroups
dataset is boosted from 52.1% to 59.6% and from 68.3% to 77.8%, respectively.
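The abstract describes the framework only at a high level. As a rough illustration of the general pattern — a shared text encoder whose embeddings feed both a prototypical episode loss and a domain discriminator through a gradient reversal layer — here is a minimal PyTorch sketch; all names (GradReverse, AdversarialMetaNet, the EmbeddingBag stand-in encoder) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda on the
    way back, so the encoder is trained to *fool* the domain discriminator."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class AdversarialMetaNet(nn.Module):
    """Toy text encoder plus domain discriminator (names are illustrative)."""
    def __init__(self, vocab_size=30522, emb_dim=128, n_domains=4):
        super().__init__()
        self.encoder = nn.EmbeddingBag(vocab_size, emb_dim)   # stand-in text encoder
        self.domain_head = nn.Linear(emb_dim, n_domains)      # domain discriminator

    def forward(self, token_ids, lambd=1.0):
        z = self.encoder(token_ids)                           # (batch, emb_dim)
        domain_logits = self.domain_head(GradReverse.apply(z, lambd))
        return z, domain_logits

def episode_loss(net, support_x, support_y, query_x, query_y, domain_x, domain_y):
    """Prototypical classification loss plus adversarial domain loss for one
    episode; episode labels are assumed to be 0..N-1."""
    s_z, _ = net(support_x)
    q_z, _ = net(query_x)
    protos = torch.stack([s_z[support_y == c].mean(0)
                          for c in torch.unique(support_y)])
    task_loss = F.cross_entropy(-torch.cdist(q_z, protos), query_y)
    _, d_logits = net(domain_x)              # texts drawn from several source domains
    domain_loss = F.cross_entropy(d_logits, domain_y)
    return task_loss + domain_loss           # reversed gradients push z toward domain invariance
```

The single hyperparameter lambd controls how strongly the reversed domain gradients push the encoder toward domain-invariant embeddings.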
Related papers
- Ensembling Finetuned Language Models for Text Classification [55.15643209328513]
Finetuning is a common practice across communities for adapting pretrained models to particular tasks, and ensembles of neural networks are typically used to boost performance and provide reliable uncertainty estimates.
We present a metadataset with predictions from five large finetuned models on six datasets and report results for different ensembling strategies.
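The summary does not fix a single strategy; as a reference point, the simplest one — averaging the member models' softmax outputs — looks roughly like this sketch (not the paper's code):

```python
import torch

@torch.no_grad()
def ensemble_predict(models, x):
    """Average the predictive distributions of several finetuned models.
    The spread across members can also serve as a rough uncertainty signal."""
    probs = torch.stack([m(x).softmax(dim=-1) for m in models])  # (M, batch, classes)
    mean_probs = probs.mean(dim=0)          # ensemble predictive distribution
    return mean_probs.argmax(dim=-1), mean_probs
```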
arXiv Detail & Related papers (2024-10-25T09:15:54Z)
- Reducing and Exploiting Data Augmentation Noise through Meta Reweighting Contrastive Learning for Text Classification [3.9889306957591755]
We propose a novel framework to boost the performance of deep learning models trained on augmented data in text classification tasks.
We introduce novel weight-dependent enqueue and dequeue algorithms that effectively exploit the weight/quality information of augmented samples.
Our framework achieves an average absolute improvement of 1.6% (up to 4.3%) with Text-CNN encoders and of 1.4% (up to 4.4%) with RoBERTa-base encoders.
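The enqueue/dequeue algorithms themselves are not spelled out in this summary; one plausible reading — a fixed-capacity contrastive memory that evicts the lowest-weight augmented sample first — can be sketched as follows (an assumption, not the paper's algorithm):

```python
import heapq

class WeightedQueue:
    """Fixed-size sample memory: when full, enqueueing evicts the current
    lowest-weight entry, so higher-quality augmented samples survive longer."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []      # min-heap ordered by sample weight
        self.count = 0      # tie-breaker so samples never get compared directly

    def enqueue(self, weight, sample):
        item = (weight, self.count, sample)
        self.count += 1
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, item)
        elif weight > self.heap[0][0]:
            heapq.heapreplace(self.heap, item)   # dequeue lowest weight, push new

    def samples(self):
        return [s for _, _, s in self.heap]
```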
arXiv Detail & Related papers (2024-09-26T02:19:13Z)
- Enhancing Image Classification in Small and Unbalanced Datasets through Synthetic Data Augmentation [0.0]
This paper introduces a novel synthetic augmentation strategy using class-specific Variational Autoencoders (VAEs) and latent space to improve discrimination capabilities.
By generating realistic, varied synthetic data that fills feature space gaps, we address issues of data scarcity and class imbalance.
The proposed strategy was tested on a small dataset of 321 images created to train and validate an automatic method for assessing the quality of cleanliness of esophagogastroduodenoscopy images.
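Assuming one VAE has been trained per class, the augmentation step reduces to sampling latent codes and decoding them; a minimal sketch (the decoders dict and per-class counts are hypothetical):

```python
import torch

@torch.no_grad()
def synthesize(decoders, n_per_class, latent_dim=32):
    """Decode standard-normal latent codes with each class's VAE decoder,
    oversampling minority classes to counter class imbalance."""
    synthetic = {}
    for cls, decode in decoders.items():              # one trained decoder per class
        z = torch.randn(n_per_class[cls], latent_dim)
        synthetic[cls] = decode(z)                    # synthetic images for this class
    return synthetic
```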
arXiv Detail & Related papers (2024-09-16T13:47:52Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
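The paper's exact regularizer is not given in this summary; a generic objective from the same family — pulling the prediction on a strongly augmented view toward the prediction on a weak view of the same unlabelled target batch — looks like this:

```python
import torch.nn.functional as F

def consistency_loss(model, x_weak, x_strong):
    """KL divergence pulling the strong-view prediction toward the (detached)
    weak-view prediction of the same unlabelled target batch."""
    p_weak = model(x_weak).softmax(dim=-1).detach()       # pseudo-target
    log_p_strong = model(x_strong).log_softmax(dim=-1)
    return F.kl_div(log_p_strong, p_weak, reduction="batchmean")
```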
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- TART: Improved Few-shot Text Classification Using Task-Adaptive Reference Transformation [23.02986307143718]
We propose a novel Task-Adaptive Reference Transformation (TART) network to enhance generalization.
Our model surpasses the state-of-the-art method by 7.4% and 5.4% in 1-shot and 5-shot classification on the 20 Newsgroups dataset.
arXiv Detail & Related papers (2023-06-03T18:38:02Z)
- Boosting Visual-Language Models by Exploiting Hard Samples [126.35125029639168]
HELIP is a cost-effective strategy tailored to enhance the performance of existing CLIP models.
Our method allows for effortless integration with existing models' training pipelines.
On comprehensive benchmarks, HELIP consistently boosts existing models to achieve leading performance.
arXiv Detail & Related papers (2023-05-09T07:00:17Z)
- Revisiting Classifier: Transferring Vision-Language Models for Video Recognition [102.93524173258487]
Transferring knowledge from task-agnostic pre-trained deep models for downstream tasks is an important topic in computer vision research.
In this study, we focus on transferring knowledge for video classification tasks.
We utilize a well-pretrained language model to generate good semantic targets for efficient transfer learning.
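A common recipe for such semantic targets is to embed the class names with the pretrained text model and classify by cosine similarity; a sketch of that recipe (the text_encoder handle is a placeholder, not this paper's API):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def semantic_targets(text_encoder, class_names):
    """Embed each class name with a pretrained language model; the normalized
    embeddings act as the classifier weights (the 'semantic targets')."""
    w = torch.stack([text_encoder(name) for name in class_names])
    return F.normalize(w, dim=-1)

def classify(video_features, targets):
    # cosine similarity between video features and the semantic class targets
    return F.normalize(video_features, dim=-1) @ targets.T
```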
arXiv Detail & Related papers (2022-07-04T10:00:47Z)
- Imposing Consistency for Optical Flow Estimation [73.53204596544472]
Imposing consistency through proxy tasks has been shown to enhance data-driven learning.
This paper introduces novel and effective consistency strategies for optical flow estimation.
arXiv Detail & Related papers (2022-04-14T22:58:30Z)
- Cross-Domain Few-Shot Learning with Meta Fine-Tuning [8.062394790518297]
We tackle the new Cross-Domain Few-Shot Learning benchmark proposed by the CVPR 2020 Challenge.
We build upon state-of-the-art methods in domain adaptation and few-shot learning to create a system that can be trained to perform both tasks.
arXiv Detail & Related papers (2020-05-21T09:55:26Z)
- Dynamic Memory Induction Networks for Few-Shot Text Classification [84.88381813651971]
This paper proposes Dynamic Memory Induction Networks (DMIN) for few-shot text classification.
The proposed model achieves new state-of-the-art results on the miniRCV1 and ODIC datasets, improving the best performance (accuracy) by 2~4%.
arXiv Detail & Related papers (2020-05-12T12:41:14Z)
- Structure-Tags Improve Text Classification for Scholarly Document Quality Prediction [4.4641025448898475]
We propose the use of hierarchical attention networks (HANs) combined with structure-tags that mark the role of sentences in the document.
Adding tags to sentences, marking them as corresponding to title, abstract or main body text, yields improvements over the state-of-the-art for scholarly document quality prediction.
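Concretely, the tagging step can be as simple as prefixing each sentence with a token naming its section; a sketch with illustrative tag spellings:

```python
def tag_sentences(doc):
    """Prefix every sentence with a token marking its role (title, abstract,
    or main body) so the downstream HAN can condition on document structure."""
    tagged = []
    for section in ("title", "abstract", "body"):
        for sentence in doc.get(section, []):
            tagged.append(f"[{section.upper()}] {sentence}")
    return tagged

# e.g. tag_sentences({"title": ["A study of X"], "abstract": ["We show Y."]})
# -> ['[TITLE] A study of X', '[ABSTRACT] We show Y.']
```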
arXiv Detail & Related papers (2020-04-30T22:34:34Z)