Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP
- URL: http://arxiv.org/abs/2310.14849v1
- Date: Mon, 23 Oct 2023 12:15:25 GMT
- Title: Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP
- Authors: Hyuhng Joon Kim, Hyunsoo Cho, Sang-Woo Lee, Junyeob Kim, Choonghyun Park, Sang-goo Lee, Kang Min Yoo, Taeuk Kim
- Abstract summary: Universal Domain Adaptation (UniDA) has emerged as a novel research area in computer vision.
We propose a benchmark for natural language that offers thorough viewpoints of the model's generalizability and robustness.
- Score: 25.4952909342458
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When deploying machine learning systems in the wild, it is highly desirable for them to effectively leverage prior knowledge in unfamiliar domains while also raising alarms on anomalous inputs. To address these requirements, Universal Domain Adaptation (UniDA) has emerged as a novel research area in computer vision, focusing on achieving both adaptation ability and robustness (i.e., the ability to detect out-of-distribution samples). While UniDA has led to significant progress in computer vision, its application to language input remains largely unexplored despite its feasibility. In this paper, we propose a comprehensive benchmark for natural language that offers thorough viewpoints on the model's generalizability and robustness. Our benchmark encompasses multiple datasets with varying difficulty levels and characteristics, including temporal shifts and diverse domains. On top of our testbed, we validate existing UniDA methods from computer vision and state-of-the-art domain adaptation techniques from the NLP literature, yielding valuable findings: UniDA methods originally designed for image input can be effectively transferred to the natural language domain, and adaptation difficulty plays a key role in determining the model's performance.
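To make the UniDA setting concrete, the sketch below illustrates the kind of confidence-threshold decision rule often used for the robustness half of the problem: inputs the model is confident about are assigned to a known (source) class, while low-confidence inputs are flagged as out-of-distribution. This is a minimal illustration; the threshold value, toy logits, and function names are assumptions for exposition and do not reproduce the paper's benchmark or any specific UniDA method.

```python
# Minimal sketch of a UniDA-style decision rule: predict a known class when
# confident, otherwise reject the input as "unknown" (out-of-distribution).
# Threshold and toy logits are illustrative assumptions, not from the paper.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def unida_predict(logits: np.ndarray, threshold: float = 0.7) -> int:
    """Return a known-class index, or -1 to flag the input as unknown."""
    probs = softmax(logits)
    if probs.max() < threshold:   # low confidence -> likely OOD / unseen class
        return -1
    return int(probs.argmax())    # confident -> assign to a known class

# Toy usage: a confident in-distribution input vs. an ambiguous one.
print(unida_predict(np.array([4.0, 0.5, 0.2])))   # -> 0 (known class)
print(unida_predict(np.array([1.0, 0.9, 1.1])))   # -> -1 (rejected as unknown)
```

In practice, UniDA methods differ mainly in how this confidence or "unknownness" score is computed and how the model is adapted to the target domain, but the classify-or-reject structure above is the common evaluation interface.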
Related papers
- Disentangling Masked Autoencoders for Unsupervised Domain Generalization [57.56744870106124]
Unsupervised domain generalization is fast gaining attention but is still far from well-studied.
Disentangled Masked Autoencoders (DisMAE) aims to discover disentangled representations that faithfully reveal intrinsic features.
DisMAE co-trains the asymmetric dual-branch architecture with semantic and lightweight variation encoders.
arXiv Detail & Related papers (2024-07-10T11:11:36Z)
- AD-Aligning: Emulating Human-like Generalization for Cognitive Domain Adaptation in Deep Learning [3.3543468626874486]
Domain adaptation is pivotal for enabling deep learning models to generalize across diverse domains.
We introduce AD-Aligning, a novel approach that combines adversarial training with source-target domain alignment.
Our findings highlight AD-Aligning's ability to emulate the nuanced cognitive processes inherent in human perception.
arXiv Detail & Related papers (2024-05-15T02:34:06Z) - DACAD: Domain Adaptation Contrastive Learning for Anomaly Detection in Multivariate Time Series [25.434379659643707]
In time series anomaly detection, the scarcity of labeled data poses a challenge to the development of accurate models.
We propose DACAD, a novel Domain Adaptation Contrastive learning model for Anomaly Detection in time series.
Our model employs supervised contrastive loss for the source domain and self-supervised contrastive triplet loss for the target domain.
arXiv Detail & Related papers (2024-04-17T11:20:14Z)
- Towards Subject Agnostic Affective Emotion Recognition [8.142798657174332]
EEG signals exhibit subject-to-subject instability in subject-agnostic affective brain-computer interfaces (aBCIs).
We propose a novel framework, meta-learning based augmented domain adaptation for subject-agnostic aBCIs.
Our proposed approach is shown to be effective in experiments on a public aBCIs dataset.
arXiv Detail & Related papers (2023-10-20T23:44:34Z)
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution to identify target classes unseen in the source domain during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- A Comprehensive Survey on Source-free Domain Adaptation [69.17622123344327]
The research of Source-Free Domain Adaptation (SFDA) has drawn growing attention in recent years.
We provide a comprehensive survey of recent advances in SFDA and organize them into a unified categorization scheme.
We compare the results of more than 30 representative SFDA methods on three popular classification benchmarks.
arXiv Detail & Related papers (2023-02-23T06:32:09Z)
- Deep Unsupervised Domain Adaptation: A Review of Recent Advances and Perspectives [16.68091981866261]
Unsupervised domain adaptation (UDA) is proposed to counter the performance drop on data in a target domain.
UDA has yielded promising results on natural image processing, video analysis, natural language processing, time-series data analysis, medical image analysis, etc.
arXiv Detail & Related papers (2022-08-15T20:05:07Z)
- Domain Generalization for Activity Recognition via Adaptive Feature Fusion [9.458837222079612]
We propose Adaptive Feature Fusion for Activity Recognition (AFFAR).
AFFAR learns to fuse the domain-invariant and domain-specific representations to improve the model's generalization performance.
We apply AFFAR to a real application, i.e., the diagnosis of children's Attention Deficit Hyperactivity Disorder (ADHD).
arXiv Detail & Related papers (2022-07-21T02:14:09Z)
- Decompose to Adapt: Cross-domain Object Detection via Feature Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
Our DDF method outperforms state-of-the-art methods on four benchmark UDA object detection tasks, demonstrating its effectiveness and wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z)
- VisDA-2021 Competition Universal Domain Adaptation to Improve Performance on Out-of-Distribution Data [64.91713686654805]
The Visual Domain Adaptation (VisDA) 2021 competition tests models' ability to adapt to novel test distributions.
We will evaluate adaptation to novel viewpoints, backgrounds, modalities and degradation in quality.
Performance will be measured using a rigorous protocol, comparing to state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-07-23T03:21:51Z)
- Universal Source-Free Domain Adaptation [57.37520645827318]
We propose a novel two-stage learning process for domain adaptation.
In the Procurement stage, we aim to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift.
In the Deployment stage, the goal is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps.
arXiv Detail & Related papers (2020-04-09T07:26:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.