Out-of-Distribution Generalization in Text Classification: Past,
Present, and Future
- URL: http://arxiv.org/abs/2305.14104v1
- Date: Tue, 23 May 2023 14:26:11 GMT
- Title: Out-of-Distribution Generalization in Text Classification: Past,
Present, and Future
- Authors: Linyi Yang, Yaoxiao Song, Xuan Ren, Chenyang Lyu, Yidong Wang,
Lingqiao Liu, Jindong Wang, Jennifer Foster, Yue Zhang
- Abstract summary: Machine learning (ML) systems in natural language processing (NLP) face significant challenges in generalizing to out-of-distribution (OOD) data.
This poses important questions about the robustness of NLP models and their high accuracy, which may be artificially inflated due to their underlying sensitivity to systematic biases.
This paper presents the first comprehensive review of recent progress, methods, and evaluations on this topic.
- Score: 30.581612475530974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) systems in natural language processing (NLP) face
significant challenges in generalizing to out-of-distribution (OOD) data, where
the test distribution differs from the training data distribution. This poses
important questions about the robustness of NLP models and their high accuracy,
which may be artificially inflated due to their underlying sensitivity to
systematic biases. Despite these challenges, there is a lack of comprehensive
surveys on the generalization challenge from an OOD perspective in text
classification. Therefore, this paper aims to fill this gap by presenting the
first comprehensive review of recent progress, methods, and evaluations on this
topic. We further discuss the challenges involved and potential future research
directions. By providing quick access to existing work, we hope this survey
will encourage future research in this area.
Related papers
- Deep Learning-Based Object Pose Estimation: A Comprehensive Survey [73.74933379151419]
We discuss the recent advances in deep learning-based object pose estimation.
Our survey also covers multiple input data modalities, degrees-of-freedom of output poses, object properties, and downstream tasks.
arXiv Detail & Related papers (2024-05-13T14:44:22Z)
- How to Handle Different Types of Out-of-Distribution Scenarios in Computational Argumentation? A Comprehensive and Fine-Grained Field Study [59.13867562744973]
This work systematically assesses LMs' capabilities for out-of-distribution (OOD) scenarios.
We find that the efficacy of such learning paradigms varies with the type of OOD.
Specifically, while ICL excels for domain shifts, prompt-based fine-tuning surpasses for topic shifts.
arXiv Detail & Related papers (2023-09-15T11:15:47Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs)
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- Robust Visual Question Answering: Datasets, Methods, and Future Challenges [23.59923999144776]
Visual question answering requires a system to provide an accurate natural language answer given an image and a natural language question.
Previous generic VQA methods often exhibit a tendency to memorize biases present in the training data rather than learning proper behaviors, such as grounding images before predicting answers.
Various datasets and debiasing methods have been proposed to evaluate and enhance VQA robustness, respectively.
arXiv Detail & Related papers (2023-07-21T10:12:09Z)
- A Survey on Knowledge-Enhanced Pre-trained Language Models [8.54551743144995]
Natural Language Processing (NLP) has been revolutionized by the use of Pre-trained Language Models (PLMs).
Despite setting new records in nearly every NLP task, PLMs still face a number of challenges, including poor interpretability, weak reasoning capability, and the need for large amounts of expensive annotated data when applied to downstream tasks.
By integrating external knowledge into PLMs, Knowledge-Enhanced Pre-trained Language Models (KEPLMs) aim to overcome these limitations.
arXiv Detail & Related papers (2022-12-27T09:54:14Z)
- A Comprehensive Review of Trends, Applications and Challenges In Out-of-Distribution Detection [0.76146285961466]
A field of study has emerged that focuses on detecting out-of-distribution data subsets and enabling more comprehensive generalization.
As many deep learning based models have achieved near-perfect results on benchmark datasets, the need to evaluate these models' reliability and trustworthiness is felt more strongly than ever.
This paper presents a survey that, in addition to reviewing more than 70 papers in this field, presents challenges and directions for future works and offers a unifying look into various types of data shifts and solutions for better generalization.
arXiv Detail & Related papers (2022-09-26T18:13:14Z)
- Recent Few-Shot Object Detection Algorithms: A Survey with Performance Comparison [54.357707168883024]
Few-Shot Object Detection (FSOD) mimics humans' ability to learn to learn.
FSOD intelligently transfers generic object knowledge learned from the common heavy-tailed base classes to the novel long-tailed object classes.
We give an overview of FSOD, including the problem definition, common datasets, and evaluation protocols.
arXiv Detail & Related papers (2022-03-27T04:11:28Z) - Robust Natural Language Processing: Recent Advances, Challenges, and
Future Directions [4.409836695738517]
We present a structured overview of NLP robustness research by summarizing the literature in a systematic way across various dimensions.
We then take a deep-dive into the various dimensions of robustness, across techniques, metrics, embeddings, and benchmarks.
arXiv Detail & Related papers (2022-01-03T17:17:11Z) - Deep Learning meets Liveness Detection: Recent Advancements and
Challenges [3.2011056280404637]
We present a comprehensive survey on the literature related to deep-feature-based FAS methods since 2017.
We cover predominant public datasets for FAS in chronological order, their evolutionary progress, and the evaluation criteria.
arXiv Detail & Related papers (2021-12-29T19:24:58Z)
- A Survey on Text Classification: From Shallow to Deep Learning [83.47804123133719]
The last decade has seen a surge of research in this area due to the unprecedented success of deep learning.
This paper fills the gap by reviewing the state-of-the-art approaches from 1961 to 2021.
We create a taxonomy for text classification according to the text involved and the models used for feature extraction and classification.
arXiv Detail & Related papers (2020-08-02T00:09:03Z)
- Low-resource Languages: A Review of Past Work and Future Challenges [68.8204255655161]
A current problem in NLP is massaging and processing low-resource languages which lack useful training attributes such as supervised data, number of native speakers or experts, etc.
This review paper concisely summarizes previous groundbreaking achievements made towards resolving this problem, and analyzes potential improvements in the context of the overall future research direction.
arXiv Detail & Related papers (2020-06-12T15:21:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.