Characterizing and Measuring Linguistic Dataset Drift
- URL: http://arxiv.org/abs/2305.17127v1
- Date: Fri, 26 May 2023 17:50:51 GMT
- Title: Characterizing and Measuring Linguistic Dataset Drift
- Authors: Tyler A. Chang, Kishaloy Halder, Neha Anna John, Yogarshi Vyas,
Yassine Benajiba, Miguel Ballesteros, Dan Roth
- Abstract summary: We propose three dimensions of linguistic dataset drift: vocabulary, structural, and semantic drift.
These dimensions correspond to content word frequency divergences, syntactic divergences, and meaning changes not captured by word frequencies.
We find that our drift metrics are more effective than previous metrics at predicting out-of-domain model accuracies.
- Score: 65.28821163863665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: NLP models often degrade in performance when real-world data distributions
differ markedly from training data. However, existing dataset drift metrics in
NLP have generally not considered specific dimensions of linguistic drift that
affect model performance, and they have not been validated in their ability to
predict model performance at the individual example level, where such metrics
are often used in practice. In this paper, we propose three dimensions of
linguistic dataset drift: vocabulary, structural, and semantic drift. These
dimensions correspond to content word frequency divergences, syntactic
divergences, and meaning changes not captured by word frequencies (e.g. lexical
semantic change). We propose interpretable metrics for all three drift
dimensions, and we modify past performance prediction methods to predict model
performance at both the example and dataset level for English sentiment
classification and natural language inference. We find that our drift metrics
are more effective than previous metrics at predicting out-of-domain model
accuracies (mean 16.8% root mean square error decrease), particularly when
compared to popular fine-tuned embedding distances (mean 47.7% error decrease).
Fine-tuned embedding distances are much more effective at ranking individual
examples by expected performance, but decomposing into vocabulary, structural,
and semantic drift produces the best example rankings of all considered
model-agnostic drift metrics (mean 6.7% ROC AUC increase).
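The abstract does not define the metrics precisely, but the vocabulary-drift dimension can be illustrated with a small sketch: compare content-word frequency distributions between training and evaluation data using a divergence measure. The Jensen-Shannon divergence, the toy stopword list, and the helper names below are illustrative assumptions, not the paper's actual definitions:

```python
import math
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it"}  # toy list

def content_word_freqs(texts):
    """Relative frequencies of content words (non-stopword alphabetic tokens)."""
    counts = Counter(
        tok for text in texts
        for tok in text.lower().split()
        if tok.isalpha() and tok not in STOPWORDS
    )
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two frequency dicts (base 2, in [0, 1])."""
    vocab = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in vocab}
    def kl(a, b):
        return sum(a[w] * math.log2(a[w] / b[w]) for w in vocab if a.get(w, 0.0) > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

train = ["the movie was great and moving", "a truly great film"]
eval_ = ["the gadget overheats and the battery drains fast"]
vocab_drift = js_divergence(content_word_freqs(train), content_word_freqs(eval_))
print(f"vocabulary drift (JS divergence): {vocab_drift:.3f}")
```

Structural and semantic drift could be sketched analogously, e.g. as divergences over part-of-speech n-gram distributions and as embedding distances computed after controlling for word frequency.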
Related papers
- What is the Right Notion of Distance between Predict-then-Optimize Tasks? [35.842182348661076]
We show that traditional dataset distances, which rely solely on feature and label dimensions, lack informativeness in the Predict-then-Optimize (PtO) context.
We propose a new dataset distance that incorporates the impacts of downstream decisions.
Our results show that this decision-aware dataset distance effectively captures adaptation success in PtO contexts.
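As a rough illustration of a decision-aware dataset distance (the summary does not give the construction; `decision_aware_distance`, the additive cost term, and the toy `decide` rule below are assumptions), one could add a decision-disagreement term to the ground cost of an optimal-transport distance, here via the POT library:

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def decision_aware_distance(X_src, X_tgt, decide, alpha=1.0):
    """Earth mover's distance with a ground cost mixing feature distance and
    disagreement between downstream decisions (hypothetical construction)."""
    feat_cost = ot.dist(X_src, X_tgt)  # pairwise squared Euclidean distances
    d_src = np.array([decide(x) for x in X_src])
    d_tgt = np.array([decide(x) for x in X_tgt])
    dec_cost = np.abs(d_src[:, None] - d_tgt[None, :])  # decision disagreement
    a = np.full(len(X_src), 1.0 / len(X_src))  # uniform sample weights
    b = np.full(len(X_tgt), 1.0 / len(X_tgt))
    return ot.emd2(a, b, feat_cost + alpha * dec_cost)  # exact EMD value

rng = np.random.default_rng(0)
X_src, X_tgt = rng.normal(0, 1, (50, 4)), rng.normal(0.5, 1, (60, 4))
decide = lambda x: float(x.sum() > 0)  # toy downstream decision rule
print(decision_aware_distance(X_src, X_tgt, decide))
```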
arXiv Detail & Related papers (2024-09-11T04:13:17Z)
- Word Matters: What Influences Domain Adaptation in Summarization? [43.7010491942323]
This paper investigates the fine-grained factors affecting domain adaptation performance.
We propose quantifying dataset learning difficulty as the learning difficulty of generative summarization.
Our experiments conclude that, when considering dataset learning difficulty, the cross-domain overlap and the performance gain in summarization tasks exhibit an approximate linear relationship.
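One simple way to operationalize cross-domain overlap (the summary does not specify the paper's measure; `vocab_overlap` is an assumed helper) is the fraction of target-domain token mass covered by the source-domain vocabulary:

```python
from collections import Counter

def vocab_overlap(source_texts, target_texts):
    """Fraction of target-domain token mass already covered by the source
    vocabulary: one simple, illustrative notion of cross-domain overlap."""
    src_vocab = {tok for t in source_texts for tok in t.lower().split()}
    tgt_counts = Counter(tok for t in target_texts for tok in t.lower().split())
    covered = sum(c for tok, c in tgt_counts.items() if tok in src_vocab)
    return covered / sum(tgt_counts.values())

news = ["the government announced a new policy today"]
bio = ["the protein binds the receptor and regulates gene expression"]
print(f"cross-domain overlap: {vocab_overlap(news, bio):.2f}")
```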
arXiv Detail & Related papers (2024-06-21T02:15:49Z)
- Benchmark Transparency: Measuring the Impact of Data on Evaluation [6.307485015636125]
We propose an automated framework that measures the data point distribution across 6 different dimensions.
We use disproportional stratified sampling to measure how much the data distribution affects absolute (Acc/F1) and relative (Rank) model performance.
We find that the impact of the data is statistically significant and is often larger than the impact of changing the metric.
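A minimal sketch of disproportional stratified sampling, assuming a single made-up stratification dimension (input length) and a placeholder classifier; the framework's six dimensions and exact protocol are not described in the summary:

```python
import random
from statistics import mean

def stratified_accuracy(examples, strata_key, weights, predict, n=500, seed=0):
    """Accuracy on a disproportionally stratified resample of the eval set:
    strata are drawn by `weights` rather than at their natural rates."""
    rng = random.Random(seed)
    strata = {}
    for ex in examples:
        strata.setdefault(strata_key(ex), []).append(ex)
    keys = list(weights)
    sample = [rng.choice(strata[rng.choices(keys, weights=[weights[k] for k in keys])[0]])
              for _ in range(n)]
    return mean(1.0 if predict(ex) == ex["label"] else 0.0 for ex in sample)

# Toy eval set stratified by input length; skew the sample toward each stratum
data = [{"text": "ok" * i, "label": i % 2} for i in range(1, 200)]
key = lambda ex: "long" if len(ex["text"]) > 100 else "short"
model = lambda ex: 0  # placeholder classifier
for w in ({"short": 0.9, "long": 0.1}, {"short": 0.1, "long": 0.9}):
    print(w, stratified_accuracy(data, key, w, model))
```

Comparing the scores across the two skewed resamples shows how much reported accuracy depends on the composition of the evaluation data.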
arXiv Detail & Related papers (2024-03-31T17:33:43Z)
- Volumetric Semantically Consistent 3D Panoptic Mapping [77.13446499924977]
We introduce an online 2D-to-3D semantic instance mapping algorithm aimed at generating semantic 3D maps suitable for autonomous agents in unstructured environments.
It introduces novel ways of integrating semantic prediction confidence during mapping, producing semantic and instance-consistent 3D regions.
The proposed method achieves accuracy superior to the state of the art on public large-scale datasets, improving on a number of widely used metrics.
arXiv Detail & Related papers (2023-09-26T08:03:10Z)
- Does Manipulating Tokenization Aid Cross-Lingual Transfer? A Study on POS Tagging for Non-Standardized Languages [18.210880703295253]
We finetune pretrained language models (PLMs) on seven languages from three different families.
We analyze their zero-shot performance on closely related, non-standardized varieties.
Overall, we find that the similarity between the percentage of words that get split into subwords in the source and target data is the strongest predictor for model performance on target data.
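The predictor described above is straightforward to compute; the sketch below (with a hypothetical `split_word_rate` helper, an arbitrarily chosen multilingual BERT tokenizer, and toy example sentences) measures the gap in subword split rates between source and target data, assuming the `transformers` package is installed:

```python
def split_word_rate(texts, tokenize):
    """Percentage of whitespace-separated words that the subword tokenizer
    splits into more than one piece."""
    words = [w for t in texts for w in t.split()]
    split = sum(1 for w in words if len(tokenize(w)) > 1)
    return 100.0 * split / len(words)

# Example with a HuggingFace tokenizer (model choice is arbitrary here)
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
src_rate = split_word_rate(["Das ist ein Beispiel ."], tok.tokenize)
tgt_rate = split_word_rate(["Des isch es Bispil ."], tok.tokenize)  # toy non-standard variety
print(f"source {src_rate:.1f}% vs target {tgt_rate:.1f}% split; gap {abs(src_rate - tgt_rate):.1f}")
```

Per the finding above, a smaller gap between the two rates would predict better zero-shot performance on the target variety.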
arXiv Detail & Related papers (2023-04-20T08:32:34Z)
- Retrieval-based Disentangled Representation Learning with Natural Language Supervision [61.75109410513864]
We present Vocabulary Disentangled Retrieval (VDR), a retrieval-based framework that harnesses natural language as proxies of the underlying data variation to drive disentangled representation learning.
Our approach employs a bi-encoder model to represent both data and natural language in a vocabulary space, enabling the model to distinguish intrinsic dimensions that capture characteristics within the data through their natural language counterparts, thus achieving disentanglement.
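A minimal sketch of what "representing inputs in a vocabulary space" could look like, assuming a sparse non-negative projection head; this illustrates the idea only and is not VDR's actual architecture:

```python
import torch
import torch.nn as nn

class VocabSpaceEncoder(nn.Module):
    """Illustrative bi-encoder head: map an input embedding to a sparse,
    non-negative vector over the vocabulary (not VDR's exact head)."""
    def __init__(self, hidden_dim, vocab_size):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, h):
        scores = torch.relu(self.proj(h))  # non-negative vocab activations
        return torch.log1p(scores)         # dampen large activations

vocab_size, hidden = 30522, 768
data_enc = VocabSpaceEncoder(hidden, vocab_size)
text_enc = VocabSpaceEncoder(hidden, vocab_size)
h_data, h_text = torch.randn(2, hidden), torch.randn(2, hidden)
sim = (data_enc(h_data) * text_enc(h_text)).sum(-1)  # match in vocabulary space
print(sim.shape)
```

Because each dimension of the representation corresponds to a vocabulary item, high-activation dimensions can be read off as words, which is what makes such representations inspectable.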
arXiv Detail & Related papers (2022-12-15T10:20:42Z)
- Impact of Pretraining Term Frequencies on Few-Shot Reasoning [51.990349528930125]
We investigate how well pretrained language models reason with terms that are less frequent in the pretraining data.
We measure the strength of the correlation between a term's pretraining frequency and model performance for a number of GPT-based language models on various numerical deduction tasks.
Although LMs exhibit strong performance at few-shot numerical reasoning tasks, our results raise the question of how much models actually generalize beyond pretraining data.
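The correlation itself is easy to compute once per-instance correctness and pretraining term counts are available; the toy counts and the `frequency_performance_correlation` helper below are illustrative stand-ins for the paper's richer setup:

```python
import numpy as np

def frequency_performance_correlation(instances, pretrain_counts):
    """Correlation between the (log) pretraining frequency of an instance's
    key term and whether the model answered that instance correctly."""
    freqs = np.log1p([pretrain_counts.get(ex["term"], 0) for ex in instances])
    correct = np.array([ex["correct"] for ex in instances], dtype=float)
    return np.corrcoef(freqs, correct)[0, 1]

# Toy data: frequent terms tend to be answered correctly more often
counts = {"2": 9_000_000, "7": 4_000_000, "23": 120_000, "97": 8_000}
instances = [
    {"term": "2", "correct": 1}, {"term": "7", "correct": 1},
    {"term": "23", "correct": 1}, {"term": "97", "correct": 0},
    {"term": "23", "correct": 0}, {"term": "97", "correct": 0},
]
print(frequency_performance_correlation(instances, counts))
```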
arXiv Detail & Related papers (2022-02-15T05:43:54Z)
- Efficient Nearest Neighbor Language Models [114.40866461741795]
Non-parametric neural language models (NLMs) learn predictive distributions of text utilizing an external datastore.
We show how to achieve up to a 6x speed-up in inference speed while retaining comparable performance.
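The underlying kNN-LM formulation (Khandelwal et al., 2020) interpolates the parametric LM's next-token distribution with one induced by retrieved datastore neighbors; the sketch below uses brute-force search, which is the retrieval step that efficiency work of this kind targets. The function name and the toy datastore are assumptions:

```python
import numpy as np

def knn_lm_next_token(p_lm, hidden, keys, vals, vocab_size, k=8, lam=0.25, temp=1.0):
    """kNN-LM: p(y|x) = lam * p_knn(y|x) + (1 - lam) * p_lm(y|x), where p_knn
    is built from the k nearest datastore entries to the current hidden state."""
    dists = np.linalg.norm(keys - hidden, axis=1)  # brute force; real systems use FAISS
    nn_idx = np.argsort(dists)[:k]
    weights = np.exp(-dists[nn_idx] / temp)
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    for w, tok in zip(weights, vals[nn_idx]):
        p_knn[tok] += w  # mass on the tokens that followed similar contexts
    return lam * p_knn + (1.0 - lam) * p_lm

vocab, dim, n = 100, 16, 1000
rng = np.random.default_rng(0)
keys = rng.normal(size=(n, dim)).astype(np.float32)  # toy (context vector -> next token) store
vals = rng.integers(0, vocab, size=n)
p_lm = np.full(vocab, 1.0 / vocab)
p = knn_lm_next_token(p_lm, keys[0], keys, vals, vocab)
print(p.argmax(), p.sum())  # a valid distribution mixing both sources
```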
arXiv Detail & Related papers (2021-09-09T12:32:28Z)
- Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets [0.0]
Language models can generate harmful and biased outputs and exhibit undesirable behavior.
We propose a Process for Adapting Language Models to Society (PALMS) with Values-Targeted datasets.
We show that significantly adjusting language model behavior is feasible with a small, hand-curated dataset.
arXiv Detail & Related papers (2021-06-18T19:38:28Z)
- Parameter Space Factorization for Zero-Shot Learning across Tasks and Languages [112.65994041398481]
We propose a Bayesian generative model for the space of neural parameters.
We infer the posteriors over such latent variables based on data from seen task-language combinations.
Our model yields comparable or better results than state-of-the-art, zero-shot cross-lingual transfer methods.
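A point-estimate sketch of the factorization idea, assuming per-combination parameters are generated from separate task and language latents via a tiny hypernetwork; the paper treats these latents as Bayesian with inferred posteriors, which this sketch omits:

```python
import torch
import torch.nn as nn

class FactorizedParamSpace(nn.Module):
    """Illustrative factorization: classifier weights for a (task, language)
    pair are composed from independent task and language latents, so unseen
    combinations can be generated zero-shot."""
    def __init__(self, n_tasks, n_langs, latent_dim, n_params):
        super().__init__()
        self.task_z = nn.Embedding(n_tasks, latent_dim)
        self.lang_z = nn.Embedding(n_langs, latent_dim)
        self.hyper = nn.Linear(2 * latent_dim, n_params)  # tiny hypernetwork

    def forward(self, task_id, lang_id):
        z = torch.cat([self.task_z(task_id), self.lang_z(lang_id)], dim=-1)
        return self.hyper(z)  # flat parameter vector for this combination

model = FactorizedParamSpace(n_tasks=3, n_langs=10, latent_dim=8, n_params=256)
# Train on seen (task, language) pairs, then compose an unseen pair zero-shot:
theta_unseen = model(torch.tensor([2]), torch.tensor([9]))
print(theta_unseen.shape)  # torch.Size([1, 256])
```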
arXiv Detail & Related papers (2020-01-30T16:58:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.