A Systematic Analysis on the Temporal Generalization of Language Models in Social Media
- URL: http://arxiv.org/abs/2405.13017v1
- Date: Wed, 15 May 2024 05:41:06 GMT
- Title: A Systematic Analysis on the Temporal Generalization of Language Models in Social Media
- Authors: Asahi Ushio, Jose Camacho-Collados
- Abstract summary: This paper focuses on temporal shifts in social media and, in particular, Twitter.
We propose a unified evaluation scheme to assess the performance of language models (LMs) under temporal shift.
- Score: 12.035331011654078
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In machine learning, temporal shifts occur when there are differences between training and test splits in terms of time. For streaming data such as news or social media, models are commonly trained on a fixed corpus from a certain period of time, and they can become obsolete due to the dynamism and evolving nature of online content. This paper focuses on temporal shifts in social media and, in particular, Twitter. We propose a unified evaluation scheme to assess the performance of language models (LMs) under temporal shift on standard social media tasks. LMs are tested on five diverse social media NLP tasks under different temporal settings, which revealed two important findings: (i) the decrease in performance under temporal shift is consistent across different models for entity-focused tasks such as named entity recognition or disambiguation, and hate speech detection, but not significant in the other tasks analysed (i.e., topic and sentiment classification); and (ii) continuous pre-training on the test period does not improve the temporal adaptability of LMs.
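The evaluation protocol described in the abstract can be illustrated with a minimal, hedged sketch: split timestamped, labelled posts into a training period and a later test period, fit a classifier on the earlier period, and compare its score on held-out data from the same period against data from the future period. The column names (`text`, `label`, `created_at`), the year boundary, and the TF-IDF/logistic-regression pipeline below are illustrative assumptions; the paper itself evaluates pre-trained language models on five social media tasks.

```python
# Minimal sketch of a temporal-shift evaluation, assuming a DataFrame of
# timestamped, labelled posts. Column names and the 2020 boundary are
# illustrative; the paper evaluates pre-trained LMs, not a TF-IDF model.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def temporal_shift_gap(df: pd.DataFrame, split_year: int = 2020) -> dict:
    past = df[df["created_at"].dt.year < split_year]
    future = df[df["created_at"].dt.year >= split_year]

    # Hold out part of the training period for the "no shift" comparison.
    train, past_test = train_test_split(past, test_size=0.2, random_state=0)

    vec = TfidfVectorizer(min_df=2)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(train["text"]), train["label"])

    same_period = f1_score(past_test["label"],
                           clf.predict(vec.transform(past_test["text"])),
                           average="macro")
    shifted = f1_score(future["label"],
                       clf.predict(vec.transform(future["text"])),
                       average="macro")
    # A large positive gap suggests the task is sensitive to temporal shift.
    return {"same_period": same_period, "future": shifted,
            "gap": same_period - shifted}
```

Under this protocol, the abstract reports that the gap is consistent for entity-focused tasks (named entity recognition, disambiguation) and hate speech detection, but not significant for topic and sentiment classification.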
Related papers
- Adaptive Cascading Network for Continual Test-Time Adaptation [12.718826132518577]
We study the problem of continual test-time adaptation, where the goal is to adapt a source pre-trained model to a sequence of unlabelled target domains at test time.
Existing methods on test-time training suffer from several limitations.
arXiv Detail & Related papers (2024-07-17T01:12:57Z)
- Towards Effective Time-Aware Language Representation: Exploring Enhanced Temporal Understanding in Language Models [24.784375155633427]
BiTimeBERT 2.0 is a novel language model pre-trained on a temporal news article collection.
Each objective targets a unique aspect of temporal information.
Results consistently demonstrate that BiTimeBERT 2.0 outperforms models like BERT and other existing pre-trained models.
arXiv Detail & Related papers (2024-06-04T00:30:37Z)
- Revisiting Dynamic Evaluation: Online Adaptation for Large Language Models [88.47454470043552]
We consider the problem of online fine-tuning the parameters of a language model at test time, also known as dynamic evaluation.
Online adaptation turns parameters into temporally changing states and provides a form of context-length extension with memory in weights; a minimal sketch of this idea appears after the list of related papers below.
arXiv Detail & Related papers (2024-03-03T14:03:48Z)
- Subspace Chronicles: How Linguistic Information Emerges, Shifts and Interacts during Language Model Training [56.74440457571821]
We analyze tasks covering syntax, semantics and reasoning, across 2M pre-training steps and five seeds.
We identify critical learning phases across tasks and time, during which subspaces emerge, share information, and later disentangle to specialize.
Our findings have implications for model interpretability, multi-task learning, and learning from limited data.
arXiv Detail & Related papers (2023-10-25T09:09:55Z)
- UniTime: A Language-Empowered Unified Model for Cross-Domain Time Series Forecasting [59.11817101030137]
This research advocates for a unified model paradigm that transcends domain boundaries.
Learning an effective cross-domain model presents several challenges.
We propose UniTime for effective cross-domain time series learning.
arXiv Detail & Related papers (2023-10-15T06:30:22Z)
- Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing [53.89917396428747]
Self-supervised learning in vision-language processing exploits semantic alignment between imaging and text modalities.
We explicitly account for prior images and reports when available during both training and fine-tuning.
Our approach, named BioViL-T, uses a CNN-Transformer hybrid multi-image encoder trained jointly with a text model.
arXiv Detail & Related papers (2023-01-11T16:35:33Z)
- Generic Temporal Reasoning with Differential Analysis and Explanation [61.96034987217583]
We introduce a novel task named TODAY that bridges the gap with temporal differential analysis.
TODAY evaluates whether systems can correctly understand the effect of incremental changes.
We show that TODAY's supervision style and explanation annotations can be used in joint learning.
arXiv Detail & Related papers (2022-12-20T17:40:03Z)
- Time Will Change Things: An Empirical Study on Dynamic Language Understanding in Social Media Classification [5.075802830306718]
We empirically study social media NLU in a dynamic setup, where models are trained on past data and tested on future data.
We show that auto-encoding and pseudo-labeling, used together, provide the best robustness under such temporal dynamics.
arXiv Detail & Related papers (2022-10-06T12:18:28Z)
- Temporal Effects on Pre-trained Models for Language Processing Tasks [9.819970078135343]
We present a set of experiments with systems powered by large neural pre-trained representations for English to demonstrate that temporal model deterioration is not as big a concern.
It is, however, the case that temporal domain adaptation is beneficial, with better performance for a given time period possible when the system is trained on temporally more recent data.
arXiv Detail & Related papers (2021-11-24T20:44:12Z)
- Time Waits for No One! Analysis and Challenges of Temporal Misalignment [42.106972477571226]
We establish a suite of eight diverse tasks across different domains to quantify the effects of temporal misalignment.
We find stronger effects of temporal misalignment on task performance than have been previously reported.
Our findings motivate continued research to improve temporal robustness of NLP models.
arXiv Detail & Related papers (2021-11-14T18:29:19Z)
- Combating Temporal Drift in Crisis with Adapted Embeddings [58.4558720264897]
Language usage changes over time, and this can impact the effectiveness of NLP systems.
This work investigates methods for adapting to changing discourse during crisis events.
arXiv Detail & Related papers (2021-04-17T13:11:41Z)
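For the dynamic evaluation entry above ("Revisiting Dynamic Evaluation: Online Adaptation for Large Language Models"), the sketch below illustrates online adaptation at test time: each chunk of the test stream is scored with the current weights, and the model is then fine-tuned on that same chunk so later chunks are evaluated with updated parameters. The GPT-2 checkpoint, SGD optimizer, and chunk handling are assumptions chosen for illustration, not that paper's exact setup.

```python
# Minimal sketch of dynamic evaluation / online adaptation: score each test
# chunk with the current parameters, then fine-tune on it before moving on.
# Model choice, optimizer, and hyper-parameters are illustrative assumptions.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def dynamic_evaluation(texts, model_name="gpt2", lr=1e-4, max_len=512):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)

    losses = []
    for text in texts:  # each element is one chunk of the test stream
        ids = tok(text, return_tensors="pt",
                  truncation=True, max_length=max_len)["input_ids"]
        out = model(ids, labels=ids)
        losses.append(out.loss.item())  # score with the *current* weights

        # Online update: the parameters now carry memory of this chunk.
        out.loss.backward()
        opt.step()
        opt.zero_grad()

    return math.exp(sum(losses) / len(losses))  # perplexity over the stream
```

Because each chunk is scored before the corresponding update, the returned perplexity reflects how well the continually adapted model tracks the stream, which is the sense in which the weights act as a temporally changing state.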