Time Will Change Things: An Empirical Study on Dynamic Language Understanding in Social Media Classification
- URL: http://arxiv.org/abs/2210.02857v1
- Date: Thu, 6 Oct 2022 12:18:28 GMT
- Title: Time Will Change Things: An Empirical Study on Dynamic Language Understanding in Social Media Classification
- Authors: Yuji Zhang, Jing Li
- Abstract summary: We empirically study social media NLU in a dynamic setup, where models are trained on past data and tested on future data.
We show that auto-encoding and pseudo-labeling together provide the best robustness under dynamicity.
- Score: 5.075802830306718
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Language features are ever-evolving in real-world social media
environments. Many trained natural language understanding (NLU) models, unable
to perform semantic inference over unseen features, may consequently suffer
deteriorating performance under such dynamicity. To address this challenge, we
empirically study social media NLU in a dynamic setup, where models are trained
on past data and tested on future data. This setup better reflects realistic
practice than the commonly adopted static setup of random data splits. To
further analyze model adaptation to dynamicity, we explore the usefulness of
leveraging unlabeled data created after a model is trained. Our experiments
examine unsupervised domain adaptation baselines based on auto-encoding and
pseudo-labeling, as well as a joint framework coupling the two. Results on four
social media tasks show that evolving environments universally degrade
classification accuracy, while auto-encoding and pseudo-labeling together
provide the best robustness under dynamicity.
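The two ingredients above translate naturally into code. Below is a minimal sketch, assuming a hypothetical model that exposes `classify` and `reconstruct` heads and examples carrying a `time` field; the confidence threshold and loss weighting are illustrative choices, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def temporal_split(examples, cutoff):
    """Dynamic setup: train strictly on the past, test on the future.
    The `time` field is an assumed per-example attribute."""
    train = [ex for ex in examples if ex["time"] <= cutoff]
    test = [ex for ex in examples if ex["time"] > cutoff]
    return train, test

def joint_adaptation_loss(model, x_past, y_past, x_future, tau=0.9):
    """Couples supervised training on labeled past data with
    auto-encoding and pseudo-labeling on unlabeled future data."""
    # Supervised cross-entropy on the labeled past.
    loss = F.cross_entropy(model.classify(x_past), y_past)
    # Auto-encoding: reconstruct unlabeled future inputs (MSE is a
    # stand-in; token-level losses would be used for raw text).
    loss = loss + F.mse_loss(model.reconstruct(x_future), x_future)
    # Pseudo-labeling: self-train on confident future predictions.
    with torch.no_grad():
        probs = F.softmax(model.classify(x_future), dim=-1)
        conf, pseudo = probs.max(dim=-1)
        keep = conf >= tau
    if keep.any():
        loss = loss + F.cross_entropy(model.classify(x_future[keep]),
                                      pseudo[keep])
    return loss
```

Under the dynamic setup, `temporal_split` replaces the random split, and the joint loss is what couples the two adaptation baselines.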
Related papers
- Dynamic Post-Hoc Neural Ensemblers [55.15643209328513]
In this study, we explore employing neural networks as ensemble methods.
Motivated by the risk of learning low-diversity ensembles, we propose regularizing the model by randomly dropping base model predictions.
We demonstrate that this approach lower-bounds the diversity within the ensemble, reducing overfitting and improving generalization; a sketch of the prediction-dropping idea follows below.
arXiv Detail & Related papers (2024-10-06T15:25:39Z)
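A compact PyTorch reading of the prediction-dropping regularizer described above; the combiner architecture and drop rate are assumptions, not the paper's exact design:

```python
import torch.nn as nn

class NeuralEnsembler(nn.Module):
    """Post-hoc neural combiner over base model predictions.
    Dropout1d zeroes entire base-model prediction vectors during
    training, discouraging reliance on any single base model."""
    def __init__(self, n_models, n_classes, p_drop=0.3):
        super().__init__()
        self.drop = nn.Dropout1d(p_drop)
        self.combine = nn.Linear(n_models * n_classes, n_classes)

    def forward(self, base_preds):  # (batch, n_models, n_classes)
        x = self.drop(base_preds).flatten(1)
        return self.combine(x)
```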
- How Hard is this Test Set? NLI Characterization by Exploiting Training Dynamics [49.9329723199239]
We propose a method for the automated creation of a challenging test set without relying on the manual construction of artificial and unrealistic examples.
We categorize the test set of popular NLI datasets into three difficulty levels by leveraging methods that exploit training dynamics.
When our characterization method is applied to the training set, models trained on only a fraction of the data achieve performance comparable to those trained on the full dataset; a sketch of difficulty scoring from training dynamics follows below.
arXiv Detail & Related papers (2024-10-04T13:39:21Z)
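One common training-dynamics recipe for such a characterization (in the spirit of dataset cartography) scores each example by the model's confidence on the gold label across epochs; the thresholds here are illustrative assumptions:

```python
import numpy as np

def difficulty_levels(gold_probs):
    """Bucket examples into three difficulty levels from training
    dynamics. `gold_probs` is an (epochs, examples) array holding the
    model's probability on the gold label at each epoch."""
    confidence = gold_probs.mean(axis=0)   # high -> easy to learn
    variability = gold_probs.std(axis=0)   # high -> ambiguous
    levels = np.full(gold_probs.shape[1], "medium", dtype=object)
    levels[(confidence > 0.75) & (variability < 0.15)] = "easy"
    levels[(confidence < 0.35) & (variability < 0.15)] = "hard"
    return levels
```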
- Evaluating the Effectiveness of Video Anomaly Detection in the Wild: Online Learning and Inference for Real-world Deployment [2.1374208474242815]
Video Anomaly Detection (VAD) identifies unusual activities in video streams, a key technology with broad applications ranging from surveillance to healthcare.
Tackling VAD in real-life settings poses significant challenges due to the dynamic nature of human actions, environmental variations, and domain shifts.
Online learning is a potential strategy to mitigate this issue by allowing models to adapt to new information continuously; a minimal online-update loop is sketched below.
arXiv Detail & Related papers (2024-04-29T14:47:32Z)
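A minimal sketch of such an online loop, with an illustrative `score`/`loss` model interface that is not the paper's API:

```python
def online_vad(model, stream, optimizer, threshold=0.5):
    """Generic online-learning loop: score each incoming frame, emit an
    anomaly decision, then update the model whenever feedback arrives so
    it tracks environmental drift."""
    for frame, feedback in stream:        # feedback may be None
        score = model.score(frame)
        yield score > threshold           # anomaly decision
        if feedback is not None:          # adapt on new information
            loss = model.loss(frame, feedback)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```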
- ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP).
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines on several downstream perception tasks; the inverse dynamics objective is sketched below.
arXiv Detail & Related papers (2023-06-16T21:51:04Z)
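A sketch of the inverse dynamics prediction objective named above, assuming a discrete action space and illustrative `encoder`/`head` modules:

```python
import torch
import torch.nn.functional as F

def inverse_dynamics_loss(encoder, head, obs_t, obs_next, action):
    """Predict the action taken between two consecutive observations
    from their learned representations. In ALP this objective is
    optimized jointly with a reinforcement learning policy."""
    z = torch.cat([encoder(obs_t), encoder(obs_next)], dim=-1)
    return F.cross_entropy(head(z), action)
```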
- Learning Neural Models for Natural Language Processing in the Face of Distributional Shift [10.990447273771592]
The dominating NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications.
It builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution at both training and test time.
This way of training is inconsistent with how we as humans learn from and operate within a constantly changing stream of information.
It is also ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime.
arXiv Detail & Related papers (2021-09-03T14:29:20Z)
- LEADS: Learning Dynamical Systems that Generalize Across Environments [12.024388048406587]
We propose LEADS, a novel framework that leverages the commonalities and discrepancies among known environments to improve model generalization.
We show that this new setting can exploit knowledge extracted from environment-dependent data and improves generalization for both known and novel environments; a sketch of the shared-plus-residual decomposition follows below.
arXiv Detail & Related papers (2021-06-08T17:28:19Z)
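One reading of the LEADS framework is to pool commonalities in a shared dynamics network and isolate per-environment discrepancies in small residual terms; layer sizes and the regularization below are assumptions:

```python
import torch.nn as nn

class LeadsDynamics(nn.Module):
    """Model each environment's dynamics as a shared component plus a
    small environment-specific residual, so commonalities are pooled
    and discrepancies stay isolated."""
    def __init__(self, dim, n_envs, hidden=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                    nn.Linear(hidden, dim))
        self.residuals = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(n_envs))

    def forward(self, x, env):
        # Generalization pressure: keep the residual small (e.g., via a
        # norm penalty) so most structure lives in the shared term.
        return self.shared(x) + self.residuals[env](x)
```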
- Learning to Continuously Optimize Wireless Resource in a Dynamic Environment: A Bilevel Optimization Perspective [52.497514255040514]
This work develops a new approach that enables data-driven methods to continuously learn and optimize resource allocation strategies in a dynamic environment.
We propose to build the notion of continual learning into wireless system design, so that the learning model can incrementally adapt to the new episodes.
Our design is based on a novel bilevel optimization formulation which ensures a certain "fairness" across different data samples.
arXiv Detail & Related papers (2021-05-03T07:23:39Z)
- Learning Reactive and Predictive Differentiable Controllers for Switching Linear Dynamical Models [7.653542219337937]
We present a framework for learning composite dynamical behaviors from expert demonstrations.
We learn a switching linear dynamical model with contacts encoded in switching conditions as a close approximation of our system dynamics.
We then use discrete-time LQR as the differentiable policy class for data-efficient learning of a control strategy; a minimal finite-horizon LQR recursion is sketched below.
arXiv Detail & Related papers (2021-03-26T04:40:24Z)
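For reference, discrete-time finite-horizon LQR gains come from a backward Riccati recursion; in the paper's setting the (A, B) pair would switch with the active mode, while this single-mode sketch keeps one pair for brevity:

```python
import numpy as np

def lqr_gains(A, B, Q, R, Qf, T):
    """Finite-horizon discrete-time LQR via backward Riccati recursion.
    Returns feedback gains K[t] such that u_t = -K[t] @ x_t."""
    P = Qf
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains ordered t = 0 .. T-1
```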
- Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment [55.91291559442884]
This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation which ensures a certain "fairness" across different data samples; one reading of that objective is sketched below.
arXiv Detail & Related papers (2020-11-16T08:24:34Z)
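One illustrative surrogate for a min-max objective over data samples is an adversarial softmax weighting that interpolates between the average loss and the worst-case sample loss; this is a reading of the formulation, not the paper's exact objective:

```python
import torch

def minmax_fair_loss(per_sample_losses, temperature=1.0):
    """Weight per-sample losses by a softmax adversary: temperature
    near 0 recovers the plain average, while a large temperature
    concentrates weight on the worst-case samples."""
    w = torch.softmax(per_sample_losses.detach() * temperature, dim=0)
    return (w * per_sample_losses).sum()
```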
- Online Learning With Adaptive Rebalancing in Nonstationary Environments [11.501721946030779]
We provide new insights into online learning from nonstationary and imbalanced data.
We propose the novel Adaptive REBAlancing (AREBA) algorithm, which selectively includes in the training set a subset of the majority and minority examples; a queue-based sketch of this idea follows below.
arXiv Detail & Related papers (2020-09-24T20:40:04Z)
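A queue-based sketch of the rebalancing idea, assuming binary labels; the capacity and selection rules are illustrative, not AREBA's exact policy:

```python
from collections import deque

class AdaptiveRebalancer:
    """Keep bounded queues of recent majority- and minority-class
    examples and train on a balanced selection from both, so the
    effective class ratio stays near 1:1 as the stream drifts."""
    def __init__(self, capacity=100):
        self.queues = {0: deque(maxlen=capacity),
                       1: deque(maxlen=capacity)}

    def add(self, x, y):
        self.queues[y].append(x)

    def training_set(self):
        # Downsample the larger queue to the size of the smaller one.
        k = min(len(q) for q in self.queues.values()) or 1
        batch = []
        for y, q in self.queues.items():
            batch += [(x, y) for x in list(q)[-k:]]
        return batch
```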
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer of the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors: the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm. A sketch of the sampled-agent update appears below.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
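A sketch of one such iteration, with an assumed `local_update` agent interface and simple elementwise averaging at the server:

```python
import random

def federated_round(global_w, agents, frac=0.2, lr=0.1):
    """One iteration of the described scheme: a random subset of agents
    performs local updates from the current global model, and the
    server averages the returned models elementwise."""
    sampled = random.sample(agents, max(1, int(frac * len(agents))))
    local = [a.local_update(global_w, lr) for a in sampled]
    return [sum(ws) / len(ws) for ws in zip(*local)]
```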