Unsupervised and Interpretable Domain Adaptation to Rapidly Filter Tweets for Emergency Services
- URL: http://arxiv.org/abs/2003.04991v2
- Date: Tue, 20 Oct 2020 18:01:19 GMT
- Title: Unsupervised and Interpretable Domain Adaptation to Rapidly Filter Tweets for Emergency Services
- Authors: Jitin Krishnan, Hemant Purohit and Huzefa Rangwala
- Abstract summary: We present a novel method to classify relevant tweets during an ongoing crisis using the publicly available dataset of TREC incident streams.
We use dedicated attention layers for each task to provide model interpretability, which is critical for real-world applications.
We show a practical implication of our work by providing a use-case for the COVID-19 pandemic.
- Score: 18.57009530004948
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: During the onset of a disaster event, filtering relevant information from
social web data is challenging due to its sparse availability and practical
limitations in labeling datasets of an ongoing crisis. In this paper, we
hypothesize that unsupervised domain adaptation through multi-task learning can
be a useful framework to leverage data from past crisis events for training
efficient information filtering models during the sudden onset of a new crisis.
We present a novel method to classify relevant tweets during an ongoing crisis
without seeing any new examples, using the publicly available dataset of TREC
incident streams. Specifically, we construct a customized multi-task
architecture with a multi-domain discriminator for crisis analytics: the
multi-task domain adversarial attention network. This model consists of
dedicated attention layers for each task to provide model interpretability,
which is critical for real-world applications. As deep networks struggle with
sparse datasets, we show that performance can be improved by sharing a base
layer for multi-task learning and
domain adversarial training. Evaluation of domain adaptation for crisis events
is performed by choosing a target event as the test set and training on the
rest. Our results show that the multi-task model outperformed its single-task
counterpart. For the qualitative evaluation of interpretability, we show that
the attention layer can be used as a guide to explain the model's predictions
and to help emergency services examine the model's accountability, by
highlighting the words in a tweet that are deemed important in the
classification process. Finally, we show a practical implication of our work
by providing a use-case for the COVID-19 pandemic.
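
The architecture outlined in the abstract (a shared base layer feeding dedicated per-task attention heads, plus a multi-domain discriminator trained through gradient reversal) can be illustrated with a minimal PyTorch sketch. The BiLSTM encoder, layer sizes, binary task heads, and two-task setup below are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class MultiTaskDomainAdversarialAttention(nn.Module):
    """Shared BiLSTM base, one attention layer per task, and a multi-domain
    discriminator trained through gradient reversal (assumed configuration)."""

    def __init__(self, vocab_size, n_domains, n_tasks=2, emb_dim=100, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared base layer: every task head and the discriminator read from it.
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        # Dedicated attention layer per task -> per-task interpretability.
        self.attn = nn.ModuleList(
            [nn.Linear(2 * hidden, 1) for _ in range(n_tasks)])
        self.heads = nn.ModuleList(
            [nn.Linear(2 * hidden, 2) for _ in range(n_tasks)])
        # Discriminator predicts which crisis event a tweet came from.
        self.domain_clf = nn.Linear(2 * hidden, n_domains)

    def forward(self, tokens, lambd=1.0):
        h, _ = self.encoder(self.embed(tokens))            # (B, T, 2H)
        task_logits, task_attn = [], []
        for attn, head in zip(self.attn, self.heads):
            a = torch.softmax(attn(h).squeeze(-1), dim=1)  # (B, T) word weights
            pooled = (a.unsqueeze(-1) * h).sum(dim=1)      # attention pooling
            task_logits.append(head(pooled))
            task_attn.append(a)  # exposed so predictions can be explained
        # Reversed gradients push the shared base toward event-invariant features.
        domain_logits = self.domain_clf(GradReverse.apply(h.mean(dim=1), lambd))
        return task_logits, domain_logits, task_attn
```

Because each task owns its attention vector, the per-word weights returned in `task_attn` can be inspected to see which tokens drove a given prediction, which is the interpretability use described in the abstract.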
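The leave-one-event-out evaluation described in the abstract (hold out one crisis event as the unseen target, train on all the others) reduces to a short loop. Here `events`, `train`, `evaluate`, and `report` are hypothetical stand-ins, not names from the paper.

```python
# Hypothetical leave-one-event-out protocol: the target event is never seen
# during training, matching the unsupervised domain adaptation setting.
for target in events:                       # events: {event_name: [examples]}
    train_set = [ex for name, exs in events.items()
                 if name != target for ex in exs]
    model = train(train_set)                # multi-task + adversarial training
    report(target, evaluate(model, events[target]))
```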
Related papers
- An Information Criterion for Controlled Disentanglement of Multimodal Data [39.601584166020274]
Multimodal representation learning seeks to relate and decompose information inherent in multiple modalities.
Disentangled Self-Supervised Learning (DisentangledSSL) is a novel self-supervised approach for learning disentangled representations.
arXiv Detail & Related papers (2024-10-31T14:57:31Z)
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z)
- Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning [85.66907881270785]
We propose a data curriculum method, namely Data-CUBE, that arranges the orders of all the multi-task data for training.
At the task level, we aim to find the optimal task order that minimizes the total cross-task interference risk.
At the instance level, we measure the difficulty of all instances per task, then divide them into easy-to-difficult mini-batches for training.
arXiv Detail & Related papers (2024-01-07T18:12:20Z)
- Negotiated Representations to Prevent Forgetting in Machine Learning Applications [0.0]
Catastrophic forgetting is a significant challenge in the field of machine learning.
We propose a novel method for preventing catastrophic forgetting in machine learning applications.
arXiv Detail & Related papers (2023-11-30T22:43:50Z)
- Enhancing Crisis-Related Tweet Classification with Entity-Masked Language Modeling and Multi-Task Learning [0.30458514384586394]
We propose a combination of entity-masked language modeling and hierarchical multi-label classification as a multi-task learning problem.
We evaluate our method on tweets from the TREC-IS dataset and show an absolute gain in F1-score of up to 10% for actionable information types.
arXiv Detail & Related papers (2022-11-21T13:54:10Z)
- Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks [65.23947618404046]
We introduce a framework that acquires goal-conditioned policies for unseen temporally extended tasks via offline reinforcement learning on broad data.
When faced with a novel task goal, the framework uses an affordance model to plan a sequence of lossy representations as subgoals that decomposes the original task into easier problems.
We show that our framework can be pre-trained on large-scale datasets of robot experiences from prior work and efficiently fine-tuned for novel tasks, entirely from visual inputs without any manual reward engineering.
arXiv Detail & Related papers (2022-10-12T21:46:38Z)
- Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) addresses scenarios in which the test data do not follow the same distribution as the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z)
- Lifelong Unsupervised Domain Adaptive Person Re-identification with Coordinated Anti-forgetting and Adaptation [127.6168183074427]
We propose a new task, Lifelong Unsupervised Domain Adaptive (LUDA) person ReID.
This is challenging because it requires the model to continuously adapt to unlabeled data of the target environments.
We design an effective scheme for this task, dubbed CLUDA-ReID, where the anti-forgetting is harmoniously coordinated with the adaptation.
arXiv Detail & Related papers (2021-12-13T13:19:45Z)
- Few-Shot Unsupervised Continual Learning through Meta-Examples [21.954394608030388]
We introduce a novel and complex setting involving unsupervised meta-continual learning with unbalanced tasks.
We exploit a meta-learning scheme that simultaneously alleviates catastrophic forgetting and favors the generalization to new tasks.
Experimental results on few-shot learning benchmarks show competitive performance even compared to the supervised case.
arXiv Detail & Related papers (2020-09-17T07:02:07Z)
- Multimodal Categorization of Crisis Events in Social Media [81.07061295887172]
We present a new multimodal fusion method that leverages both images and texts as input.
In particular, we introduce a cross-attention module that can filter uninformative and misleading components from weak modalities.
We show that our method outperforms the unimodal approaches and strong multimodal baselines by a large margin on three crisis-related tasks.
arXiv Detail & Related papers (2020-04-10T06:31:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all of its information) and is not responsible for any consequences of its use.