TEST_POSITIVE at W-NUT 2020 Shared Task-3: Joint Event Multi-task
Learning for Slot Filling in Noisy Text
- URL: http://arxiv.org/abs/2009.14262v1
- Date: Tue, 29 Sep 2020 19:08:45 GMT
- Title: TEST_POSITIVE at W-NUT 2020 Shared Task-3: Joint Event Multi-task
Learning for Slot Filling in Noisy Text
- Authors: Chacha Chen, Chieh-Yang Huang, Yaqi Hou, Yang Shi, Enyan Dai, Jiaqi
Wang
- Abstract summary: We propose the Joint Event Multi-task Learning (JOELIN) model for extracting COVID-19 events from Twitter.
Through a unified global learning framework, we make use of all the training data across different events to learn and fine-tune the language model.
We implement a type-aware post-processing procedure using named entity recognition (NER) to further filter the predictions.
- Score: 26.270447944466557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The shared task on extracting COVID-19 events from Twitter asks
participants to develop systems that automatically extract related events from
tweets. Each system should identify a set of pre-defined slots for every event
in order to answer important questions (e.g., Who tested positive? What is the
age of the person? Where is he/she?). To tackle these challenges, we propose the Joint
Event Multi-task Learning (JOELIN) model. Through a unified global learning
framework, we make use of all the training data across different events to
learn and fine-tune the language model. Moreover, we implement a type-aware
post-processing procedure using named entity recognition (NER) to further
filter the predictions. JOELIN outperforms the BERT baseline by 17.2% in micro
F1.
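The type-aware post-processing can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the slot-to-entity-type mapping and the toy tagger are hypothetical stand-ins (a real system would run a trained NER model, e.g. a spaCy pipeline, over the candidate spans).

```python
# Illustrative sketch of type-aware post-processing for slot filling.
# Assumption: each slot expects one NER entity type; candidate slot fills
# whose predicted entity type disagrees with the slot's expected type are
# dropped. The mapping and the toy tagger below are hypothetical.

EXPECTED_TYPE = {"name": "PERSON", "age": "CARDINAL", "where": "GPE"}

def toy_ner(span):
    """Toy stand-in for a real NER tagger (e.g. a spaCy pipeline)."""
    if span.isdigit():
        return "CARDINAL"
    if span in {"Seattle", "New York"}:
        return "GPE"
    return "PERSON"

def type_aware_filter(predictions):
    """Keep only (slot, span) pairs whose entity type matches the slot."""
    return [(slot, span) for slot, span in predictions
            if toy_ner(span) == EXPECTED_TYPE.get(slot)]

# Example: the model predicted "45" both as an age and as a location;
# the type check removes the implausible location fill.
preds = [("name", "John Doe"), ("age", "45"),
         ("where", "45"), ("where", "Seattle")]
print(type_aware_filter(preds))
# -> [('name', 'John Doe'), ('age', '45'), ('where', 'Seattle')]
```

The filter only removes predictions; it never adds new ones, so it can raise precision at a bounded cost in recall.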
Related papers
- GenEARL: A Training-Free Generative Framework for Multimodal Event Argument Role Labeling [89.07386210297373]
GenEARL is a training-free generative framework that harnesses the power of modern generative models to understand event task descriptions.
We show that GenEARL outperforms the contrastive pretraining (CLIP) baseline by 9.4% and 14.2% accuracy for zero-shot EARL on the M2E2 and SwiG datasets.
arXiv Detail & Related papers (2024-04-07T00:28:13Z)
- Towards Event Extraction from Speech with Contextual Clues [61.164413398231254]
We introduce the Speech Event Extraction (SpeechEE) task and construct three synthetic training sets and one human-spoken test set.
Compared to event extraction from text, SpeechEE poses greater challenges mainly due to complex speech signals that are continuous and have no word boundaries.
Our method brings significant improvements on all datasets, achieving a maximum F1 gain of 10.7%.
arXiv Detail & Related papers (2024-01-27T11:07:19Z)
- Unified Demonstration Retriever for In-Context Learning [56.06473069923567]
Unified Demonstration Retriever (UDR) is a single model to retrieve demonstrations for a wide range of tasks.
We propose a multi-task list-wise ranking training framework, with an iterative mining strategy to find high-quality candidates.
Experiments on 30+ tasks across 13 task families and multiple data domains show that UDR significantly outperforms baselines.
arXiv Detail & Related papers (2023-05-07T16:07:11Z)
- MarsEclipse at SemEval-2023 Task 3: Multi-Lingual and Multi-Label
Framing Detection with Contrastive Learning [21.616089539381996]
This paper describes our system for SemEval-2023 Task 3 Subtask 2 on Framing Detection.
We used a multi-label contrastive loss for fine-tuning large pre-trained language models in a multi-lingual setting.
Our system was ranked first on the official test set and on the official shared task leaderboard for five of the six languages.
arXiv Detail & Related papers (2023-04-20T18:42:23Z)
- Segment-level Metric Learning for Few-shot Bioacoustic Event Detection [56.59107110017436]
We propose a segment-level few-shot learning framework that utilizes both the positive and negative events during model optimization.
Our system achieves an F-measure of 62.73 on the DCASE 2022 challenge task 5 (DCASE2022-T5) validation set, outperforming the baseline prototypical network (F-measure 34.02) by a large margin.
arXiv Detail & Related papers (2022-07-15T22:41:30Z)
- Handshakes AI Research at CASE 2021 Task 1: Exploring different
approaches for multilingual tasks [0.22940141855172036]
The aim of the CASE 2021 Shared Task 1 was to detect and classify socio-political and crisis event information in a multilingual setting.
Our submission contained entries in all of the subtasks, and the scores obtained validated our research findings.
arXiv Detail & Related papers (2021-10-29T07:58:49Z)
- Generative Conversational Networks [67.13144697969501]
We propose a framework called Generative Conversational Networks, in which conversational agents learn to generate their own labelled training data.
We show an average improvement of 35% in intent detection and 21% in slot tagging over a baseline model trained from the seed data.
arXiv Detail & Related papers (2021-06-15T23:19:37Z)
- Leveraging Event Specific and Chunk Span features to Extract COVID
Events from tweets [0.0]
We describe our system entry for WNUT 2020 Shared Task-3.
The task was aimed at automating the extraction of a variety of COVID-19 related events from Twitter.
The system ranks 1st on the leaderboard with an F1 of 0.6598, without using any ensembles or additional datasets.
arXiv Detail & Related papers (2020-12-18T04:49:32Z)
- "What Are You Trying to Do?" Semantic Typing of Event Processes [94.3499255880101]
This paper studies a new cognitively motivated semantic typing task, multi-axis event process typing.
We develop a large dataset containing over 60k event processes, featuring ultra fine-grained typing on both the action and object type axes.
We propose a hybrid learning framework, P2GT, which addresses the challenging typing problem with indirect supervision from glosses and a joint learning-to-rank framework.
arXiv Detail & Related papers (2020-10-13T22:37:29Z)
- Phonemer at WNUT-2020 Task 2: Sequence Classification Using COVID
Twitter BERT and Bagging Ensemble Technique based on Plurality Voting [0.0]
We develop a system that automatically identifies whether an English Tweet related to the novel coronavirus (COVID-19) is informative or not.
Our final approach achieved an F1-score of 0.9037 and we were ranked sixth overall with F1-score as the evaluation criteria.
arXiv Detail & Related papers (2020-10-01T10:54:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.