Event Causality Identification with Causal News Corpus -- Shared Task 3,
CASE 2022
- URL: http://arxiv.org/abs/2211.12154v1
- Date: Tue, 22 Nov 2022 10:34:09 GMT
- Authors: Fiona Anting Tan, Hansi Hettiarachchi, Ali Hürriyetoğlu, Tommaso
Caselli, Onur Uca, Farhana Ferdousi Liza, Nelleke Oostdijk
- Abstract summary: Event Causality Identification Shared Task of CASE 2022 involved two subtasks.
Subtask 1 required participants to predict if a sentence contains a causal relation or not.
Subtask 2 required participants to identify the Cause, Effect and Signal spans per causal sentence.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Event Causality Identification Shared Task of CASE 2022 involved two
subtasks working on the Causal News Corpus. Subtask 1 required participants to
predict if a sentence contains a causal relation or not. This is a supervised
binary classification task. Subtask 2 required participants to identify the
Cause, Effect and Signal spans per causal sentence. This could be seen as a
supervised sequence labeling task. For both subtasks, participants uploaded
their predictions for a held-out test set, and ranking was based on binary F1
and macro F1 scores for Subtasks 1 and 2, respectively. This paper summarizes
the work of the 17 teams that submitted results to our competition and the 12
system description papers that were received. The best F1 scores achieved for
Subtasks 1 and 2 were 86.19% and 54.15%, respectively. All the top-performing
approaches involved pre-trained language models fine-tuned to the targeted
task. We further discuss these approaches and analyze errors across
participants' systems in this paper.
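For reference, the two ranking metrics described above can be sketched with scikit-learn. The labels and predictions below are illustrative examples, not data from the shared task; the tag names (B-Cause, B-Effect, B-Signal) assume a token-level BIO encoding of the spans, which is one common way to cast the sequence labeling setup.

```python
from sklearn.metrics import f1_score

# Subtask 1: binary classification (1 = causal sentence, 0 = non-causal).
# Ranking used binary F1, i.e. F1 on the positive (causal) class.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
binary_f1 = f1_score(y_true, y_pred, average="binary")

# Subtask 2: span labeling, ranked by macro F1. With token-level
# BIO-style tags, macro F1 averages the per-class F1 scores equally.
tags_true = ["B-Cause", "I-Cause", "O", "B-Effect", "B-Signal", "O"]
tags_pred = ["B-Cause", "O",       "O", "B-Effect", "B-Signal", "O"]
macro_f1 = f1_score(tags_true, tags_pred, average="macro")

print(round(binary_f1, 3), round(macro_f1, 3))  # 0.857 0.76
```

Note that macro F1 weights every label class equally, so rare classes such as Signal spans influence the score as much as the frequent "O" class.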
Related papers
- BoschAI @ Causal News Corpus 2023: Robust Cause-Effect Span Extraction
using Multi-Layer Sequence Tagging and Data Augmentation (2023-12-11)
The Event Causality Identification with Causal News Corpus Shared Task addresses two aspects of this challenge.
Subtask 1 aims at detecting causal relationships in texts, and Subtask 2 requires identifying signal words and the spans that refer to the cause or effect.
Our system, based on pre-trained transformers, stacked sequence tagging, and synthetic data augmentation, ranks third in Subtask 1 and wins Subtask 2 with an F1 score of 72.8.
- Unifying Event Detection and Captioning as Sequence Generation via
Pre-Training (2022-07-18)
We propose a unified pre-training and fine-tuning framework to enhance the inter-task association between event detection and captioning.
Our model outperforms state-of-the-art methods and can be further boosted when pre-trained on extra large-scale video-text data.
- RuArg-2022: Argument Mining Evaluation (2022-06-18)
This paper is a report by the organizers of the first competition of argumentation analysis systems dealing with Russian-language texts.
A corpus containing 9,550 sentences (comments on social media posts) on three topics related to the COVID-19 pandemic was prepared.
The system that won first place in both tasks used the NLI (Natural Language Inference) variant of the BERT architecture.
- IAM: A Comprehensive and Large-Scale Dataset for Integrated Argument
Mining Tasks (2022-03-23)
In this work, we introduce a comprehensive, large dataset named IAM, which can be applied to a series of argument mining tasks.
Nearly 70k sentences in the dataset are fully annotated based on their argument properties.
We propose two new integrated argument mining tasks associated with the debate preparation process: (1) claim extraction with stance classification (CESC) and (2) claim-evidence pair extraction (CEPE).
- Overview of the CLEF-2019 CheckThat!: Automatic Identification and
Verification of Claims (2021-09-25)
The CheckThat! lab featured two tasks in two different languages: English and Arabic.
The most successful approaches to Task 1 used various neural networks and logistic regression.
Learning-to-rank was used by the highest-scoring runs for subtask A.
- Findings of the NLP4IF-2021 Shared Tasks on Fighting the COVID-19
Infodemic and Censorship Detection (2021-09-23)
We present the results of the NLP4IF-2021 shared tasks.
Ten teams submitted systems for task 1, and one team participated in task 2.
The best systems used pre-trained Transformers and ensembles.
- Learning Constraints and Descriptive Segmentation for Subevent Detection (2021-09-13)
We propose an approach to learning and enforcing constraints that capture dependencies between subevent detection and EventSeg prediction.
We adopt Rectifier Networks for constraint learning and then convert the learned constraints into a regularization term in the loss function of the neural model.
- ESTER: A Machine Reading Comprehension Dataset for Event Semantic
Relation Reasoning (2021-04-16)
We introduce ESTER, a comprehensive machine reading comprehension dataset for event semantic relation reasoning.
We study the five most commonly used event semantic relations and formulate them as question answering tasks.
Experimental results show that current SOTA systems achieve 60.5%, 57.8%, and 76.3% for event-based F1, token-based F1, and HIT@1 scores, respectively.
- TEST_POSITIVE at W-NUT 2020 Shared Task-3: Joint Event Multi-task
Learning for Slot Filling in Noisy Text (2020-09-29)
We propose the Joint Event Multi-task Learning (JOELIN) model for extracting COVID-19 events from Twitter.
Through a unified global learning framework, we make use of all the training data across different events to learn and fine-tune the language model.
We implement a type-aware post-processing procedure using named entity recognition (NER) to further filter the predictions.
- SemEval-2020 Task 10: Emphasis Selection for Written Text in Visual
Media (2020-08-07)
We present the main findings and compare the results of SemEval-2020 Task 10, Emphasis Selection for Written Text in Visual Media.
The goal of this shared task is to design automatic methods for emphasis selection.
Analysis of the submitted systems indicates that BERT and RoBERTa were the most common choices of pre-trained model.
- BUT-FIT at SemEval-2020 Task 5: Automatic detection of counterfactual
statements with deep pre-trained language representation models (2020-07-28)
This paper describes BUT-FIT's submission to SemEval-2020 Task 5: Modelling Causal Reasoning in Language: Detecting Counterfactuals.
The challenge focused on detecting whether a given statement contains a counterfactual.
We found the RoBERTa LRM to perform best in both subtasks.
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.