BUT-FIT at SemEval-2020 Task 5: Automatic detection of counterfactual
statements with deep pre-trained language representation models
- URL: http://arxiv.org/abs/2007.14128v1
- Date: Tue, 28 Jul 2020 11:16:11 GMT
- Title: BUT-FIT at SemEval-2020 Task 5: Automatic detection of counterfactual
statements with deep pre-trained language representation models
- Authors: Martin Fajcik, Josef Jon, Martin Docekal, Pavel Smrz
- Abstract summary: This paper describes BUT-FIT's submission at SemEval-2020 Task 5: Modelling Causal Reasoning in Language: Detecting Counterfactuals.
The challenge focused on detecting whether a given statement contains a counterfactual.
We found the RoBERTa LRM to perform best in both subtasks.
- Score: 6.853018135783218
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This paper describes BUT-FIT's submission at SemEval-2020 Task 5: Modelling
Causal Reasoning in Language: Detecting Counterfactuals. The challenge focused
on detecting whether a given statement contains a counterfactual (Subtask 1)
and extracting both antecedent and consequent parts of the counterfactual from
the text (Subtask 2). We experimented with various state-of-the-art language
representation models (LRMs). We found the RoBERTa LRM to perform best in both
subtasks. We achieved first place in both exact match and F1 for Subtask 2 and
ranked second for Subtask 1.
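As a rough illustration of Subtask 1, which reduces to binary sequence classification over a single statement, the sketch below uses a pre-trained RoBERTa model via the Hugging Face Transformers library. It is not the authors' submission; the checkpoint, label meanings, and example sentence are assumptions.

```python
# Minimal sketch (not the authors' system): Subtask 1 as binary sequence
# classification with a pre-trained RoBERTa checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2  # assumed labels: 0 = factual, 1 = counterfactual
)
model.eval()

statement = "If the deal had closed, the company would have doubled in size."
inputs = tokenizer(statement, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# The freshly initialized head is random here; in practice the model is
# fine-tuned on the task's labelled statements before inference.
prediction = logits.argmax(dim=-1).item()
print("counterfactual" if prediction == 1 else "factual")
```

The same setup applies to the other LRMs the paper compares; only the checkpoint name changes.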
Related papers
- SemEval-2024 Task 8: Multidomain, Multimodel and Multilingual Machine-Generated Text Detection [68.858931667807]
Subtask A is a binary classification task determining whether a text is written by a human or generated by a machine.
Subtask B is to detect the exact source of a text, discerning whether it is written by a human or generated by a specific LLM.
Subtask C aims to identify the changing point within a text, at which the authorship transitions from human to machine.
arXiv Detail & Related papers (2024-04-22T13:56:07Z)
- Findings of the WMT 2022 Shared Task on Translation Suggestion [63.457874930232926]
We report the result of the first edition of the WMT shared task on Translation Suggestion.
The task aims to provide alternatives for specific words or phrases, given entire documents generated by machine translation (MT).
It consists of two sub-tasks: naive translation suggestion and translation suggestion with hints.
arXiv Detail & Related papers (2022-11-30T03:48:36Z)
- Event Causality Identification with Causal News Corpus -- Shared Task 3, CASE 2022 [3.0775142635531685]
The Event Causality Identification Shared Task of CASE 2022 involved two subtasks.
Subtask 1 required participants to predict if a sentence contains a causal relation or not.
Subtask 2 required participants to identify the Cause, Effect and Signal spans per causal sentence.
arXiv Detail & Related papers (2022-11-22T10:34:09Z)
- Effective Cross-Task Transfer Learning for Explainable Natural Language Inference with T5 [50.574918785575655]
We compare sequential fine-tuning with a multi-task learning model in the context of boosting performance on two tasks.
Our results show that while sequential multi-task learning can be tuned to be good at the first of two target tasks, it performs less well on the second and additionally struggles with overfitting.
arXiv Detail & Related papers (2022-10-31T13:26:08Z)
- Towards End-to-End Open Conversational Machine Reading [57.18251784418258]
In open-retrieval conversational machine reading (OR-CMR) task, machines are required to do multi-turn question answering given dialogue history and a textual knowledge base.
We model OR-CMR as a unified text-to-text task in a fully end-to-end style. Experiments on the ShARC and OR-ShARC datasets show the effectiveness of our proposed end-to-end framework.
arXiv Detail & Related papers (2022-10-13T15:50:44Z)
- Zero-Shot Information Extraction as a Unified Text-to-Triple Translation [56.01830747416606]
We cast a suite of information extraction tasks into a text-to-triple translation framework.
We formalize the task as a translation between task-specific input text and output triples.
We study the zero-shot performance of this framework on open information extraction.
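As a generic illustration of the text-to-triple framing (the prompt prefix, checkpoint, and expected output below are assumptions, not the paper's actual decoding procedure), a sequence-to-sequence model can be asked to translate input text into triples rendered as plain text:

```python
# Generic sketch of text-to-triple translation; the task prefix and the
# expected output format are illustrative, not the paper's method.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

text = "Barack Obama was born in Honolulu."
prompt = f"extract triples: {text}"  # hypothetical task prefix
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=32)
# A model trained (or constrained) for this format would ideally emit
# something like "(Barack Obama; born in; Honolulu)".
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```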
arXiv Detail & Related papers (2021-09-23T06:54:19Z)
- ISCAS at SemEval-2020 Task 5: Pre-trained Transformers for Counterfactual Statement Modeling [48.3669727720486]
ISCAS participated in two subtasks of SemEval-2020 Task 5: detecting counterfactual statements and detecting the antecedent and consequent.
This paper describes our system which is based on pre-trained transformers.
arXiv Detail & Related papers (2020-09-17T09:28:07Z)
- UPB at SemEval-2020 Task 6: Pretrained Language Models for Definition Extraction [0.17188280334580194]
This work presents our contribution in the context of the 6th task of SemEval-2020: Extracting Definitions from Free Text in Textbooks.
We use various pretrained language models to solve each of the three subtasks of the competition.
Our best-performing model, evaluated on the DeftEval dataset, obtains 32nd place on the first subtask and 37th place on the second.
arXiv Detail & Related papers (2020-09-11T18:36:22Z)
- SemEval-2020 Task 5: Counterfactual Recognition [36.38097292055921]
Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not.
Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement.
arXiv Detail & Related papers (2020-08-02T20:32:19Z)
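The Subtask-2 span extraction described in the entry above is commonly realized as token classification. The sketch below (checkpoint, tag set, and example are assumptions, not any submitted system) tags each token as antecedent, consequent, or outside, and the tags are then merged back into spans:

```python
# Minimal sketch (assumptions throughout, not a submitted system): Subtask 2
# as token classification with BIO tags for antecedent and consequent spans.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

LABELS = ["O", "B-ANT", "I-ANT", "B-CON", "I-CON"]  # illustrative tag set

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS)
)
model.eval()

sentence = "If it had rained, the match would have been cancelled."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits       # shape: (1, seq_len, num_labels)
pred = logits.argmax(dim=-1)[0].tolist()  # per-token label ids
# After fine-tuning, contiguous ANT/CON tags are merged back into the
# antecedent and consequent character spans required by the task.
print([LABELS[i] for i in pred])
```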