SemEval-2020 Task 5: Counterfactual Recognition
- URL: http://arxiv.org/abs/2008.00563v1
- Date: Sun, 2 Aug 2020 20:32:19 GMT
- Title: SemEval-2020 Task 5: Counterfactual Recognition
- Authors: Xiaoyu Yang, Stephen Obadinma, Huasha Zhao, Qiong Zhang, Stan Matwin,
Xiaodan Zhu
- Abstract summary: Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not.
Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement.
- Score: 36.38097292055921
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a counterfactual recognition (CR) task, the shared Task 5 of
SemEval-2020. Counterfactuals describe potential outcomes (consequents)
produced by actions or circumstances that did not happen or cannot happen and
are counter to the facts (antecedent). Counterfactual thinking is an important
characteristic of the human cognitive system; it connects antecedents and
consequents with causal relations. Our task provides a benchmark for
counterfactual recognition in natural language with two subtasks. Subtask-1
aims to determine whether a given sentence is a counterfactual statement or
not. Subtask-2 requires the participating systems to extract the antecedent and
consequent in a given counterfactual statement. During the SemEval-2020
official evaluation period, we received 27 submissions to Subtask-1 and 11 to
Subtask-2. The data, baseline code, and leaderboard can be found at
https://competitions.codalab.org/competitions/21691. The data and baseline code
are also available at https://zenodo.org/record/3932442.
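The two subtasks map naturally onto standard model heads: Subtask-1 onto sequence classification, Subtask-2 onto span extraction. Below is a minimal, hedged sketch of Subtask-1 as binary sequence classification with a pre-trained transformer; the checkpoint name, label mapping, and example sentence are illustrative assumptions (HuggingFace transformers and PyTorch assumed installed), and a real system would first fine-tune on the Task 5 training data.

```python
# Illustrative sketch only: Subtask-1 as binary sequence classification.
# Assumes the HuggingFace `transformers` and `torch` packages; the checkpoint
# name and label mapping are placeholders, not the task's official baseline.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-base"  # placeholder; fine-tune on the Task 5 data in practice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def is_counterfactual(sentence: str) -> bool:
    """Return True if label 1 (assumed to mean 'counterfactual') wins."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, 2)
    return logits.argmax(dim=-1).item() == 1

print(is_counterfactual("If I had left earlier, I would have caught the train."))
```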
Related papers
- Effective Cross-Task Transfer Learning for Explainable Natural Language
Inference with T5 [50.574918785575655]
We compare sequential fine-tuning with a model for multi-task learning in the context of boosting performance on two tasks.
Our results show that while sequential multi-task learning can be tuned to be good at the first of two target tasks, it performs less well on the second and additionally struggles with overfitting.
arXiv Detail & Related papers (2022-10-31T13:26:08Z)
- SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning [47.49596196559958]
This paper introduces SemEval-2021 shared Task 4: Reading Comprehension of Abstract Meaning (ReCAM).
Given a passage and the corresponding question, a participating system is expected to choose the correct answer from five candidates of abstract concepts.
Subtask 1 aims to evaluate how well a system can model concepts that cannot be directly perceived in the physical world.
Subtask 2 focuses on models' ability to comprehend nonspecific concepts located high in a hypernym hierarchy.
Subtask 3 aims to provide some insights into models' generalizability over the two types of abstractness.
arXiv Detail & Related papers (2021-05-31T11:04:17Z)
- ISCAS at SemEval-2020 Task 5: Pre-trained Transformers for Counterfactual Statement Modeling [48.3669727720486]
ISCAS participated in two subtasks of SemEval 2020 Task 5: detecting counterfactual statements and detecting the antecedent and consequent.
This paper describes our system which is based on pre-trained transformers.
arXiv Detail & Related papers (2020-09-17T09:28:07Z)
- BUT-FIT at SemEval-2020 Task 5: Automatic detection of counterfactual statements with deep pre-trained language representation models [6.853018135783218]
This paper describes BUT-FIT's submission at SemEval-2020 Task 5: Modelling Causal Reasoning in Language: Detecting Counterfactuals.
The challenge focused on detecting whether a given statement contains a counterfactual.
We found the RoBERTa language representation model (LRM) to perform best in both subtasks.
arXiv Detail & Related papers (2020-07-28T11:16:11Z)
- IITK-RSA at SemEval-2020 Task 5: Detecting Counterfactuals [3.0396370700420063]
This paper describes our efforts in tackling Task 5 of SemEval-2020.
The task involved detecting a class of textual expressions known as counterfactuals.
Counterfactual statements describe events that have not occurred or could not have occurred, along with the possible implications of such events.
arXiv Detail & Related papers (2020-07-21T14:45:53Z)
- SemEval-2020 Task 4: Commonsense Validation and Explanation [24.389998904122244]
SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), includes three subtasks.
We aim to evaluate whether a system can distinguish a natural language statement that makes sense to humans from one that does not.
For Subtask A and Subtask B, the performance of the top-ranked systems is close to that of humans.
arXiv Detail & Related papers (2020-07-01T04:41:05Z)
- Counterfactual Detection meets Transfer Learning [48.82717416666232]
We show that detecting counterfactuals is a straightforward binary classification task that can be implemented with minimal adaptation of existing model architectures.
We introduce a new end-to-end pipeline that treats antecedent and consequent extraction as an entity recognition task, adapting it into token classification (an illustrative sketch follows this entry).
arXiv Detail & Related papers (2020-05-27T02:02:57Z)
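As a hedged illustration of the token-classification framing described in the entry above, the following sketch tags each word with BIO labels for antecedent (ANT) and consequent (CON) spans. The label set, checkpoint, and example are assumptions, not the paper's actual configuration.

```python
# Illustrative sketch only: Subtask-2 span extraction framed as token
# classification with BIO tags. The label set, checkpoint, and example are
# assumptions; a usable model would be fine-tuned on the Task 5 annotations.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

LABELS = ["O", "B-ANT", "I-ANT", "B-CON", "I-CON"]  # antecedent / consequent tags
MODEL_NAME = "roberta-base"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))
model.eval()

def tag_spans(sentence: str):
    """Predict one BIO tag per word; contiguous ANT/CON runs form the spans."""
    words = sentence.split()
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt", truncation=True)
    with torch.no_grad():
        preds = model(**enc).logits[0].argmax(dim=-1).tolist()
    # Keep the prediction of the first sub-token of each word.
    tags, seen = [], set()
    for idx, wid in enumerate(enc.word_ids()):
        if wid is not None and wid not in seen:
            seen.add(wid)
            tags.append((words[wid], LABELS[preds[idx]]))
    return tags

print(tag_spans("If I had left earlier I would have caught the train"))
```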
- L2R2: Leveraging Ranking for Abductive Reasoning [65.40375542988416]
The abductive natural language inference task (αNLI) is proposed to evaluate the abductive reasoning ability of a learning system.
A novel L2R2 approach is proposed under the learning-to-rank framework (a toy ranking-loss sketch follows this entry).
Experiments on the ART dataset achieve state-of-the-art results on the public leaderboard.
arXiv Detail & Related papers (2020-05-22T15:01:23Z)
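The learning-to-rank framing mentioned above can be illustrated with a toy pairwise ranking loss: score each candidate hypothesis and train the scorer so the more plausible hypothesis outranks the less plausible one. Everything below (the scorer, dimensions, and random features standing in for encoder outputs) is an illustrative assumption, not the actual L2R2 model.

```python
# Illustrative sketch only: pairwise learning-to-rank for hypothesis
# plausibility. Random 768-d vectors stand in for encoder outputs; the
# scorer and margin are assumptions, not the actual L2R2 model.
import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))

pos = torch.randn(4, 768)  # encodings of the more plausible hypotheses
neg = torch.randn(4, 768)  # encodings of the less plausible hypotheses

s_pos = scorer(pos).squeeze(-1)
s_neg = scorer(neg).squeeze(-1)

# target = 1 asks for s_pos to exceed s_neg by at least the margin.
loss = nn.MarginRankingLoss(margin=1.0)(s_pos, s_neg, torch.ones(4))
loss.backward()
print(float(loss))
```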