ISCAS at SemEval-2020 Task 5: Pre-trained Transformers for
Counterfactual Statement Modeling
- URL: http://arxiv.org/abs/2009.08171v1
- Date: Thu, 17 Sep 2020 09:28:07 GMT
- Title: ISCAS at SemEval-2020 Task 5: Pre-trained Transformers for
Counterfactual Statement Modeling
- Authors: Yaojie Lu and Annan Li and Hongyu Lin and Xianpei Han and Le Sun
- Abstract summary: ISCAS participated in two subtasks of SemEval 2020 Task 5: detecting counterfactual statements and detecting antecedent and consequence.
This paper describes our system, which is based on pre-trained transformers.
- Score: 48.3669727720486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: ISCAS participated in two subtasks of SemEval 2020 Task 5: detecting
counterfactual statements and detecting antecedent and consequence. This paper
describes our system, which is based on pre-trained transformers. For the first
subtask, we train several transformer-based classifiers for detecting
counterfactual statements. For the second subtask, we formulate antecedent and
consequence extraction as a query-based question answering problem. The two
subsystems both achieved third place in the evaluation. Our system is openly
released at https://github.com/casnlu/ISCAS-SemEval2020Task5.
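As a concrete illustration of the two formulations above, here is a minimal sketch built on the Hugging Face transformers pipelines: subtask 1 treated as binary sequence classification over statements, and subtask 2 treated as extractive question answering with fixed queries asking for the antecedent and the consequence. The checkpoints and query wordings are illustrative assumptions, not the authors' released configuration.

```python
# A minimal sketch (not the authors' released pipeline): subtask 1 as binary
# sequence classification, subtask 2 as extractive question answering.
# The checkpoints and query wordings below are illustrative assumptions.
from transformers import pipeline

statement = "If the experiment had succeeded, the project would have continued."

# Subtask 1: decide whether the statement is counterfactual.
# "roberta-base" is a placeholder; a checkpoint fine-tuned on the task data is assumed.
classifier = pipeline("text-classification", model="roberta-base")
print(classifier(statement))  # e.g. [{'label': ..., 'score': ...}]

# Subtask 2: recover antecedent and consequence spans by asking fixed queries
# against the statement and keeping the predicted answer spans.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
for query in ("What is the antecedent?", "What is the consequence?"):
    span = qa(question=query, context=statement)
    print(query, "->", span["answer"])
```

In the actual system, the extractive QA model would be fine-tuned on the task's annotated antecedent and consequence spans rather than used off the shelf.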
Related papers
- Context-Aware Transformer Pre-Training for Answer Sentence Selection [102.7383811376319]
We propose three pre-training objectives designed to mimic the downstream fine-tuning task of contextual AS2.
Our experiments show that our pre-training approaches can improve baseline contextual AS2 accuracy by up to 8% on some datasets.
arXiv Detail & Related papers (2023-05-24T17:10:45Z)
- Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection [99.59693674455582]
We propose three novel sentence-level transformer pre-training objectives that incorporate paragraph-level semantics within and across documents.
Our experiments on three public and one industrial AS2 datasets demonstrate the empirical superiority of our pre-trained transformers over baseline models.
arXiv Detail & Related papers (2022-05-20T22:39:00Z)
- Paragraph-based Transformer Pre-training for Multi-Sentence Inference [99.59693674455582]
We show that popular pre-trained transformers perform poorly when used for fine-tuning on multi-candidate inference tasks.
We then propose a new pre-training objective that models the paragraph-level semantics across multiple input sentences.
arXiv Detail & Related papers (2022-05-02T21:41:14Z)
- Answer Generation for Retrieval-based Question Answering Systems [80.28727681633096]
We train a sequence-to-sequence transformer model to generate an answer from a candidate set.
Our tests on three English AS2 datasets show improvement up to 32 absolute points in accuracy over the state of the art.
arXiv Detail & Related papers (2021-06-02T05:45:49Z)
- BUT-FIT at SemEval-2020 Task 5: Automatic detection of counterfactual statements with deep pre-trained language representation models [6.853018135783218]
This paper describes BUT-FIT's submission at SemEval-2020 Task 5: Modelling Causal Reasoning in Language: Detecting Counterfactuals.
The challenge focused on detecting whether a given statement contains a counterfactual.
We found the RoBERTa language representation model (LRM) to perform best in both subtasks.
arXiv Detail & Related papers (2020-07-28T11:16:11Z)
- CS-NET at SemEval-2020 Task 4: Siamese BERT for ComVE [2.0491741153610334]
This paper describes a system for distinguishing between statements that conform to common sense and those that do not.
We use parallel transformer instances (a Siamese configuration), which gives a boost in performance.
We achieved an accuracy of 94.8% in subtask A and 89% in subtask B on the test set.
arXiv Detail & Related papers (2020-07-21T14:08:02Z)
- LMVE at SemEval-2020 Task 4: Commonsense Validation and Explanation using Pretraining Language Model [5.428461405329692]
This paper describes our submission to subtasks A and B of SemEval-2020 Task 4.
For subtask A, we use an ALBERT-based model with an improved input form to pick out the commonsense statement from two candidate statements.
For subtask B, we use a multiple-choice model enhanced with a hint-sentence mechanism to select, from the given options, the reason why a statement is against common sense.
arXiv Detail & Related papers (2020-07-06T05:51:10Z) - Yseop at SemEval-2020 Task 5: Cascaded BERT Language Model for
Counterfactual Statement Analysis [0.0]
We use a BERT base model for the classification task and build a hybrid BERT Multi-Layer Perceptron system to handle the sequence identification task.
Our experiments show that while introducing syntactic and semantic features does little to improve the classification system, using these features as cascaded linear inputs to fine-tune the model's sequence-delimiting ability ensures it outperforms other similar-purpose complex systems, such as BiLSTM-CRF, on the second task.
arXiv Detail & Related papers (2020-05-18T08:19:18Z) - The Cascade Transformer: an Application for Efficient Answer Sentence
Selection [116.09532365093659]
We introduce the Cascade Transformer, a technique to adapt transformer-based models into a cascade of rankers.
When compared to a state-of-the-art transformer model, our approach reduces computation by 37% with almost no impact on accuracy.
arXiv Detail & Related papers (2020-05-05T23:32:01Z)