Nowruz at SemEval-2022 Task 7: Tackling Cloze Tests with Transformers and Ordinal Regression
- URL: http://arxiv.org/abs/2204.00556v1
- Date: Fri, 1 Apr 2022 16:36:10 GMT
- Title: Nowruz at SemEval-2022 Task 7: Tackling Cloze Tests with Transformers and Ordinal Regression
- Authors: Mohammadmahdi Nouriborji, Omid Rohanian, David Clifton
- Abstract summary: This paper outlines the system with which team Nowruz participated in SemEval 2022 Task 7: Identifying Plausible Clarifications of Implicit and Underspecified Phrases.
- Score: 1.9078991171384017
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper outlines the system with which team Nowruz participated in
SemEval 2022 Task 7: Identifying Plausible Clarifications of Implicit and
Underspecified Phrases, for both subtasks A and B. Using a pre-trained
transformer as a backbone, the model targeted multi-task classification and
ranking in the context of finding the best fillers for a cloze task based on
instructional texts from the website wikiHow.
The system employed a combination of two ordinal regression components to
tackle this task in a multi-task learning scenario. According to the official
leaderboard of the shared task, the system ranked 5th on the ranking subtask
and 7th on the classification subtask out of 21 participating teams. With
additional experiments, the models have since been further optimised.
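
To make the described architecture concrete, below is a minimal sketch of a
pre-trained transformer backbone with two ordinal regression heads trained
jointly for the classification and ranking subtasks. The CORAL-style
cumulative-logit heads, the model name, the three plausibility classes, and
all identifiers are illustrative assumptions, not the authors' exact
implementation.

```python
# Hedged sketch: shared transformer encoder with two ordinal heads,
# one per subtask. All names and hyperparameters are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer
from scipy.stats import spearmanr

class OrdinalHead(nn.Module):
    """Predicts K-1 cumulative logits P(y > k) for K ordered classes."""
    def __init__(self, hidden_size: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(hidden_size, 1, bias=False)
        # One learnable threshold per boundary between adjacent classes.
        self.thresholds = nn.Parameter(torch.zeros(num_classes - 1))

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        return self.fc(pooled) + self.thresholds  # shape: (batch, K-1)

class ClozeFillerScorer(nn.Module):
    """Shared encoder; one ordinal head scores plausibility classes
    (subtask A), the other produces scores used for ranking (subtask B)."""
    def __init__(self, model_name: str = "roberta-base", num_classes: int = 3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.class_head = OrdinalHead(hidden, num_classes)
        self.rank_head = OrdinalHead(hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]  # [CLS]-style pooling
        return self.class_head(pooled), self.rank_head(pooled)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = ClozeFillerScorer()

# Score one candidate filler inserted into an instruction (toy example).
batch = tokenizer("Put the lid on the pot before the water boils.",
                  return_tensors="pt")
class_logits, rank_logits = model(batch["input_ids"], batch["attention_mask"])

# Expected ordinal label = sum of cumulative probabilities. This scalar can
# be compared across fillers for ranking, and ranking quality evaluated
# against gold plausibility ratings with Spearman's rank correlation.
score = torch.sigmoid(rank_logits).sum(dim=-1)
gold = [4.2, 1.0, 3.5]  # hypothetical gold plausibility ratings
pred = [3.9, 1.5, 3.0]  # hypothetical system scores
rho, _ = spearmanr(gold, pred)
```

In training, each ordinal head would typically take a binary cross-entropy
loss over its cumulative logits, with the two losses combined for multi-task
learning; the abstract does not specify the exact heads, losses, or weighting,
so this sketch is one plausible instantiation rather than the submitted system.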
Related papers
- SemEval-2024 Shared Task 6: SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes [48.83290963506378]
This paper presents the results of SHROOM, a shared task focused on detecting hallucinations.
We observe a number of key trends in how this approach was tackled.
While a majority of the teams did outperform our proposed baseline system, the performances of top-scoring systems are still consistent with a random handling of the more challenging items.
arXiv Detail & Related papers (2024-03-12T15:06:22Z)
- Mavericks at ArAIEval Shared Task: Towards a Safer Digital Space -- Transformer Ensemble Models Tackling Deception and Persuasion [0.0]
We present our approaches for task 1-A and task 2-A of the shared task, which focus on persuasion technique detection and disinformation detection, respectively.
The tasks use multigenre snippets of tweets and news articles for the given binary classification problem.
We achieved a micro F1-score of 0.742 on task 1-A (8th on the leaderboard) and 0.901 on task 2-A (7th on the leaderboard).
arXiv Detail & Related papers (2023-11-30T17:26:57Z)
- X-PuDu at SemEval-2022 Task 7: A Replaced Token Detection Task Pre-trained Model with Pattern-aware Ensembling for Identifying Plausible Clarifications [13.945286351253717]
This paper describes our winning system on SemEval 2022 Task 7: Identifying Plausible Clarifications of Implicit and Underspecified Phrases in instructional texts.
A replaced-token-detection pre-trained model is used with slightly different task-specific heads for SubTask-A (multi-class classification) and SubTask-B (ranking).
Our system achieves a 68.90% accuracy score and a 0.8070 Spearman's rank correlation score, surpassing the 2nd-place system by large margins of 2.7 and 2.2 percentage points on SubTask-A and SubTask-B, respectively.
arXiv Detail & Related papers (2022-11-27T05:46:46Z)
- Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization [101.72755769194677]
We formulate few-shot task generalization as a reinforcement learning problem in which a task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z)
- LDSA: Learning Dynamic Subtask Assignment in Cooperative Multi-Agent Reinforcement Learning [122.47938710284784]
We propose a novel framework for learning dynamic subtask assignment (LDSA) in cooperative MARL.
To reasonably assign agents to different subtasks, we propose an ability-based subtask selection strategy.
We show that LDSA learns reasonable and effective subtask assignment for better collaboration.
arXiv Detail & Related papers (2022-05-05T10:46:16Z)
- IISERB Brains at SemEval 2022 Task 6: A Deep-learning Framework to Identify Intended Sarcasm in English [6.46316101972863]
This paper describes the system architectures and the models submitted by our team "IISERBBrains" to the SemEval 2022 Task 6 competition.
We also report the other models and results that we obtained through our experiments after the organizers published the gold labels of the evaluation data.
arXiv Detail & Related papers (2022-03-04T11:23:54Z)
- Combining Modular Skills in Multitask Learning [149.8001096811708]
A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically to new tasks.
In this work, we assume each task is associated with a subset of latent discrete skills from a (potentially small) inventory.
We find that the modular design of a network significantly increases sample efficiency in reinforcement learning and few-shot generalisation in supervised learning.
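To make the skill-inventory idea concrete, here is a minimal sketch assuming a soft task-to-skill allocation over a small set of parametrised skill modules; the identifier names, the linear skill modules, and the soft relaxation of the discrete skill subsets are all illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of skill-based modular multitask learning: each task softly
# selects a subset of skill modules from a shared inventory.
import torch
import torch.nn as nn

class SkillInventory(nn.Module):
    def __init__(self, num_tasks: int, num_skills: int, dim: int):
        super().__init__()
        # Learnable task-to-skill allocation logits (a soft relaxation of
        # the latent discrete skill subsets described in the paper).
        self.allocation = nn.Parameter(torch.randn(num_tasks, num_skills))
        self.skills = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(num_skills))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        weights = torch.sigmoid(self.allocation[task_id])  # soft skill subset
        # Stack all skill outputs: (batch, dim, num_skills).
        outputs = torch.stack([skill(x) for skill in self.skills], dim=-1)
        # Weighted average over the selected skills.
        return (outputs * weights).sum(-1) / weights.sum().clamp(min=1e-6)

inventory = SkillInventory(num_tasks=8, num_skills=4, dim=16)
y = inventory(torch.randn(2, 16), task_id=3)  # shape: (2, 16)
```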
arXiv Detail & Related papers (2022-02-28T16:07:19Z)
- MagicPai at SemEval-2021 Task 7: Method for Detecting and Rating Humor Based on Multi-Task Adversarial Training [4.691435917434472]
This paper describes MagicPai's system for SemEval 2021 Task 7, HaHackathon: Detecting and Rating Humor and Offense.
This task aims to detect whether the text is humorous and how humorous it is.
We mainly present our solution, a multi-task learning model based on adversarial examples.
arXiv Detail & Related papers (2021-04-21T03:23:02Z)
- ISCAS at SemEval-2020 Task 5: Pre-trained Transformers for Counterfactual Statement Modeling [48.3669727720486]
ISCAS participated in two subtasks of SemEval 2020 Task 5: detecting counterfactual statements and detecting antecedent and consequence.
This paper describes our system, which is based on pre-trained transformers.
arXiv Detail & Related papers (2020-09-17T09:28:07Z)
- Solomon at SemEval-2020 Task 11: Ensemble Architecture for Fine-Tuned Propaganda Detection in News Articles [0.3232625980782302]
This paper describes our system (Solomon) and the results of our participation in SemEval 2020 Task 11, "Detection of Propaganda Techniques in News Articles".
We used a RoBERTa-based transformer architecture for fine-tuning on the propaganda dataset.
Compared to the other participating systems, our submission is ranked 4th on the leaderboard.
arXiv Detail & Related papers (2020-09-16T05:00:40Z)
- Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z)