Duluth at SemEval-2020 Task 7: Using Surprise as a Key to Unlock
Humorous Headlines
- URL: http://arxiv.org/abs/2009.02795v1
- Date: Sun, 6 Sep 2020 18:34:54 GMT
- Title: Duluth at SemEval-2020 Task 7: Using Surprise as a Key to Unlock
Humorous Headlines
- Authors: Shuning Jin, Yue Yin, XianE Tang, Ted Pedersen
- Abstract summary: Inspired by the incongruity theory of humor, we use a contrastive approach to capture the surprise in the edited headlines. In the official evaluation, our system achieves 0.531 RMSE in Subtask 1, placing 11th among 49 submissions.
- Score: 1.8782241143922103
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We use pretrained transformer-based language models in SemEval-2020 Task 7: Assessing the Funniness of Edited News Headlines. Inspired by the incongruity theory of humor, we use a contrastive approach to capture the surprise in the edited headlines. In the official evaluation, our system achieves 0.531 RMSE in Subtask 1, placing 11th among 49 submissions, and 0.632 accuracy in Subtask 2, placing 9th among 32 submissions.
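The abstract does not spell out the architecture, so the following is a minimal sketch of one plausible reading of the contrastive approach, assuming a shared pretrained encoder (here bert-base-uncased via Hugging Face transformers) and a regression head over the difference of the original and edited headline representations; the authors' exact design may differ.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ContrastiveFunninessRegressor(nn.Module):
    """Regress funniness from the change the edit introduces (illustrative)."""

    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)  # shared weights
        self.regressor = nn.Linear(self.encoder.config.hidden_size, 1)

    def encode(self, batch):
        # Use the [CLS] vector as the headline representation.
        return self.encoder(**batch).last_hidden_state[:, 0]

    def forward(self, original, edited):
        # The difference vector captures the "surprise" added by the edit.
        diff = self.encode(edited) - self.encode(original)
        return self.regressor(diff).squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = ContrastiveFunninessRegressor()
orig = tokenizer(["Senate passes sweeping tax bill"], return_tensors="pt")
edit = tokenizer(["Senate passes sweeping nap bill"], return_tensors="pt")
pred = model(orig, edit)                        # predicted mean funniness grade
loss = nn.MSELoss()(pred, torch.tensor([1.4]))  # gold grade here is hypothetical
```

The RMSE reported for Subtask 1 is then simply the square root of this mean squared error computed over the test set.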
Related papers
- ThangDLU at #SMM4H 2024: Encoder-decoder models for classifying text data on social disorders in children and adolescents [49.00494558898933]
This paper describes our participation in Task 3 and Task 5 of the #SMM4H (Social Media Mining for Health) 2024 Workshop.
Task 3 is a multi-class classification task centered on tweets discussing the impact of outdoor environments on symptoms of social anxiety.
Task 5 involves a binary classification task focusing on tweets reporting medical disorders in children.
We applied transfer learning from pre-trained encoder-decoder models such as BART-base and T5-small to identify the labels of a given set of tweets.
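As an illustration of how encoder-decoder models are typically transferred to such classification tasks, here is a hedged sketch that fine-tunes t5-small (Hugging Face transformers) to emit the class label as a short target string; the prompt and label text are hypothetical, not the task's actual scheme, and BART-base would be used analogously via BartForConditionalGeneration.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Hypothetical prompt and label string; the task's real label set differs.
tweet = "classify tweet: fresh air outside really eased my anxiety today"
inputs = tokenizer(tweet, return_tensors="pt")
labels = tokenizer("positive effect", return_tensors="pt").input_ids

# Training step: the decoder learns to generate the label string.
loss = model(**inputs, labels=labels).loss

# Inference: generate the label text and map it back to a class.
pred_ids = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(pred_ids[0], skip_special_tokens=True))
```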
arXiv Detail & Related papers (2024-04-30T17:06:20Z)
- SemEval 2024 -- Task 10: Emotion Discovery and Reasoning its Flip in Conversation (EDiReF) [61.49972925493912]
SemEval-2024 Task 10 is a shared task centred on identifying emotions in code-mixed dialogues.
This task comprises three distinct subtasks: emotion recognition in conversation for code-mixed dialogues, emotion flip reasoning for code-mixed dialogues, and emotion flip reasoning for English dialogues.
A total of 84 participants engaged in this task, with the most adept systems attaining F1-scores of 0.70, 0.79, and 0.76 for the respective subtasks.
arXiv Detail & Related papers (2024-02-29T08:20:06Z)
- HIT-SCIR at MMNLU-22: Consistency Regularization for Multilingual Spoken Language Understanding [56.756090143062536]
We propose to use consistency regularization based on a hybrid data augmentation strategy.
We conduct experiments on the MASSIVE dataset under both full-dataset and zero-shot settings.
Our proposed method improves the performance on both intent detection and slot filling tasks.
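The summary names the technique but not its form; below is a minimal sketch of consistency regularization, assuming a symmetric-KL penalty between predictions on an utterance and on an augmented copy (the paper's hybrid augmentation strategy itself is not reproduced here).

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_orig, logits_aug):
    # Symmetric KL divergence between the two predictive distributions.
    p = F.log_softmax(logits_orig, dim=-1)
    q = F.log_softmax(logits_aug, dim=-1)
    return 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                  + F.kl_div(q, p, log_target=True, reduction="batchmean"))

# Total loss = supervised intent/slot loss + a weighted consistency term.
logits_orig = torch.randn(8, 60)     # e.g. logits over MASSIVE's 60 intents
logits_aug = torch.randn(8, 60)      # same batch after data augmentation
supervised_loss = torch.tensor(1.2)  # placeholder for the task loss
total = supervised_loss + 0.5 * consistency_loss(logits_orig, logits_aug)
```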
arXiv Detail & Related papers (2023-01-05T11:21:15Z)
- Findings of the WMT 2022 Shared Task on Translation Suggestion [63.457874930232926]
We report the result of the first edition of the WMT shared task on Translation Suggestion.
The task aims to provide alternatives for specific words or phrases given the entire documents generated by machine translation (MT).
It consists of two sub-tasks, namely, naive translation suggestion and translation suggestion with hints.
arXiv Detail & Related papers (2022-11-30T03:48:36Z)
- Overview of Abusive and Threatening Language Detection in Urdu at FIRE 2021 [50.591267188664666]
We present two shared tasks of abusive and threatening language detection for the Urdu language.
We present two manually annotated datasets containing tweets labelled as (i) Abusive and Non-Abusive, and (ii) Threatening and Non-Threatening.
For both subtasks, an mBERT-based transformer model showed the best performance.
arXiv Detail & Related papers (2022-07-14T07:38:13Z)
- Nowruz at SemEval-2022 Task 7: Tackling Cloze Tests with Transformers and Ordinal Regression [1.9078991171384017]
This paper outlines the system with which team Nowruz participated in SemEval-2022 Task 7: Identifying Plausible Clarifications of Implicit and Underspecified Phrases.
arXiv Detail & Related papers (2022-04-01T16:36:10Z)
- MagicPai at SemEval-2021 Task 7: Method for Detecting and Rating Humor Based on Multi-Task Adversarial Training [4.691435917434472]
This paper describes MagicPai's system for SemEval 2021 Task 7, HaHackathon: Detecting and Rating Humor and Offense.
This task aims to detect whether the text is humorous and how humorous it is.
We mainly present our solution, a multi-task learning model based on adversarial examples.
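The phrase "multi-task learning model based on adversarial examples" leaves the mechanics open; the sketch below shows one standard realization, FGM-style perturbation of the embedding table combined with two humor heads, which is not necessarily MagicPai's exact setup.

```python
import torch
import torch.nn as nn

embedding = nn.Embedding(1000, 64)   # stand-in for a transformer's embedding table
heads = nn.ModuleDict({              # one head per sub-task (multi-task learning)
    "is_humor": nn.Linear(64, 2),
    "rating": nn.Linear(64, 1),
})
tokens = torch.randint(0, 1000, (4, 16))
y_cls = torch.randint(0, 2, (4,))
y_reg = torch.rand(4)

def task_loss(pooled):
    return (nn.CrossEntropyLoss()(heads["is_humor"](pooled), y_cls)
            + nn.MSELoss()(heads["rating"](pooled).squeeze(-1), y_reg))

loss_clean = task_loss(embedding(tokens).mean(dim=1))
# FGM: nudge the embeddings along their gradient and train on that pass too,
# so predictions become robust to small adversarial shifts in input space.
grad = torch.autograd.grad(loss_clean, embedding.weight, retain_graph=True)[0]
delta = 1.0 * grad / (grad.norm() + 1e-12)
loss_adv = task_loss((embedding(tokens) + delta[tokens]).mean(dim=1))
(loss_clean + loss_adv).backward()   # one combined update in a real loop
```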
arXiv Detail & Related papers (2021-04-21T03:23:02Z)
- ISCAS at SemEval-2020 Task 5: Pre-trained Transformers for Counterfactual Statement Modeling [48.3669727720486]
ISCAS participated in two subtasks of SemEval 2020 Task 5: detecting counterfactual statements and detecting antecedent and consequence.
This paper describes our system which is based on pre-trained transformers.
arXiv Detail & Related papers (2020-09-17T09:28:07Z)
- SemEval-2020 Task 7: Assessing Humor in Edited News Headlines [9.78014714425501]
This paper describes the SemEval-2020 shared task "Assessing Humor in Edited News Headlines".
The task's dataset contains news headlines in which short edits were applied to make them funny, and the funniness of these edited headlines was rated using crowdsourcing.
To date, this task is the most popular shared computational humor task, attracting 48 teams for the first subtask and 31 teams for the second.
arXiv Detail & Related papers (2020-08-01T17:34:37Z)
- NoPropaganda at SemEval-2020 Task 11: A Borrowed Approach to Sequence Tagging and Text Classification [0.0]
This paper describes our contribution to SemEval-2020 Task 11: Detection Of Propaganda Techniques In News Articles.
We start with simple LSTM baselines and move to an autoregressive transformer decoder to predict long continuous propaganda spans for the first subtask.
For the second subtask, propaganda technique classification, we also adopt an approach from relation extraction, enveloping the spans mentioned above with special tokens.
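The enveloping step is easy to picture; here is a small sketch using hypothetical [SPAN] markers (the authors' actual marker tokens are not given in the summary).

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_special_tokens({"additional_special_tokens": ["[SPAN]", "[/SPAN]"]})

text = "They are destroying everything we hold dear."
start, end = 9, 30  # character offsets of the candidate propaganda span
marked = text[:start] + "[SPAN] " + text[start:end] + " [/SPAN]" + text[end:]
inputs = tokenizer(marked, return_tensors="pt")
# `inputs` then feeds a standard sequence classifier over the 14 technique
# labels; call model.resize_token_embeddings(len(tokenizer)) beforehand so
# the new marker tokens get embedding rows.
```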
arXiv Detail & Related papers (2020-07-25T11:35:57Z)
- LMVE at SemEval-2020 Task 4: Commonsense Validation and Explanation using Pretraining Language Model [5.428461405329692]
This paper describes our submission to Subtasks A and B of SemEval-2020 Task 4.
For Subtask A, we use an ALBERT-based model with an improved input form to pick out the commonsense statement from two candidate statements.
For Subtask B, we use a multiple-choice model enhanced by a hint-sentence mechanism to select, from the given options, the reason why a statement is against common sense.
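For Subtask B, a minimal sketch of such a multiple-choice reader, assuming AlbertForMultipleChoice from Hugging Face transformers and omitting the hint-sentence mechanism; each candidate reason is paired with the nonsensical statement and the highest-scoring option wins.

```python
import torch
from transformers import AlbertForMultipleChoice, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMultipleChoice.from_pretrained("albert-base-v2")

statement = "He put an elephant into the fridge."
reasons = [
    "An elephant is much bigger than a fridge.",
    "Elephants are gray.",
    "A fridge keeps food cold.",
]
enc = tokenizer([statement] * 3, reasons, return_tensors="pt",
                padding=True, truncation=True)
# AlbertForMultipleChoice expects shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
best = model(**inputs).logits.argmax(dim=-1)
print(reasons[best.item()])  # untrained weights: the choice is arbitrary
```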
arXiv Detail & Related papers (2020-07-06T05:51:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.