Team Phoenix at WASSA 2021: Emotion Analysis on News Stories with
Pre-Trained Language Models
- URL: http://arxiv.org/abs/2103.06057v1
- Date: Wed, 10 Mar 2021 14:00:54 GMT
- Title: Team Phoenix at WASSA 2021: Emotion Analysis on News Stories with
Pre-Trained Language Models
- Authors: Yash Butala, Kanishk Singh, Adarsh Kumar and Shrey Shrivastava
- Abstract summary: We describe our system entry for the WASSA 2021 Shared Task.
Our proposed models achieved an Average Pearson Score of 0.417 and a Macro-F1 Score of 0.502 in Track 1 and Track 2, respectively.
- Score: 1.6536018920603175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emotion is fundamental to humanity. The ability to perceive, understand and
respond to social interactions in a human-like manner is one of the most
desired capabilities in artificial agents, particularly in social-media bots.
Over the past few years, computational understanding and detection of emotional
aspects in language have been vital in advancing human-computer interaction.
The WASSA 2021 Shared Task released a dataset of news stories across two
tracks: Track 1 for Empathy and Distress Prediction and Track 2 for
Multi-Dimensional Emotion Prediction at the essay level. We describe our system
entry for the WASSA 2021 Shared Task (covering both tracks), in which we
leveraged pre-trained language models for the track-specific tasks. Our
proposed models achieved an Average Pearson Score of 0.417 in Track 1 and a
Macro-F1 Score of 0.502 in Track 2. On the Shared Task leaderboard, we secured
4th rank in Track 1 and 2nd rank in Track 2.
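The two tracks are scored with different metrics: Pearson correlation for the empathy/distress regression targets and macro-averaged F1 for emotion classification. A minimal, dependency-free sketch of both metrics (not the shared task's official scoring script, which may differ in tie-breaking and label handling):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length score sequences,
    as used for Track 1 empathy/distress predictions."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def macro_f1(gold, pred):
    """Unweighted mean of per-class F1 over the classes present in gold,
    as used for Track 2 emotion labels."""
    f1s = []
    for c in set(gold):
        tp = sum(1 for g, p in zip(gold, pred) if g == c and p == c)
        fp = sum(1 for g, p in zip(gold, pred) if g != c and p == c)
        fn = sum(1 for g, p in zip(gold, pred) if g == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

The "Average Pearson Score" reported above is the mean of the empathy and distress Pearson correlations; macro-F1 weights every emotion class equally regardless of its frequency.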
Related papers
- Towards More Accurate Prediction of Human Empathy and Emotion in Text and Multi-turn Conversations by Combining Advanced NLP, Transformers-based Networks, and Linguistic Methodologies [0.0]
We predict the level of empathic concern and personal distress displayed in essays.
Based on the WASSA 2022 Shared Task on Empathy Detection and Emotion Classification, we implement a Feed-Forward Neural Network.
As part of the final stage, these approaches have been adapted to the WASSA 2023 Shared Task on Empathy Emotion and Personality Detection in Interactions.
arXiv Detail & Related papers (2024-07-26T04:01:27Z) - ThangDLU at #SMM4H 2024: Encoder-decoder models for classifying text data on social disorders in children and adolescents [49.00494558898933]
This paper describes our participation in Task 3 and Task 5 of the #SMM4H (Social Media Mining for Health) 2024 Workshop.
Task 3 is a multi-class classification task centered on tweets discussing the impact of outdoor environments on symptoms of social anxiety.
Task 5 involves a binary classification task focusing on tweets reporting medical disorders in children.
We applied transfer learning from pre-trained encoder-decoder models such as BART-base and T5-small to identify the labels of a set of given tweets.
arXiv Detail & Related papers (2024-04-30T17:06:20Z) - PetKaz at SemEval-2024 Task 3: Advancing Emotion Classification with an LLM for Emotion-Cause Pair Extraction in Conversations [4.463184061618504]
We present our submission to SemEval-2024 Task 3, "The Competition of Multimodal Emotion Cause Analysis in Conversations".
Our approach relies on combining fine-tuned GPT-3.5 for emotion classification and a BiLSTM-based neural network to detect causes.
arXiv Detail & Related papers (2024-04-08T13:25:03Z) - Social-Transmotion: Promptable Human Trajectory Prediction [65.80068316170613]
Social-Transmotion is a generic Transformer-based model that exploits diverse and numerous visual cues to predict human behavior.
Our approach is validated on multiple datasets, including JTA, JRDB, Pedestrians and Cyclists in Road Traffic, and ETH-UCY.
arXiv Detail & Related papers (2023-12-26T18:56:49Z) - Bag of Tricks for Effective Language Model Pretraining and Downstream
Adaptation: A Case Study on GLUE [93.98660272309974]
This report briefly describes our submission Vega v1 on the General Language Understanding Evaluation leaderboard.
GLUE is a collection of nine natural language understanding tasks, including question answering, linguistic acceptability, sentiment analysis, text similarity, paraphrase detection, and natural language inference.
With our optimized pretraining and fine-tuning strategies, our 1.3 billion model sets new state-of-the-art on 4/9 tasks, achieving the best average score of 91.3.
arXiv Detail & Related papers (2023-02-18T09:26:35Z) - BJTU-WeChat's Systems for the WMT22 Chat Translation Task [66.81525961469494]
This paper introduces the joint submission of the Beijing Jiaotong University and WeChat AI to the WMT'22 chat translation task for English-German.
Based on the Transformer, we apply several effective variants.
Our systems achieve 0.810 and 0.946 COMET scores.
arXiv Detail & Related papers (2022-11-28T02:35:04Z) - WASSA@IITK at WASSA 2021: Multi-task Learning and Transformer Finetuning
for Emotion Classification and Empathy Prediction [0.0]
This paper describes our contribution to the WASSA 2021 shared task on Empathy Prediction and Emotion Classification.
The broad goal of this task was to model an empathy score, a distress score and the overall level of emotion of an essay written in response to a newspaper article associated with harm to someone.
We used the ELECTRA model extensively, along with advanced deep learning approaches such as multi-task learning.
arXiv Detail & Related papers (2021-04-20T08:24:10Z) - PVG at WASSA 2021: A Multi-Input, Multi-Task, Transformer-Based
Architecture for Empathy and Distress Prediction [0.0]
We propose a multi-input, multi-task framework for the task of empathy score prediction.
For the distress score prediction task, the system is boosted by the addition of lexical features.
Our submission ranked 1st on the average correlation (0.545) as well as the distress correlation (0.574), and 2nd on the empathy Pearson correlation (0.517).
arXiv Detail & Related papers (2021-03-04T20:12:25Z) - NEMO: Frequentist Inference Approach to Constrained Linguistic Typology
Feature Prediction in SIGTYP 2020 Shared Task [83.43738174234053]
We employ frequentist inference to represent correlations between typological features and use this representation to train simple multi-class estimators that predict individual features.
Our best configuration achieved the micro-averaged accuracy score of 0.66 on 149 test languages.
arXiv Detail & Related papers (2020-10-12T19:25:43Z) - WOLI at SemEval-2020 Task 12: Arabic Offensive Language Identification
on Different Twitter Datasets [0.0]
A key to fighting offensive language on social media is an automatic offensive language detection system.
In this paper, we describe the system submitted by WideBot AI Lab for the shared task, which ranked 10th out of 52 participants with a Macro-F1 of 86.9%.
We also introduce a neural network approach, combining CNN, highway network, Bi-LSTM, and attention layers, that enhanced the predictive ability of our system.
arXiv Detail & Related papers (2020-09-11T14:10:03Z) - It Is Not the Journey but the Destination: Endpoint Conditioned
Trajectory Prediction [59.027152973975575]
We present the Predicted Endpoint Conditioned Network (PECNet) for flexible human trajectory prediction.
PECNet infers distant endpoints to assist in long-range multi-modal trajectory prediction.
We show that PECNet improves state-of-the-art performance on the Stanford Drone trajectory prediction benchmark by 20.9% and on the ETH/UCY benchmark by 40.8%.
arXiv Detail & Related papers (2020-04-04T21:27:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.