Enhancing Emotion Prediction in News Headlines: Insights from ChatGPT and Seq2Seq Models for Free-Text Generation
- URL: http://arxiv.org/abs/2407.10091v1
- Date: Sun, 14 Jul 2024 06:04:11 GMT
- Title: Enhancing Emotion Prediction in News Headlines: Insights from ChatGPT and Seq2Seq Models for Free-Text Generation
- Authors: Ge Gao, Jongin Kim, Sejin Paik, Ekaterina Novozhilova, Yi Liu, Sarah T. Bonna, Margrit Betke, Derry Tanti Wijaya
- Abstract summary: We use people's free-text explanations of how they feel after reading a news headline.
For emotion classification, the free-text explanations have a strong correlation with the dominant emotion elicited by the headlines.
Under McNemar's significance test, methods that incorporate GPT-generated free-text explanations show significant improvement over headline-only methods.
- Score: 19.520889098893395
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting emotions elicited by news headlines can be challenging as the task is largely influenced by the varying nature of people's interpretations and backgrounds. Previous works have explored classifying discrete emotions directly from news headlines. We provide a different approach to tackling this problem by utilizing people's free-text explanations of how they feel after reading a news headline. Using the dataset BU-NEmo+ (Gao et al., 2022), we found that for emotion classification, the free-text explanations have a strong correlation with the dominant emotion elicited by the headlines. The free-text explanations also contain more sentimental context than the news headlines alone and can serve as a better input to emotion classification models. Therefore, in this work we explored generating emotion explanations from headlines by training a sequence-to-sequence transformer model and by using a pretrained large language model, ChatGPT (GPT-4). We then used the generated emotion explanations for emotion classification. In addition, we also experimented with training the pretrained T5 model on the intermediate task of explanation generation before fine-tuning it for emotion classification. Using McNemar's significance test, methods that incorporate GPT-generated free-text emotion explanations demonstrated significant improvement (P-value < 0.05) in emotion classification from headlines, compared to methods that only use headlines. This underscores the value of using intermediate free-text explanations for emotion prediction tasks with headlines.
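To make the pipeline concrete, the sketch below (not the authors' code) chains the two stages the abstract describes: a seq2seq model generates a free-text emotion explanation from a headline, and a classifier predicts the dominant emotion from that explanation; the paired correctness of a headline-only method and an explanation-based method is then compared with McNemar's exact test. It assumes Hugging Face transformers and statsmodels, and the checkpoint names, prompt wording, and label set are placeholders rather than the models trained in the paper.

```python
from transformers import pipeline
from statsmodels.stats.contingency_tables import mcnemar

# Stage 1: generate a free-text emotion explanation for a headline.
# "t5-base" is only a placeholder; the paper fine-tunes a seq2seq model on
# BU-NEmo+ explanations (or prompts GPT-4) for this step.
explainer = pipeline("text2text-generation", model="t5-base")

# Stage 2: classify the dominant emotion from the generated explanation.
# The checkpoint below is a public emotion classifier used as a stand-in
# for the paper's fine-tuned classifier.
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)

def predict_emotion(headline: str) -> str:
    """Headline -> generated explanation -> predicted dominant emotion."""
    prompt = f"explain the emotion this headline elicits: {headline}"
    explanation = explainer(prompt, max_new_tokens=64)[0]["generated_text"]
    return classifier(explanation)[0]["label"]

def mcnemar_pvalue(correct_a: list[bool], correct_b: list[bool]) -> float:
    """Exact McNemar's test on the paired per-headline correctness of two
    methods (e.g., headline-only vs. explanation-augmented classification)."""
    both = sum(a and b for a, b in zip(correct_a, correct_b))
    only_a = sum(a and not b for a, b in zip(correct_a, correct_b))
    only_b = sum(b and not a for a, b in zip(correct_a, correct_b))
    neither = sum(not (a or b) for a, b in zip(correct_a, correct_b))
    return mcnemar([[both, only_a], [only_b, neither]], exact=True).pvalue
```

A p-value below 0.05 from `mcnemar_pvalue` corresponds to the significance threshold the paper reports for explanation-augmented methods.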
Related papers
- Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with the features that models deem salient when predicting emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not considered salient features by emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection.
arXiv Detail & Related papers (2023-11-16T06:20:13Z)
- Towards Emotion-Based Synthetic Consciousness: Using LLMs to Estimate Emotion Probability Vectors [0.32634122554913997]
This paper shows how LLMs may be used to estimate a summary of the emotional state associated with a piece of text.
The summary of emotional state is a dictionary of words used to describe emotion together with the probability of the word appearing after a prompt.
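As an illustration of the readout this entry describes, the toy sketch below (an assumption, not the paper's code) computes the next-token probability of a small set of emotion words after a prompt using GPT-2; the prompt wording and the emotion word list are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small placeholder causal LM; the paper applies the same readout to larger LLMs.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

EMOTION_WORDS = ["anger", "fear", "joy", "sadness", "surprise", "disgust"]

def emotion_probability_vector(text: str) -> dict[str, float]:
    """Probability of each emotion word appearing right after the prompt."""
    prompt = f"{text}\nReading this, I feel"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token logits
    probs = torch.softmax(logits, dim=-1)
    vector = {}
    for word in EMOTION_WORDS:
        # Use the first BPE sub-token of " word" as a proxy for the word.
        token_id = tokenizer.encode(" " + word)[0]
        vector[word] = probs[token_id].item()
    return vector
```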
arXiv Detail & Related papers (2023-10-09T13:29:36Z)
- Emotion and Sentiment Guided Paraphrasing [3.5027291542274366]
We introduce a new task of fine-grained emotional paraphrasing along emotion gradients.
We reconstruct several widely used paraphrasing datasets by augmenting the input and target texts with their fine-grained emotion labels.
We propose a framework for emotion and sentiment guided paraphrasing by leveraging pre-trained language models for conditioned text generation.
arXiv Detail & Related papers (2023-06-08T20:59:40Z)
- Unsupervised Extractive Summarization of Emotion Triggers [56.50078267340738]
We develop new unsupervised learning models that can jointly detect emotions and summarize their triggers.
Our best approach, entitled Emotion-Aware Pagerank, incorporates emotion information from external sources combined with a language understanding module.
arXiv Detail & Related papers (2023-06-02T11:07:13Z)
- Automatic Emotion Experiencer Recognition [12.447379545167642]
We show that experiencer detection in text is a challenging task, with a precision of .82 and a recall of .56 (F1 = .66).
arXiv Detail & Related papers (2023-05-26T08:33:28Z)
- Why Do You Feel This Way? Summarizing Triggers of Emotions in Social Media Posts [61.723046082145416]
We introduce CovidET (Emotions and their Triggers during Covid-19), a dataset of 1,900 English Reddit posts related to COVID-19.
We develop strong baselines to jointly detect emotions and summarize emotion triggers.
Our analyses show that CovidET presents new challenges in emotion-specific summarization, as well as multi-emotion detection in long social media posts.
arXiv Detail & Related papers (2022-10-22T19:10:26Z)
- CEFER: A Four Facets Framework based on Context and Emotion embedded features for Implicit and Explicit Emotion Recognition [2.5137859989323537]
We propose a framework that analyses text at both the sentence and word levels.
We name it CEFER (Context and Emotion embedded Framework for Emotion Recognition)
CEFER combines the emotional vector of each word, including explicit and implicit emotions, with the feature vector of each word based on context.
arXiv Detail & Related papers (2022-09-28T11:16:32Z)
- Experiencers, Stimuli, or Targets: Which Semantic Roles Enable Machine Learning to Infer the Emotions? [9.374871304813638]
We train emotion classification models on annotated datasets with at least one semantic role.
We find that across multiple corpora, stimuli and targets carry emotion information, while the experiencer might be considered a confounder.
arXiv Detail & Related papers (2020-11-03T10:01:44Z)
- Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle low-resource multimodal emotion recognition.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)
- Emotion Carrier Recognition from Personal Narratives [74.24768079275222]
Personal Narratives (PNs) are recollections of facts, events, and thoughts from one's own experience.
We propose a novel task for Narrative Understanding: Emotion Carrier Recognition (ECR)
arXiv Detail & Related papers (2020-08-17T17:16:08Z)
- Annotation of Emotion Carriers in Personal Narratives [69.07034604580214]
We are interested in the problem of understanding personal narratives (PN): spoken or written recollections of facts, events, and thoughts.
In PN, emotion carriers are the speech or text segments that best explain the emotional state of the user.
This work proposes and evaluates an annotation model for identifying emotion carriers in spoken personal narratives.
arXiv Detail & Related papers (2020-02-27T15:42:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.