Incorporating Emotions into Health Mention Classification Task on Social
Media
- URL: http://arxiv.org/abs/2212.05039v1
- Date: Fri, 9 Dec 2022 18:38:41 GMT
- Title: Incorporating Emotions into Health Mention Classification Task on Social
Media
- Authors: Olanrewaju Tahir Aduragba, Jialin Yu and Alexandra I. Cristea
- Abstract summary: We present a framework for health mention classification that incorporates affective features.
We evaluate our approach on 5 HMC-related datasets from different social media platforms.
Our results indicate that HMC models infused with emotional knowledge are an effective alternative.
- Score: 70.23889100356091
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The health mention classification (HMC) task is the process of identifying
and classifying mentions of health-related concepts in text. This can be useful
for identifying and tracking the spread of diseases through social media posts.
However, this is a non-trivial task. Here we build on recent studies
suggesting that emotional information may improve performance on this task.
We develop a framework for health mention classification that incorporates
affective features, and we present two methods for incorporating emotions
into the target HMC task: an intermediate task fine-tuning approach
(implicit) and a multi-feature fusion approach (explicit). We evaluate our
approach on 5 HMC-related datasets from different social media platforms:
three from Twitter, one from Reddit, and one drawn from a combination of
social media sources. Extensive experiments demonstrate that our approach
results in
statistically significant performance gains on HMC tasks. By using the
multi-feature fusion approach, we achieve at least a 3% improvement in F1 score
over BERT baselines across all datasets. We also show that considering only
negative emotions does not significantly affect performance on the HMC task.
Additionally, our results indicate that HMC models infused with emotional
knowledge are an effective alternative, especially when other HMC datasets are
unavailable for domain-specific fine-tuning. The source code for our models is
freely available at https://github.com/tahirlanre/Emotion_PHM.
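To make the explicit (multi-feature fusion) method concrete, here is a
minimal sketch in PyTorch of the idea the abstract describes: a transformer
encoder's [CLS] representation is concatenated with an affective feature
vector before classification. This is an illustrative reconstruction, not
the authors' code; the class name, the 8-dimensional emotion feature
vector, and the shape of the classifier head are assumptions, and the real
implementation is in the repository linked above.

```python
# Illustrative sketch (not the authors' implementation) of explicit
# multi-feature fusion for HMC: concatenate a BERT [CLS] embedding with
# an affective feature vector, then classify. EmotionFusionClassifier
# and num_emotion_features are hypothetical names.
import torch
import torch.nn as nn
from transformers import AutoModel

class EmotionFusionClassifier(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased",
                 num_emotion_features=8, num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Fusion head: the concatenated vector (contextual + affective)
        # is projected back down and mapped to the HMC labels.
        self.classifier = nn.Sequential(
            nn.Linear(hidden + num_emotion_features, hidden),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden, num_labels),
        )

    def forward(self, input_ids, attention_mask, emotion_feats):
        # emotion_feats: [batch, num_emotion_features], e.g. scores from
        # an emotion lexicon or a pretrained emotion classifier.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]               # [batch, hidden]
        fused = torch.cat([cls, emotion_feats], dim=-1)
        return self.classifier(fused)                   # [batch, num_labels]
```

The implicit variant needs no new architecture: under the same assumptions,
the encoder is first fine-tuned on an emotion classification corpus (the
intermediate task) and only then fine-tuned on the target HMC dataset.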
Related papers
- What Makes Good Collaborative Views? Contrastive Mutual Information Maximization for Multi-Agent Perception [52.41695608928129]
Multi-agent perception (MAP) allows autonomous systems to understand complex environments by interpreting data from multiple sources.
This paper investigates intermediate collaboration for MAP, with a specific focus on exploring "good" properties of collaborative views.
We propose a novel framework named CMiMC for intermediate collaboration.
arXiv Detail & Related papers (2024-03-15T07:18:55Z)
- A Two-Stage Multimodal Emotion Recognition Model Based on Graph Contrastive Learning [13.197551708300345]
We propose a two-stage emotion recognition model based on graph contrastive learning (TS-GCL).
We show that TS-GCL has superior performance on IEMOCAP and MELD datasets compared with previous methods.
arXiv Detail & Related papers (2024-01-03T01:58:31Z)
- Deep Imbalanced Learning for Multimodal Emotion Recognition in Conversations [15.705757672984662]
Multimodal Emotion Recognition in Conversations (MERC) is an important research direction for machine intelligence.
Data in MERC naturally exhibit an imbalanced distribution of emotion categories, yet researchers often ignore the negative impact of imbalanced data on emotion recognition.
We propose the Class Boundary Enhanced Representation Learning (CBERL) model to address the imbalanced distribution of emotion categories in raw data.
We have conducted extensive experiments on the IEMOCAP and MELD benchmark datasets, and the results show that CBERL achieves a measurable improvement in emotion recognition effectiveness.
arXiv Detail & Related papers (2023-12-11T12:35:17Z)
- Data Augmentation for Emotion Detection in Small Imbalanced Text Data [0.0]
One of the challenges is the shortage of available datasets that have been annotated with emotions.
We study the impact of data augmentation techniques when applied specifically to small, imbalanced datasets.
Our experimental results show that using the augmented data when training the classifier model leads to significant improvements.
arXiv Detail & Related papers (2023-10-25T21:29:36Z)
- TMR: Text-to-Motion Retrieval Using Contrastive 3D Human Motion Synthesis [59.465092047829835]
We present TMR, a simple yet effective approach for text to 3D human motion retrieval.
Our method extends the state-of-the-art text-to-motion synthesis model TEMOS.
We show that maintaining the motion generation loss, along with the contrastive training, is crucial to obtain good performance.
arXiv Detail & Related papers (2023-05-02T17:52:41Z)
- A Marker-based Neural Network System for Extracting Social Determinants of Health [12.6970199179668]
The impact of social determinants of health (SDoH) on patients' healthcare quality and on health disparities is well-known.
Many SDoH items are not coded in structured forms in electronic health records.
We explore a multi-stage pipeline involving named entity recognition (NER), relation classification (RC), and text classification methods to extract SDoH information from clinical notes automatically.
arXiv Detail & Related papers (2022-12-24T18:40:23Z)
- Nested Named Entity Recognition from Medical Texts: An Adaptive Shared Network Architecture with Attentive CRF [53.55504611255664]
We propose a novel method, referred to as ASAC, to solve the dilemma caused by nested entities.
The proposed method contains two key modules: the adaptive shared (AS) part and the attentive conditional random field (ACRF) module.
Our model could learn better entity representations by capturing the implicit distinctions and relationships between different categories of entities.
arXiv Detail & Related papers (2022-11-09T09:23:56Z)
- Multimodal Emotion Recognition with Modality-Pairwise Unsupervised Contrastive Loss [80.79641247882012]
We focus on unsupervised feature learning for Multimodal Emotion Recognition (MER).
We consider discrete emotions and use text, audio and vision as modalities.
Our method, based on a contrastive loss between pairwise modalities, is the first such attempt in the MER literature.
arXiv Detail & Related papers (2022-07-23T10:11:24Z)
- Adding more data does not always help: A study in medical conversation summarization with PEGASUS [5.276054618115727]
We study the effect of dataset size on transfer learning for medical conversation summarization using PEGASUS.
We also evaluate various iterative labeling strategies in the low-data regime, following their success in the classification setting.
Our work sheds light on the successes and challenges of translating low-data regime techniques in classification to medical conversation summarization.
arXiv Detail & Related papers (2021-11-15T07:27:35Z)
- ConsNet: Learning Consistency Graph for Zero-Shot Human-Object Interaction Detection [101.56529337489417]
We consider the problem of Human-Object Interaction (HOI) Detection, which aims to locate and recognize HOI instances in the form of <human, action, object> in images.
We argue that multi-level consistencies among objects, actions and interactions are strong cues for generating semantic representations of rare or previously unseen HOIs.
Our model takes visual features of candidate human-object pairs and word embeddings of HOI labels as inputs, maps them into visual-semantic joint embedding space and obtains detection results by measuring their similarities.
arXiv Detail & Related papers (2020-08-14T09:11:18Z)