Emotion Detection From Social Media Posts
- URL: http://arxiv.org/abs/2302.05610v1
- Date: Sat, 11 Feb 2023 05:52:33 GMT
- Title: Emotion Detection From Social Media Posts
- Authors: Md Mahbubur Rahman, Shaila Shova
- Abstract summary: We address the topic of identifying emotions from text data obtained from social media posts like Twitter.
We have deployed different traditional machine learning techniques such as Support Vector Machines (SVM), Naive Bayes, Decision Trees, and Random Forest, as well as deep neural network models such as LSTM, CNN, GRU, BiLSTM, BiGRU.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the last few years, social media has evolved into a medium for
expressing personal views, emotions, and even business and political proposals,
recommendations, and advertisements. We address the topic of identifying
emotions from text data obtained from social media posts like Twitter in this
research. We have deployed different traditional machine learning techniques
such as Support Vector Machines (SVM), Naive Bayes, Decision Trees, and Random
Forest, as well as deep neural network models such as LSTM, CNN, GRU, BiLSTM,
BiGRU to classify these tweets into four emotion categories (Fear, Anger, Joy,
and Sadness). Furthermore, we have constructed a BiLSTM and BiGRU ensemble
model. The evaluation results show that the deep neural network models (BiGRU
in particular) produce the most promising results compared to the traditional
machine learning models, with an 87.53% accuracy rate. The ensemble model
performs even better (87.66%), though the difference is not significant. This
result will aid in the development of a decision-making tool that visualizes
emotional fluctuations.
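The abstract does not specify how the BiLSTM and BiGRU outputs are combined into an ensemble. A common choice, sketched here purely as an assumption, is soft voting: average the per-class probability vectors from the two networks and take the argmax over the four emotion categories. The probability values below are hypothetical.

```python
import numpy as np

EMOTIONS = ["Fear", "Anger", "Joy", "Sadness"]

def soft_vote(p_bilstm, p_bigru):
    """Average per-class probabilities from two networks and
    pick the most likely emotion for each tweet."""
    avg = (np.asarray(p_bilstm) + np.asarray(p_bigru)) / 2.0  # (n_tweets, 4)
    return [EMOTIONS[i] for i in avg.argmax(axis=1)]

# Hypothetical probability outputs for two tweets (rows sum to 1).
p_bilstm = [[0.10, 0.20, 0.60, 0.10],   # BiLSTM leans Joy
            [0.70, 0.10, 0.10, 0.10]]   # BiLSTM leans Fear
p_bigru  = [[0.05, 0.15, 0.70, 0.10],
            [0.60, 0.20, 0.10, 0.10]]
print(soft_vote(p_bilstm, p_bigru))     # → ['Joy', 'Fear']
```

Soft voting lets a confident model outvote an uncertain one, which is one plausible reason the ensemble edges out either network alone.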
Related papers
- Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks [0.0]
This article uses the Twitter social network, one of the most popular social networks with about 420 million active users, to extract data.
In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users.
arXiv Detail & Related papers (2025-03-26T02:39:54Z) - Leveraging Cross-Attention Transformer and Multi-Feature Fusion for Cross-Linguistic Speech Emotion Recognition [60.58049741496505]
Speech Emotion Recognition (SER) plays a crucial role in enhancing human-computer interaction.
We propose a novel approach HuMP-CAT, which combines HuBERT, MFCC, and prosodic characteristics.
We show that, by fine-tuning the source model with a small portion of speech from the target datasets, HuMP-CAT achieves an average accuracy of 78.75%.
arXiv Detail & Related papers (2025-01-06T14:31:25Z) - MEMO-Bench: A Multiple Benchmark for Text-to-Image and Multimodal Large Language Models on Human Emotion Analysis [53.012111671763776]
This study introduces MEMO-Bench, a comprehensive benchmark consisting of 7,145 portraits, each depicting one of six different emotions.
Results demonstrate that existing T2I models are more effective at generating positive emotions than negative ones.
Although MLLMs show a certain degree of effectiveness in distinguishing and recognizing human emotions, they fall short of human-level accuracy.
arXiv Detail & Related papers (2024-11-18T02:09:48Z) - Emotion Detection in Reddit: Comparative Study of Machine Learning and Deep Learning Techniques [0.0]
This study concentrates on text-based emotion detection by leveraging the GoEmotions dataset.
We employed a range of models for this task, including six machine learning models, three ensemble models, and a Long Short-Term Memory (LSTM) model.
Results indicate that the Stacking classifier outperforms other models in accuracy and performance.
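The stacking idea this study reports can be sketched as follows: base learners are fit first, and their predictions become features for a meta-learner. The synthetic data and the particular model choices below are illustrative assumptions, not the paper's configuration.

```python
# Minimal stacking sketch: base learners feed a logistic-regression
# meta-learner. Synthetic 4-class data stands in for GoEmotions features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=400, n_features=20,
                           n_informative=10, n_classes=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("nb", GaussianNB())],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

The meta-learner can learn which base model to trust on which regions of the input, which is the usual explanation for stacking outperforming its individual components.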
arXiv Detail & Related papers (2024-11-15T16:28:25Z) - Deep Imbalanced Learning for Multimodal Emotion Recognition in
Conversations [15.705757672984662]
Multimodal Emotion Recognition in Conversations (MERC) is a significant development direction for machine intelligence.
Much of the data in MERC naturally exhibits an imbalanced distribution of emotion categories, yet researchers have largely ignored the negative impact of imbalanced data on emotion recognition.
We propose the Class Boundary Enhanced Representation Learning (CBERL) model to address the imbalanced distribution of emotion categories in raw data.
We have conducted extensive experiments on the IEMOCAP and MELD benchmark datasets, and the results show that CBERL yields a measurable improvement in emotion recognition performance.
arXiv Detail & Related papers (2023-12-11T12:35:17Z) - Socratis: Are large multimodal models emotionally aware? [63.912414283486555]
Existing emotion prediction benchmarks do not consider the diversity of emotions that an image and text can elicit in humans for a variety of reasons.
We propose Socratis, a societal reactions benchmark, where each image-caption (IC) pair is annotated with multiple emotions and the reasons for feeling them.
We benchmark the capability of state-of-the-art multimodal large language models to generate the reasons for feeling an emotion given an IC pair.
arXiv Detail & Related papers (2023-08-31T13:59:35Z) - Political Sentiment Analysis of Persian Tweets Using CNN-LSTM Model [0.356008609689971]
We present several machine learning models and a deep learning model to analyze the sentiment of Persian political tweets.
Deep learning with ParsBERT embeddings outperforms the machine learning models.
arXiv Detail & Related papers (2023-07-15T08:08:38Z) - Depression detection in social media posts using affective and social
norm features [84.12658971655253]
We propose a deep architecture for depression detection from social media posts.
We incorporate profanity and morality features of posts and words in our architecture using a late fusion scheme.
The inclusion of the proposed features yields state-of-the-art results in both settings.
arXiv Detail & Related papers (2023-03-24T21:26:27Z) - Emotion Detection From Tweets Using a BERT and SVM Ensemble Model [0.0]
We investigate the use of Support Vector Machines (SVM) and Bidirectional Encoder Representations from Transformers (BERT) for emotion recognition.
We propose a novel ensemble model by combining the two BERT and SVM models.
Experiments show that the proposed model achieves a state-of-the-art accuracy of 0.91 on emotion recognition in tweets.
arXiv Detail & Related papers (2022-08-09T05:32:29Z) - Multimodal Emotion Recognition using Transfer Learning from Speaker
Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z) - EMOVIE: A Mandarin Emotion Speech Dataset with a Simple Emotional
Text-to-Speech Model [56.75775793011719]
We introduce and publicly release a Mandarin emotion speech dataset including 9,724 samples with audio files and its emotion human-labeled annotation.
Unlike models that need additional reference audio as input, ours predicts emotion labels directly from the input text and generates more expressive speech conditioned on the emotion embedding.
In the experiment phase, we first validate the effectiveness of our dataset by an emotion classification task. Then we train our model on the proposed dataset and conduct a series of subjective evaluations.
arXiv Detail & Related papers (2021-06-17T08:34:21Z) - Towards Emotion Recognition in Hindi-English Code-Mixed Data: A
Transformer Based Approach [0.0]
We present a Hinglish dataset labelled for emotion detection.
We highlight a deep learning based approach for detecting emotions in Hindi-English code mixed tweets.
arXiv Detail & Related papers (2021-02-19T14:07:20Z) - Modality-Transferable Emotion Embeddings for Low-Resource Multimodal
Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.