Emotion Detection From Tweets Using a BERT and SVM Ensemble Model
- URL: http://arxiv.org/abs/2208.04547v1
- Date: Tue, 9 Aug 2022 05:32:29 GMT
- Title: Emotion Detection From Tweets Using a BERT and SVM Ensemble Model
- Authors: Ionuț-Alexandru Albu, Stelian Spînu
- Abstract summary: We investigate the use of Support Vector Machine (SVM) and Bidirectional Encoder Representations from Transformers (BERT) for emotion recognition.
We propose a novel ensemble model that combines the BERT and SVM models.
Experiments show that the proposed model achieves a state-of-the-art accuracy of 0.91 on emotion recognition in tweets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Automatic identification of emotions expressed in Twitter data has a wide
range of applications. We create a well-balanced dataset by adding a neutral
class to a benchmark dataset consisting of four emotions: fear, sadness, joy,
and anger. On this extended dataset, we investigate the use of Support Vector
Machine (SVM) and Bidirectional Encoder Representations from Transformers
(BERT) for emotion recognition. We propose a novel ensemble model that combines
the BERT and SVM models. Experiments show that the proposed model achieves
a state-of-the-art accuracy of 0.91 on emotion recognition in tweets.
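To make the described pipeline concrete, below is a minimal sketch of a BERT + SVM soft-voting ensemble. The combination scheme (probability averaging), the TF-IDF features for the SVM branch, the toy training data, and the model checkpoint name are all illustrative assumptions; the abstract does not specify these details.

```python
# Minimal sketch of a BERT + SVM ensemble via probability averaging.
# Combination scheme, features, toy data, and checkpoint are assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from transformers import pipeline

LABELS = ["fear", "sadness", "joy", "anger", "neutral"]

# SVM branch: TF-IDF features with a probability-calibrated SVM.
train_texts = [
    "i am terrified of tomorrow", "so scared right now",   # fear
    "feeling really down today", "tears again",            # sadness
    "what a wonderful day", "pure joy right now",          # joy
    "this makes my blood boil", "utterly furious",         # anger
    "the meeting is at noon", "posting the schedule",      # neutral
]
train_labels = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
vectorizer = TfidfVectorizer()
svm = SVC(probability=True).fit(vectorizer.fit_transform(train_texts), train_labels)

# BERT branch: any 5-class fine-tuned sequence classifier whose label
# order matches LABELS ("some-user/..." is a hypothetical checkpoint).
bert = pipeline("text-classification",
                model="some-user/bert-tweet-emotion",
                return_all_scores=True)

def predict(text: str) -> str:
    p_svm = svm.predict_proba(vectorizer.transform([text]))[0]
    p_bert = np.array([s["score"] for s in bert(text)[0]])
    return LABELS[int(np.argmax((p_svm + p_bert) / 2.0))]  # soft voting
```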
Related papers
- MEMO-Bench: A Multiple Benchmark for Text-to-Image and Multimodal Large Language Models on Human Emotion Analysis [53.012111671763776]
This study introduces MEMO-Bench, a comprehensive benchmark consisting of 7,145 portraits, each depicting one of six different emotions.
Results demonstrate that existing T2I models are more effective at generating positive emotions than negative ones.
Although MLLMs show a certain degree of effectiveness in distinguishing and recognizing human emotions, they fall short of human-level accuracy.
arXiv Detail & Related papers (2024-11-18T02:09:48Z)
- Emotion Detection in Reddit: Comparative Study of Machine Learning and Deep Learning Techniques [0.0]
This study concentrates on text-based emotion detection by leveraging the GoEmotions dataset.
We employed a range of models for this task, including six machine learning models, three ensemble models, and a Long Short-Term Memory (LSTM) model.
Results indicate that the Stacking classifier outperforms other models in accuracy and performance.
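The stacking setup can be sketched with scikit-learn as below; the base learners and features here are illustrative placeholders, not necessarily the models used in the study.

```python
# Minimal stacking sketch; base learners, features, and toy data are
# placeholders, not the study's configuration.
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["i am so happy", "joy everywhere", "this is infuriating", "utterly furious"]
labels = [1, 1, 0, 0]  # toy two-class data for illustration

stack = make_pipeline(
    TfidfVectorizer(),
    StackingClassifier(
        estimators=[("rf", RandomForestClassifier()), ("nb", MultinomialNB())],
        final_estimator=LogisticRegression(),  # meta-learner on base outputs
        cv=2,  # small cv only because the toy dataset is tiny
    ),
)
stack.fit(texts, labels)
print(stack.predict(["what a great surprise"]))
```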
arXiv Detail & Related papers (2024-11-15T16:28:25Z)
- Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with features deemed salient in their prediction of emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not considered salient features by emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection.
arXiv Detail & Related papers (2023-11-16T06:20:13Z)
- Unified Visual Relationship Detection with Vision and Language Models [89.77838890788638]
This work focuses on training a single visual relationship detector predicting over the union of label spaces from multiple datasets.
We propose UniVRD, a novel bottom-up method for Unified Visual Relationship Detection by leveraging vision and language models.
Empirical results on both human-object interaction detection and scene-graph generation demonstrate the competitive performance of our model.
arXiv Detail & Related papers (2023-03-16T00:06:28Z)
- Persian Emotion Detection using ParsBERT and Imbalanced Data Handling Approaches [0.0]
EmoPars and ArmanEmo are two new human-labeled emotion datasets for the Persian language.
We evaluate EmoPars and compare it with ArmanEmo.
Our model reaches a Macro-averaged F1-score of 0.81 and 0.76 on ArmanEmo and EmoPars, respectively.
arXiv Detail & Related papers (2022-11-15T10:22:49Z)
- DeepEmotex: Classifying Emotion in Text Messages using Deep Transfer Learning [0.0]
We propose DeepEmotex, an effective sequential transfer learning method to detect emotion in text.
We conduct an experimental study using both curated Twitter datasets and benchmark datasets.
DeepEmotex models achieve over 91% accuracy for multi-class emotion classification on the test dataset.
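Sequential transfer learning, in its generic form, can be sketched as follows; DeepEmotex's exact source tasks, checkpoints, and hyper-parameters are not given in this summary, so everything below is an assumed general recipe.

```python
# Generic sequential transfer learning sketch (assumed recipe, not
# DeepEmotex's exact procedure): fine-tune an encoder on a source task,
# then swap in a fresh head and fine-tune again on the emotion task.
import torch.nn as nn
from transformers import AutoModelForSequenceClassification

# Stage 1: start from a pre-trained encoder with a source-task head
# (e.g. binary sentiment); the fine-tuning loop itself is omitted.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
# ... fine-tune on source-task data here ...

# Stage 2: keep the tuned encoder, replace the head for 6 emotion classes.
model.classifier = nn.Linear(model.config.hidden_size, 6)
model.num_labels = model.config.num_labels = 6
# ... fine-tune on emotion-labelled data here ...
```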
arXiv Detail & Related papers (2022-06-12T03:23:40Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset.
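Late fusion means each modality is classified independently and only the output distributions are merged; a minimal sketch follows (the fusion weights are an assumption, not taken from the paper).

```python
# Late-fusion sketch: each modality model emits class probabilities
# independently; only the outputs are combined (weights are assumed).
import numpy as np

def late_fusion(p_speech: np.ndarray, p_text: np.ndarray,
                w_speech: float = 0.5) -> int:
    """Weighted average of per-modality class probabilities."""
    fused = w_speech * p_speech + (1.0 - w_speech) * p_text
    return int(np.argmax(fused))

# e.g. 4 IEMOCAP-style classes: angry, happy, neutral, sad
p_speech = np.array([0.10, 0.60, 0.20, 0.10])  # from a speech model
p_text   = np.array([0.05, 0.30, 0.50, 0.15])  # from a text (BERT) model
print(late_fusion(p_speech, p_text))           # -> 1 ("happy")
```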
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition [118.73025093045652]
We propose a pre-training model, MEmoBERT, for multimodal emotion recognition.
Unlike the conventional "pre-train, finetune" paradigm, we propose a prompt-based method that reformulates the downstream emotion classification task as a masked text prediction task.
Our proposed MEmoBERT significantly enhances emotion recognition performance.
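The prompt-based reformulation can be illustrated in text-only form as below; the prompt template and the use of a generic BERT checkpoint are assumptions, since MEmoBERT itself is multimodal and pre-trained differently.

```python
# Text-only illustration of emotion classification as masked-token
# prediction (template and checkpoint are assumptions, not MEmoBERT's).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
EMOTIONS = ["angry", "happy", "sad", "neutral"]

def classify(utterance: str) -> str:
    prompt = f"{utterance} I feel [MASK]."
    candidates = fill(prompt, targets=EMOTIONS)  # score only emotion words
    return max(candidates, key=lambda c: c["score"])["token_str"]

print(classify("My flight got cancelled again."))
```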
arXiv Detail & Related papers (2021-10-27T09:57:00Z)
- Multimodal Emotion Recognition with High-level Speech and Text Features [8.141157362639182]
We propose a novel cross-representation speech model to perform emotion recognition on wav2vec 2.0 speech features.
We also train a CNN-based model to recognize emotions from text features extracted with Transformer-based models.
Our method is evaluated on the IEMOCAP dataset in a 4-class classification problem.
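Extracting wav2vec 2.0 speech features with HuggingFace Transformers can be sketched as follows; the checkpoint choice and the mean pooling are assumptions, and the paper's cross-representation model itself is not reproduced here.

```python
# Sketch: frame-level wav2vec 2.0 features via HuggingFace Transformers
# (checkpoint and pooling are assumptions, not the paper's exact setup).
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")

waveform = torch.randn(16000).numpy()  # 1 s of dummy 16 kHz audio
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    frames = model(**inputs).last_hidden_state  # (1, ~49 frames, 768)
utterance_vec = frames.mean(dim=1)  # simple mean pooling over time (assumed)
print(utterance_vec.shape)          # torch.Size([1, 768])
```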
arXiv Detail & Related papers (2021-09-29T07:08:40Z)
- Towards Emotion Recognition in Hindi-English Code-Mixed Data: A Transformer Based Approach [0.0]
We present a Hinglish dataset labelled for emotion detection.
We highlight a deep learning-based approach for detecting emotions in Hindi-English code-mixed tweets.
arXiv Detail & Related papers (2021-02-19T14:07:20Z)
- Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.