Automatically Classifying Emotions based on Text: A Comparative
Exploration of Different Datasets
- URL: http://arxiv.org/abs/2302.14727v1
- Date: Tue, 28 Feb 2023 16:34:55 GMT
- Title: Automatically Classifying Emotions based on Text: A Comparative
Exploration of Different Datasets
- Authors: Anna Koufakou, Jairo Garciga, Adam Paul, Joseph Morelli and
Christopher Frank
- Abstract summary: We focus on three datasets that were recently presented in the related literature.
We explore the performance of traditional as well as state-of-the-art deep learning models in the presence of different characteristics in the data.
Our experimental work shows that state-of-the-art models such as RoBERTa perform the best for all cases.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Emotion Classification based on text is a task with many applications which
has received growing interest in recent years. This paper presents a
preliminary study with the goal of helping researchers and practitioners gain
insight into relatively new datasets as well as emotion classification in
general. We focus on three datasets that were recently presented in the related
literature, and we explore the performance of traditional as well as
state-of-the-art deep learning models in the presence of different
characteristics in the data. We also explore the use of data augmentation in
order to improve performance. Our experimental work shows that state-of-the-art
models such as RoBERTa perform the best for all cases. We also provide
observations and discussion that highlight the complexity of emotion
classification in these datasets and test out the applicability of the models
to actual social media posts we collected and labeled.
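
For readers who want a concrete starting point for the kind of transformer baseline the abstract highlights, the sketch below fine-tunes RoBERTa for multi-class emotion classification with the Hugging Face Trainer API. It is a minimal, generic illustration rather than the authors' code: the CSV file names, emotion label set, and hyperparameters are placeholders and would need to be swapped for one of the datasets studied in the paper.

```python
# Minimal sketch: fine-tuning RoBERTa for multi-class emotion classification.
# File names, label set, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

EMOTIONS = ["anger", "fear", "joy", "sadness", "surprise", "neutral"]  # hypothetical label set

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(EMOTIONS))

# Hypothetical CSV files with "text" and "label" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

def preprocess(batch):
    enc = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)
    enc["label"] = [EMOTIONS.index(lab) for lab in batch["label"]]  # string label -> integer id
    return enc

dataset = dataset.map(preprocess, batched=True)

args = TrainingArguments(
    output_dir="roberta-emotion",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"], eval_dataset=dataset["test"])
trainer.train()
print(trainer.evaluate())
```

For the traditional baselines the abstract compares against, the same texts could instead be fed to a TF-IDF plus linear-classifier pipeline from scikit-learn to reproduce that side of the comparison.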
Related papers
- Are Large Language Models Good Classifiers? A Study on Edit Intent Classification in Scientific Document Revisions [62.12545440385489]
Large language models (LLMs) have brought substantial advancements in text generation, but their potential for enhancing classification tasks remains underexplored.
We propose a framework for thoroughly investigating fine-tuning LLMs for classification, including both generation- and encoding-based approaches.
We instantiate this framework in edit intent classification (EIC), a challenging and underexplored classification task.
arXiv Detail & Related papers (2024-10-02T20:48:28Z)
- Semantic-Based Active Perception for Humanoid Visual Tasks with Foveal Sensors [49.99728312519117]
The aim of this work is to establish how accurately a recent semantic-based active perception model is able to complete visual tasks that are regularly performed by humans.
This model exploits the ability of current object detectors to localize and classify a large number of object classes and to update a semantic description of a scene across multiple fixations.
In the task of scene exploration, the semantic-based method demonstrates superior performance compared to the traditional saliency-based model.
arXiv Detail & Related papers (2024-04-16T18:15:57Z)
- Data Augmentation for Emotion Detection in Small Imbalanced Text Data [0.0]
One of the challenges is the shortage of available datasets that have been annotated with emotions.
We studied the impact of data augmentation techniques specifically when applied to small imbalanced datasets (a generic augmentation sketch appears after this list).
Our experimental results show that using the augmented data when training the classifier model leads to significant improvements.
arXiv Detail & Related papers (2023-10-25T21:29:36Z)
- Self-supervised Activity Representation Learning with Incremental Data: An Empirical Study [7.782045150068569]
This research examines the impact of using a self-supervised representation learning model for time series classification tasks.
We analyzed the effect of varying the size, distribution, and source of the unlabeled data on the final classification performance across four public datasets.
arXiv Detail & Related papers (2023-05-01T01:39:55Z)
- Modeling Entities as Semantic Points for Visual Information Extraction in the Wild [55.91783742370978]
We propose an alternative approach to precisely and robustly extract key information from document images.
We explicitly model entities as semantic points, i.e., center points of entities are enriched with semantic information describing the attributes and relationships of different entities.
The proposed method can achieve significantly enhanced performance on entity labeling and linking, compared with previous state-of-the-art models.
arXiv Detail & Related papers (2023-03-23T08:21:16Z)
- Improving the Generalizability of Text-Based Emotion Detection by Leveraging Transformers with Psycholinguistic Features [27.799032561722893]
We propose approaches for text-based emotion detection that leverage transformer models (BERT and RoBERTa) in combination with Bidirectional Long Short-Term Memory (BiLSTM) networks trained on a comprehensive set of psycholinguistic features.
We find that the proposed hybrid models improve the ability to generalize to out-of-distribution data compared to a standard transformer-based approach (a generic sketch of such a hybrid model appears after this list).
arXiv Detail & Related papers (2022-12-19T13:58:48Z)
- Learning Action-Effect Dynamics from Pairs of Scene-graphs [50.72283841720014]
We propose a novel method that leverages scene-graph representation of images to reason about the effects of actions described in natural language.
Our proposed approach is effective in terms of performance, data efficiency, and generalization capability compared to existing models.
arXiv Detail & Related papers (2022-12-07T03:36:37Z)
- A Comparative Study of Data Augmentation Techniques for Deep Learning Based Emotion Recognition [11.928873764689458]
We conduct a comprehensive evaluation of popular deep learning approaches for emotion recognition.
We show that long-range dependencies in the speech signal are critical for emotion recognition.
Speed/rate augmentation offers the most robust performance gain across models.
arXiv Detail & Related papers (2022-11-09T17:27:03Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- Building for Tomorrow: Assessing the Temporal Persistence of Text Classifiers [18.367109894193486]
Performance of text classification models can drop over time when new data to be classified is more distant in time from the data used for training.
This raises important research questions on the design of text classification models intended to persist over time.
We perform longitudinal classification experiments on three datasets spanning between 6 and 19 years.
arXiv Detail & Related papers (2022-05-11T12:21:14Z)
- A Survey on Text Classification: From Shallow to Deep Learning [83.47804123133719]
The last decade has seen a surge of research in this area due to the unprecedented success of deep learning.
This paper fills the gap by reviewing the state-of-the-art approaches from 1961 to 2021.
We create a taxonomy for text classification according to the text involved and the models used for feature extraction and classification.
arXiv Detail & Related papers (2020-08-02T00:09:03Z)
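
Several of the entries above, as well as the main abstract, rely on data augmentation to compensate for small, imbalanced emotion datasets. The sketch below shows one simple, generic family of techniques (easy-data-augmentation-style random swap and deletion combined with minority-class oversampling); it is not the augmentation pipeline used in any of the listed papers, and the example sentences and labels are made up.

```python
# Minimal sketch of text augmentation for a small, imbalanced emotion dataset,
# in the spirit of "easy data augmentation" (random swap / random deletion).
import random
from collections import Counter

def random_swap(words, n=1):
    """Swap two randomly chosen words, n times."""
    words = words[:]
    for _ in range(n):
        if len(words) < 2:
            break
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p=0.1):
    """Drop each word with probability p, keeping at least one word."""
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]

def augment(text):
    words = random_deletion(random_swap(text.split(), n=1), p=0.1)
    return " ".join(words)

def oversample_with_augmentation(texts, labels):
    """Add augmented copies of minority-class examples until classes are balanced."""
    counts = Counter(labels)
    target = max(counts.values())
    out_texts, out_labels = list(texts), list(labels)
    for label, count in counts.items():
        examples = [t for t, l in zip(texts, labels) if l == label]
        for _ in range(target - count):
            out_texts.append(augment(random.choice(examples)))
            out_labels.append(label)
    return out_texts, out_labels

# Tiny illustrative example (sentences and labels are made up).
texts = ["i am so happy today", "this is terrifying", "what a wonderful surprise"]
labels = ["joy", "fear", "joy"]
print(oversample_with_augmentation(texts, labels))
```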
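
The entry on leveraging transformers with psycholinguistic features describes a hybrid of a transformer encoder and a BiLSTM over feature vectors. The sketch below is a generic PyTorch rendering of that idea, not the authors' architecture: the feature dimensionality, pooling choices, and label count are assumptions, and the psycholinguistic features are replaced by random tensors for a smoke test.

```python
# Generic sketch of a hybrid classifier: RoBERTa sentence encoding concatenated
# with a BiLSTM summary of per-token psycholinguistic features (placeholders).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class HybridEmotionClassifier(nn.Module):
    def __init__(self, num_labels=6, psych_dim=10, lstm_hidden=64):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("roberta-base")
        self.bilstm = nn.LSTM(psych_dim, lstm_hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(self.encoder.config.hidden_size + 2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask, psych_feats):
        # Sentence representation from RoBERTa: hidden state of the first (<s>) token.
        text_repr = self.encoder(input_ids=input_ids,
                                 attention_mask=attention_mask).last_hidden_state[:, 0]
        # Summary of per-token psycholinguistic features: final BiLSTM hidden states.
        _, (h_n, _) = self.bilstm(psych_feats)
        feat_repr = torch.cat([h_n[0], h_n[1]], dim=-1)
        return self.head(torch.cat([text_repr, feat_repr], dim=-1))

# Smoke test with random feature tensors standing in for real psycholinguistic features.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
batch = tokenizer(["i am so happy", "this is awful"], padding=True, return_tensors="pt")
model = HybridEmotionClassifier()
logits = model(batch["input_ids"], batch["attention_mask"],
               psych_feats=torch.randn(2, batch["input_ids"].shape[1], 10))
print(logits.shape)  # (batch_size, num_labels)
```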
This list is automatically generated from the titles and abstracts of the papers on this site.