Leveraging the power of transformers for guilt detection in text
- URL: http://arxiv.org/abs/2401.07414v1
- Date: Mon, 15 Jan 2024 01:40:39 GMT
- Title: Leveraging the power of transformers for guilt detection in text
- Authors: Abdul Gafar Manuel Meque, Jason Angel, Grigori Sidorov, Alexander
Gelbukh
- Abstract summary: This research explores the applicability of three transformer-based language models for detecting guilt in text.
Our proposed model outperformed BERT and RoBERTa models by two and one points, respectively.
- Score: 50.65526700061155
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, language models and deep learning techniques have
revolutionized natural language processing tasks, including emotion detection.
However, the specific emotion of guilt has received limited attention in this
field. In this research, we explore the applicability of three
transformer-based language models for detecting guilt in text and compare their
performance for general emotion detection and guilt detection. Our proposed
model outperformed BERT and RoBERTa models by two and one points, respectively.
Additionally, we analyze the challenges in developing accurate guilt-detection
models and evaluate our model's effectiveness in detecting related emotions
like "shame" through qualitative analysis of results.
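The abstract frames guilt detection as binary text classification. The paper's transformer models are not reproduced here; the sketch below is a deliberately tiny stand-in, a pure-Python bag-of-words perceptron trained on a handful of invented examples, just to make the task framing concrete. All data, labels, and names in it are hypothetical, not from the paper.

```python
# Toy baseline for binary guilt detection (label 1 = guilt, 0 = no guilt).
# This is NOT the authors' transformer model: it is a minimal bag-of-words
# perceptron over an invented six-sentence dataset, for illustration only.
from collections import Counter

def featurize(text):
    # Bag-of-words token counts; Counter returns 0 for unseen tokens.
    return Counter(text.lower().split())

def train_perceptron(examples, epochs=30):
    # Classic perceptron update: adjust weights only on misclassification.
    weights, bias = Counter(), 0.0
    for _ in range(epochs):
        for text, label in examples:
            feats = featurize(text)
            score = bias + sum(weights[t] * c for t, c in feats.items())
            pred = 1 if score > 0 else 0
            if pred != label:
                delta = label - pred  # +1 or -1
                for t, c in feats.items():
                    weights[t] += delta * c
                bias += delta
    return weights, bias

def predict(model, text):
    weights, bias = model
    feats = featurize(text)
    score = bias + sum(weights[t] * c for t, c in feats.items())
    return 1 if score > 0 else 0

# Hypothetical training examples, standing in for a real guilt corpus.
train = [
    ("i feel terrible about what i did", 1),
    ("i should have helped her and i regret it", 1),
    ("it was my fault and i am sorry", 1),
    ("the weather is lovely today", 0),
    ("we watched a movie and had fun", 0),
    ("the train arrives at noon", 0),
]

model = train_perceptron(train)
print(predict(model, "i am sorry and it was my fault"))  # 1
print(predict(model, "we had fun and watched a movie"))  # 0
```

A real system would replace this featurizer and linear model with a fine-tuned transformer encoder, but the input/output contract (text in, binary guilt label out) is the same.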
Related papers
- MEMO-Bench: A Multiple Benchmark for Text-to-Image and Multimodal Large Language Models on Human Emotion Analysis [53.012111671763776]
This study introduces MEMO-Bench, a comprehensive benchmark consisting of 7,145 portraits, each depicting one of six different emotions.
Results demonstrate that existing T2I models are more effective at generating positive emotions than negative ones.
Although MLLMs show a certain degree of effectiveness in distinguishing and recognizing human emotions, they fall short of human-level accuracy.
arXiv Detail & Related papers (2024-11-18T02:09:48Z)
- Emotion Detection in Reddit: Comparative Study of Machine Learning and Deep Learning Techniques [0.0]
This study concentrates on text-based emotion detection by leveraging the GoEmotions dataset.
We employed a range of models for this task, including six machine learning models, three ensemble models, and a Long Short-Term Memory (LSTM) model.
Results indicate that the Stacking classifier outperforms other models in accuracy and performance.
arXiv Detail & Related papers (2024-11-15T16:28:25Z)
- ASEM: Enhancing Empathy in Chatbot through Attention-based Sentiment and Emotion Modeling [0.0]
We present a novel solution that employs a mixture of experts (multiple encoders) to offer distinct perspectives on the emotional state of the user's utterance.
We propose an end-to-end model architecture called ASEM that performs emotion analysis on top of sentiment analysis for open-domain chatbots.
arXiv Detail & Related papers (2024-02-25T20:36:51Z)
- GuReT: Distinguishing Guilt and Regret related Text [44.740281698788166]
This paper introduces a dataset tailored to dissect the relationship between guilt and regret and their unique textual markers.
Our approach treats guilt and regret recognition as a binary classification task and employs three machine learning and six transformer-based deep learning techniques to benchmark the newly created dataset.
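Benchmarking several classifiers on one labeled dataset, as GuReT does, reduces to scoring each model on the same held-out split. Below is a minimal harness sketch in that spirit; the dataset and the two baseline "models" are toy stand-ins invented for illustration, not the paper's nine techniques.

```python
# Minimal model-comparison harness: score multiple classifiers on one
# labeled test set. Data and baselines are illustrative stand-ins only.

def majority_baseline(train_labels):
    # Always predicts the most frequent training label.
    majority = max(set(train_labels), key=train_labels.count)
    return lambda text: majority

def keyword_rule(_train_labels):
    # Crude lexical baseline: flag guilt/regret if a cue word occurs.
    cues = {"guilt", "guilty", "regret", "sorry", "fault"}
    return lambda text: int(any(w in cues for w in text.lower().split()))

def accuracy(model, data):
    return sum(model(t) == y for t, y in data) / len(data)

# Hypothetical test split (label 1 = guilt/regret, 0 = neither).
test_data = [
    ("i regret missing the deadline", 1),
    ("i feel guilty about the argument", 1),
    ("lunch was great today", 0),
    ("the report is due friday", 0),
]
train_labels = [1, 1, 0, 0, 0]  # toy training label distribution

results = {}
for name, factory in [("majority", majority_baseline),
                      ("keywords", keyword_rule)]:
    model = factory(train_labels)
    results[name] = accuracy(model, test_data)

print(results)
```

Swapping the factories for real machine learning or transformer models, and the accuracy function for F1, gives the usual benchmark table.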
arXiv Detail & Related papers (2024-01-29T20:20:44Z)
- Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with features deemed salient in their prediction of emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not considered salient features by emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection.
arXiv Detail & Related papers (2023-11-16T06:20:13Z)
- DEMASQ: Unmasking the ChatGPT Wordsmith [63.8746084667206]
We propose an effective ChatGPT detector named DEMASQ, which accurately identifies ChatGPT-generated content.
Our method addresses two critical factors: (i) the distinct biases in text composition observed in human- and machine-generated content and (ii) the alterations made by humans to evade previous detection methods.
arXiv Detail & Related papers (2023-11-08T21:13:05Z)
- Dynamic Causal Disentanglement Model for Dialogue Emotion Detection [77.96255121683011]
We propose a Dynamic Causal Disentanglement Model based on hidden variable separation.
This model effectively decomposes the content of dialogues and investigates the temporal accumulation of emotions.
Specifically, we propose a dynamic temporal disentanglement model to infer the propagation of utterances and hidden variables.
arXiv Detail & Related papers (2023-09-13T12:58:09Z)
- Guilt Detection in Text: A Step Towards Understanding Complex Emotions [58.720142291102135]
We introduce a novel Natural Language Processing task called Guilt detection.
We identify guilt as a complex and vital emotion that has not been previously studied in NLP.
To address the lack of publicly available corpora for guilt detection, we created VIC, a dataset containing 4,622 texts.
arXiv Detail & Related papers (2023-03-06T21:36:19Z)
- The Sensitivity of Word Embeddings-based Author Detection Models to Semantic-preserving Adversarial Perturbations [3.7552532139404797]
Authorship analysis is an important subject in the field of natural language processing.
This paper explores the limitations of established approaches and their sensitivity to semantic-preserving adversarial manipulations of inputs.
arXiv Detail & Related papers (2021-02-23T19:55:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.