Leveraging the power of transformers for guilt detection in text
- URL: http://arxiv.org/abs/2401.07414v1
- Date: Mon, 15 Jan 2024 01:40:39 GMT
- Title: Leveraging the power of transformers for guilt detection in text
- Authors: Abdul Gafar Manuel Meque, Jason Angel, Grigori Sidorov, Alexander
Gelbukh
- Abstract summary: This research explores the applicability of three transformer-based language models for detecting guilt in text.
Our proposed model outperformed the BERT and RoBERTa models by two and one points, respectively.
- Score: 50.65526700061155
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, language models and deep learning techniques have
revolutionized natural language processing tasks, including emotion detection.
However, the specific emotion of guilt has received limited attention in this
field. In this research, we explore the applicability of three
transformer-based language models for detecting guilt in text and compare their
performance for general emotion detection and guilt detection. Our proposed
model outperformed the BERT and RoBERTa models by two and one points, respectively.
Additionally, we analyze the challenges in developing accurate guilt-detection
models and evaluate our model's effectiveness in detecting related emotions
like "shame" through qualitative analysis of results.
Related papers
- ASEM: Enhancing Empathy in Chatbot through Attention-based Sentiment and
Emotion Modeling [0.0]
We present a novel solution by employing a mixture of experts (multiple encoders) to offer distinct perspectives on the emotional state of the user's utterance.
We propose an end-to-end model architecture called ASEM that performs emotion analysis on top of sentiment analysis for open-domain chatbots.
arXiv Detail & Related papers (2024-02-25T20:36:51Z) - GuReT: Distinguishing Guilt and Regret related Text [44.740281698788166]
This paper introduces a dataset tailored to dissect the relationship between guilt and regret and their unique textual markers.
Our approach treats guilt and regret recognition as a binary classification task and employs three machine learning and six transformer-based deep learning techniques to benchmark the newly created dataset.
arXiv Detail & Related papers (2024-01-29T20:20:44Z) - Assaying on the Robustness of Zero-Shot Machine-Generated Text Detectors [57.7003399760813]
- Assaying on the Robustness of Zero-Shot Machine-Generated Text Detectors [57.7003399760813]
We explore advanced Large Language Models (LLMs) and their specialized variants, contributing to this field in several ways.
We uncover a significant correlation between topics and detection performance.
These investigations shed light on the adaptability and robustness of these detection methods across diverse topics.
arXiv Detail & Related papers (2023-12-20T10:53:53Z) - Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with features deemed salient in their prediction of emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not considered salient features for emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection.
arXiv Detail & Related papers (2023-11-16T06:20:13Z) - DEMASQ: Unmasking the ChatGPT Wordsmith [63.8746084667206]
We propose an effective ChatGPT detector named DEMASQ, which accurately identifies ChatGPT-generated content.
Our method addresses two critical factors: (i) the distinct biases in text composition observed in human- and machine-generated content and (ii) the alterations made by humans to evade previous detection methods.
arXiv Detail & Related papers (2023-11-08T21:13:05Z) - Dynamic Causal Disentanglement Model for Dialogue Emotion Detection [77.96255121683011]
We propose a Dynamic Causal Disentanglement Model based on hidden variable separation.
This model effectively decomposes the content of dialogues and investigates the temporal accumulation of emotions.
Specifically, we propose a dynamic temporal disentanglement model to infer the propagation of utterances and hidden variables.
arXiv Detail & Related papers (2023-09-13T12:58:09Z) - Guilt Detection in Text: A Step Towards Understanding Complex Emotions [58.720142291102135]
We introduce a novel Natural Language Processing task called Guilt detection.
We identify guilt as a complex and vital emotion that has not been previously studied in NLP.
To address the lack of publicly available corpora for guilt detection, we created VIC, a dataset containing 4622 texts.
arXiv Detail & Related papers (2023-03-06T21:36:19Z) - The Sensitivity of Word Embeddings-based Author Detection Models to
Semantic-preserving Adversarial Perturbations [3.7552532139404797]
Authorship analysis is an important subject in the field of natural language processing.
This paper explores the limitations and sensitivity of established approaches to adversarial manipulations of inputs.
arXiv Detail & Related papers (2021-02-23T19:55:45Z) - Adapting a Language Model for Controlled Affective Text Generation [2.9267797650223653]
We adapt state-of-the-art language generation models to generate affective (emotional) text.
We propose to incorporate emotion as a prior for probabilistic state-of-the-art text generation models such as GPT-2.
The model gives a user the flexibility to control the category and intensity of emotion as well as the topic of the generated text.
arXiv Detail & Related papers (2020-11-08T15:24:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all listed details) and is not responsible for any consequences of its use.