Guilt Detection in Text: A Step Towards Understanding Complex Emotions
- URL: http://arxiv.org/abs/2303.03510v1
- Date: Mon, 6 Mar 2023 21:36:19 GMT
- Title: Guilt Detection in Text: A Step Towards Understanding Complex Emotions
- Authors: Abdul Gafar Manuel Meque, Nisar Hussain, Grigori Sidorov, and
Alexander Gelbukh
- Abstract summary: We introduce a novel Natural Language Processing task called Guilt detection.
We identify guilt as a complex and vital emotion that has not been previously studied in NLP.
To address the lack of publicly available corpora for guilt detection, we created VIC, a dataset containing 4622 texts.
- Score: 58.720142291102135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel Natural Language Processing (NLP) task called Guilt
detection, which focuses on detecting guilt in text. We identify guilt as a
complex and vital emotion that has not been previously studied in NLP, and we
aim to provide a more fine-grained analysis of it. To address the lack of
publicly available corpora for guilt detection, we created VIC, a dataset
containing 4622 texts from three existing emotion detection datasets that we
binarized into guilt and no-guilt classes. We experimented with traditional
machine learning methods using bag-of-words and term frequency-inverse document
frequency features, achieving a 72% F1 score with the highest-performing model.
Our study provides a first step towards understanding guilt in text and opens
the door for future research in this area.
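The abstract describes a standard classical-ML pipeline; the sketch below is a minimal illustration of that setup with scikit-learn, pairing TF-IDF features with a linear classifier. The toy texts, labels, and classifier choice are placeholders, not the actual VIC data or the authors' best-performing configuration.

```python
# Minimal sketch of the baseline described in the abstract: TF-IDF features
# fed to a traditional classifier for binary guilt / no-guilt classification.
# The texts, labels, and hyperparameters below are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

texts = [
    "I should have called her back, it was my fault.",
    "We had a great time at the beach yesterday.",
    "I still feel terrible about breaking his trust.",
    "The weather was perfect for a long walk.",
]
labels = [1, 0, 1, 0]  # 1 = guilt, 0 = no-guilt

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=42, stratify=labels
)

# TF-IDF over word unigrams and bigrams, then a linear classifier.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

print("F1:", f1_score(y_test, model.predict(X_test)))
```

Swapping TfidfVectorizer for CountVectorizer gives the plain bag-of-words variant the abstract also mentions.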
Related papers
- Towards a Generative Approach for Emotion Detection and Reasoning [0.7366405857677227]
We introduce a novel approach to zero-shot emotion detection and emotional reasoning using large language models.
Our paper is the first work on using a generative approach to jointly address the tasks of emotion detection and emotional reasoning for texts.
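The summary does not spell out the prompting setup; the sketch below shows one plausible way to frame zero-shot emotion detection and reasoning as a generation task, with a hypothetical `generate` callable standing in for whichever large language model is used.

```python
# Illustrative sketch only: zero-shot emotion detection and reasoning phrased
# as a generation task. `generate` is a hypothetical stand-in for any LLM API;
# the cited paper's actual prompt and output format may differ.
EMOTIONS = ["joy", "sadness", "anger", "fear", "guilt", "neutral"]

def build_prompt(text: str) -> str:
    return (
        "Classify the emotion expressed in the text below and briefly explain "
        "what causes that emotion.\n"
        f"Allowed labels: {', '.join(EMOTIONS)}\n"
        f"Text: {text}\n"
        "Answer with the label on the first line and the reasoning on the second."
    )

def detect_emotion(text: str, generate) -> tuple[str, str]:
    """`generate` is any callable mapping a prompt string to a model response."""
    response = generate(build_prompt(text))
    lines = [line.strip() for line in response.splitlines() if line.strip()]
    label = lines[0].lower() if lines else "neutral"
    reasoning = lines[1] if len(lines) > 1 else ""
    return (label if label in EMOTIONS else "neutral", reasoning)
```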
arXiv Detail & Related papers (2024-08-09T07:20:15Z) - Detecting Machine-Generated Texts: Not Just "AI vs Humans" and Explainability is Complicated [8.77447722226144]
We introduce a novel ternary text classification scheme, adding an "undecided" category for texts that could be attributed to either source.
This research shifts the paradigm from merely classifying to explaining machine-generated texts, emphasizing the need for detectors to provide clear and understandable explanations to users.
arXiv Detail & Related papers (2024-06-26T11:11:47Z) - Leveraging the power of transformers for guilt detection in text [50.65526700061155]
This research explores the applicability of three transformer-based language models for detecting guilt in text.
Our proposed model outperformed the BERT and RoBERTa models by two and one points, respectively.
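As a rough illustration of the transformer-based setup this line of work builds on, the sketch below fine-tunes a pretrained encoder for binary guilt classification with the Hugging Face transformers library; the checkpoint, hyperparameters, and two-example dataset are illustrative assumptions, not the cited configuration.

```python
# Sketch of fine-tuning a pretrained transformer for binary guilt detection.
# Checkpoint, hyperparameters, and the tiny dataset are illustrative only.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

texts = ["I feel awful about lying to my parents.", "Lunch was delicious today."]
labels = [1, 0]  # 1 = guilt, 0 = no-guilt

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

dataset = Dataset.from_dict({"text": texts, "label": labels}).map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="guilt-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()
```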
arXiv Detail & Related papers (2024-01-15T01:40:39Z) - Assaying on the Robustness of Zero-Shot Machine-Generated Text Detectors [57.7003399760813]
We explore advanced Large Language Models (LLMs) and their specialized variants, contributing to this field in several ways.
We uncover a significant correlation between topics and detection performance.
These investigations shed light on the adaptability and robustness of these detection methods across diverse topics.
arXiv Detail & Related papers (2023-12-20T10:53:53Z) - DEMASQ: Unmasking the ChatGPT Wordsmith [63.8746084667206]
We propose an effective ChatGPT detector named DEMASQ, which accurately identifies ChatGPT-generated content.
Our method addresses two critical factors: (i) the distinct biases in text composition observed in human- and machine-generated content and (ii) the alterations made by humans to evade previous detection methods.
arXiv Detail & Related papers (2023-11-08T21:13:05Z) - Unsupervised Extractive Summarization of Emotion Triggers [56.50078267340738]
We develop new unsupervised learning models that can jointly detect emotions and summarize their triggers.
Our best approach, Emotion-Aware PageRank, incorporates emotion information from external sources and combines it with a language understanding module.
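The summary names Emotion-Aware PageRank without giving its formulation; the sketch below is one plausible reading, ranking sentences on a similarity graph while biasing the random walk toward sentences that contain words from an external emotion lexicon. The lexicon and weighting are illustrative assumptions, not the cited method.

```python
# One possible emotion-biased PageRank summarizer (illustrative only): rank
# sentences on a TF-IDF similarity graph, with the walk's personalization
# vector favouring sentences that contain emotion-lexicon words.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

EMOTION_LEXICON = {"guilty", "sorry", "ashamed", "regret", "angry", "afraid"}

def summarize(sentences, top_k=2):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    graph = nx.from_numpy_array(cosine_similarity(tfidf))

    # Personalization: sentences with more emotion words receive more mass.
    weights = {
        i: 1.0 + sum(word in EMOTION_LEXICON for word in s.lower().split())
        for i, s in enumerate(sentences)
    }
    scores = nx.pagerank(graph, personalization=weights)

    ranked = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [sentences[i] for i in sorted(ranked)]

print(summarize([
    "I regret shouting at my brother during dinner.",
    "The movie we watched afterwards was fine.",
    "I still feel guilty about ruining the evening.",
]))
```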
arXiv Detail & Related papers (2023-06-02T11:07:13Z) - Can AI-Generated Text be Reliably Detected? [54.670136179857344]
Unregulated use of LLMs can potentially lead to malicious consequences such as plagiarism, fake news generation, and spamming.
Recent works attempt to tackle this problem either using certain model signatures present in the generated text outputs or by applying watermarking techniques.
In this paper, we show that these detectors are not reliable in practical scenarios.
arXiv Detail & Related papers (2023-03-17T17:53:19Z) - A Semantic Approach to Negation Detection and Word Disambiguation with
Natural Language Processing [1.0499611180329804]
This study demonstrates methods for detecting negation in a sentence by evaluating the lexical structure of the text. The proposed method examines the features of related expressions within a text to resolve the contextual usage of the sentence.
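The abstract does not detail the semantic procedure itself; the sketch below is only a naive rule-based baseline for the same task, marking negation cues and a crude scope that runs up to the next punctuation mark or contrastive conjunction.

```python
# Naive rule-based negation sketch (illustrative baseline only, far simpler
# than the semantic approach of the cited paper): find negation cues and take
# everything up to the next punctuation mark or "but" as their scope.
import re

NEGATION_CUES = {"not", "no", "never", "without", "neither", "nor"}
SCOPE_BREAKERS = {",", ".", ";", "!", "?", "but"}

def negated_spans(sentence):
    # Expand contractions like "didn't" so the cue "not" is visible.
    normalized = sentence.lower().replace("n't", " not")
    tokens = re.findall(r"\w+|[^\w\s]", normalized)
    spans, i = [], 0
    while i < len(tokens):
        if tokens[i] in NEGATION_CUES:
            j = i + 1
            while j < len(tokens) and tokens[j] not in SCOPE_BREAKERS:
                j += 1
            spans.append(tokens[i:j])
            i = j
        else:
            i += 1
    return spans

print(negated_spans("I didn't feel guilty, but I never apologized."))
```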
arXiv Detail & Related papers (2023-02-05T03:58:45Z) - Leveraging Sentiment Analysis Knowledge to Solve Emotion Detection Tasks [11.928873764689458]
We present a Transformer-based model with a Fusion of Adapter layers to improve the emotion detection task on large scale dataset.
We obtained state-of-the-art results for emotion recognition on CMU-MOSEI even while using only the textual modality.
arXiv Detail & Related papers (2021-11-05T20:06:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.