An Evaluation of State-of-the-Art Large Language Models for Sarcasm
Detection
- URL: http://arxiv.org/abs/2312.03706v1
- Date: Sat, 7 Oct 2023 14:45:43 GMT
- Title: An Evaluation of State-of-the-Art Large Language Models for Sarcasm
Detection
- Authors: Juliann Zhou
- Abstract summary: Sarcasm is the use of words that mean the opposite of what one is trying to say.
Recent innovations in NLP have provided more possibilities for detecting sarcasm.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Sarcasm, as defined by Merriam-Webster, is the use of words that
mean the opposite of what one really wants to say. In the field of sentiment
analysis in Natural Language Processing, the ability to correctly identify
sarcasm is necessary for understanding people's true opinions. Because the use
of sarcasm is often context-dependent, previous research has applied machine
learning models, such as Support Vector Machines (SVM) and Long Short-Term
Memory (LSTM) networks, to identify sarcasm from contextual information. Recent
innovations in NLP have provided more possibilities for detecting sarcasm. In
BERT: Pre-training of Deep Bidirectional Transformers for Language
Understanding, Jacob Devlin et al. (2018) introduced a new language
representation model and demonstrated higher precision in interpreting
contextualized language. As proposed by Hazarika et al. (2018), CASCADE is a
context-driven model that produces good results for detecting sarcasm. This
study analyzes a Reddit corpus using these two state-of-the-art models and
evaluates their performance against baseline models to find the ideal approach
to sarcasm detection.
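To make the notion of a context-free baseline concrete, the following is a minimal sketch of a bag-of-words perceptron sarcasm classifier, in the spirit of the simple baselines the study compares against. The toy training examples and labels here are illustrative, not drawn from the paper's Reddit corpus, and this is not the paper's actual SVM, LSTM, BERT, or CASCADE implementation.

```python
# Minimal bag-of-words perceptron baseline for sarcasm detection.
# Toy data below is illustrative only, not from the paper's Reddit corpus.
from collections import defaultdict


def tokenize(text):
    return text.lower().split()


def train_perceptron(examples, epochs=10):
    """examples: list of (text, label) pairs with label 1 = sarcastic, 0 = literal."""
    weights = defaultdict(float)
    bias = 0.0
    for _ in range(epochs):
        for text, label in examples:
            score = bias + sum(weights[t] for t in tokenize(text))
            pred = 1 if score > 0 else 0
            if pred != label:
                update = label - pred  # +1 on missed sarcasm, -1 on false alarm
                for t in tokenize(text):
                    weights[t] += update
                bias += update
    return weights, bias


def predict(weights, bias, text):
    return 1 if bias + sum(weights[t] for t in tokenize(text)) > 0 else 0


train = [
    ("oh great another monday", 1),
    ("wow what a fantastic traffic jam", 1),
    ("i enjoyed the concert last night", 0),
    ("the weather is nice today", 0),
]
w, b = train_perceptron(train)
```

A classifier like this only sees which words appear, with no notion of conversational context, which is precisely why context-driven models such as CASCADE and contextualized encoders such as BERT are expected to do better on sarcasm.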
Related papers
- A Survey of Multimodal Sarcasm Detection [32.659528422756416]
Sarcasm is a rhetorical device that is used to convey the opposite of the literal meaning of an utterance.
We present the first comprehensive survey on multimodal sarcasm detection to date.
arXiv Detail & Related papers (2024-10-24T16:17:47Z)
- Sarcasm Detection in a Less-Resourced Language [0.0]
We build a sarcasm detection dataset for a less-resourced language, namely Slovenian.
We leverage two modern techniques: a machine translation specific medium-size transformer model, and a very large generative language model.
The results show that larger models generally outperform smaller ones and that ensembling can slightly improve sarcasm detection performance.
arXiv Detail & Related papers (2024-10-16T16:10:59Z)
- Sentiment-enhanced Graph-based Sarcasm Explanation in Dialogue [67.09698638709065]
We propose a novel sEntiment-enhanceD Graph-based multimodal sarcasm Explanation framework, named EDGE.
In particular, we first propose a lexicon-guided utterance sentiment inference module, where an utterance sentiment refinement strategy is devised.
We then develop a module named Joint Cross Attention-based Sentiment Inference (JCA-SI) by extending the multimodal sentiment analysis model JCA to derive the joint sentiment label for each video-audio clip.
arXiv Detail & Related papers (2024-02-06T03:14:46Z)
- Sarcasm Detection Framework Using Emotion and Sentiment Features [62.997667081978825]
We propose a model which incorporates emotion and sentiment features to capture the incongruity intrinsic to sarcasm.
Our approach achieved state-of-the-art results on four datasets from social networking platforms and online media.
arXiv Detail & Related papers (2022-11-23T15:14:44Z)
- How to Describe Images in a More Funny Way? Towards a Modular Approach
to Cross-Modal Sarcasm Generation [62.89586083449108]
We study a new problem of cross-modal sarcasm generation (CMSG), i.e., generating a sarcastic description for a given image.
CMSG is challenging as models need to satisfy the characteristics of sarcasm, as well as the correlation between different modalities.
We propose an Extraction-Generation-Ranking based Modular method (EGRM) for cross-modal sarcasm generation.
arXiv Detail & Related papers (2022-11-20T14:38:24Z)
- Computational Sarcasm Analysis on Social Media: A Systematic Review [0.23488056916440855]
Sarcasm can be defined as saying or writing the opposite of what one truly wants to express, usually to insult, irritate, or amuse someone.
Because of the obscure nature of sarcasm in textual data, detecting it is difficult and of great interest to the sentiment analysis research community.
arXiv Detail & Related papers (2022-09-13T17:20:19Z)
- Parallel Deep Learning-Driven Sarcasm Detection from Pop Culture Text
and English Humor Literature [0.76146285961466]
We manually extract the sarcastic word distribution features of a benchmark pop culture sarcasm corpus.
We generate input sequences formed of the weighted vectors from such words.
Our proposed model for detecting sarcasm reaches a peak training accuracy of 98.95% when trained on the discussed dataset.
arXiv Detail & Related papers (2021-06-10T14:01:07Z)
- Bi-ISCA: Bidirectional Inter-Sentence Contextual Attention Mechanism for
Detecting Sarcasm in User Generated Noisy Short Text [8.36639545285691]
This paper proposes a new state-of-the-art deep learning architecture that uses a novel Bidirectional Inter-Sentence Contextual Attention mechanism (Bi-ISCA).
Bi-ISCA captures inter-sentence dependencies for detecting sarcasm in the user-generated short text using only the conversational context.
The proposed deep learning model demonstrates the capability to capture explicit, implicit, and contextual incongruous words & phrases responsible for invoking sarcasm.
arXiv Detail & Related papers (2020-11-23T15:24:27Z)
- Multi-timescale Representation Learning in LSTM Language Models [69.98840820213937]
Language models must capture statistical dependencies between words at timescales ranging from very short to very long.
We derived a theory for how the memory gating mechanism in long short-term memory language models can capture power law decay.
Experiments showed that LSTM language models trained on natural English text learn to approximate this theoretical distribution.
arXiv Detail & Related papers (2020-09-27T02:13:38Z)
- Augmenting Data for Sarcasm Detection with Unlabeled Conversation
Context [55.898436183096614]
We present a novel data augmentation technique, CRA (Contextual Response Augmentation), which utilizes conversational context to generate meaningful samples for training.
Specifically, our model, trained with the proposed data augmentation technique, won the sarcasm detection task of FigLang2020, achieving the best performance on both the Reddit and Twitter datasets.
arXiv Detail & Related papers (2020-06-11T09:00:11Z)
- $R^3$: Reverse, Retrieve, and Rank for Sarcasm Generation with
Commonsense Knowledge [51.70688120849654]
We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence.
Our method employs a retrieve-and-edit framework to instantiate two major characteristics of sarcasm.
arXiv Detail & Related papers (2020-04-28T02:30:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.