Does Commonsense help in detecting Sarcasm?
- URL: http://arxiv.org/abs/2109.08588v1
- Date: Fri, 17 Sep 2021 15:07:38 GMT
- Title: Does Commonsense help in detecting Sarcasm?
- Authors: Somnath Basu Roy Chowdhury and Snigdha Chaturvedi
- Abstract summary: Sarcasm detection is important for several NLP tasks such as sentiment identification in product reviews, user feedback, and online forums.
In this paper, we investigate whether incorporating commonsense knowledge helps in sarcasm detection.
Our experiments with three sarcasm detection datasets indicate that the approach does not outperform the baseline model.
- Score: 20.78285964841612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sarcasm detection is important for several NLP tasks such as sentiment
identification in product reviews, user feedback, and online forums. It is a
challenging task requiring a deep understanding of language, context, and world
knowledge. In this paper, we investigate whether incorporating commonsense
knowledge helps in sarcasm detection. For this, we incorporate commonsense
knowledge into the prediction process using a graph convolution network with
pre-trained language model embeddings as input. Our experiments with three
sarcasm detection datasets indicate that the approach does not outperform the
baseline model. We perform an exhaustive set of experiments to analyze where
commonsense support adds value and where it hurts classification. Our
implementation is publicly available at:
https://github.com/brcsomnath/commonsense-sarcasm.
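The abstract describes feeding pre-trained language model embeddings through a graph convolution network. The paper's actual graph construction (how commonsense concepts are linked to the input) and architecture live in the linked repository; purely as an illustrative sketch of the general idea, a single GCN layer over a toy star graph connecting a sentence node to commonsense-concept nodes might look like this (all names and the graph layout here are hypothetical, not taken from the paper):

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution layer: ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy setup: node 0 is the sentence, nodes 1-3 are commonsense concepts.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))          # stand-in for PLM embeddings
A = np.array([[0, 1, 1, 1],              # star graph: sentence linked
              [1, 0, 0, 0],              # to each concept node
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
W = rng.standard_normal((8, 8))          # learnable layer weights

H_out = gcn_layer(H, A, W)
print(H_out.shape)                       # → (4, 8)
```

The updated sentence-node representation `H_out[0]` would then feed a classifier head; again, this is only a sketch of the technique named in the abstract, not the authors' implementation.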
Related papers
- Interpretable Bangla Sarcasm Detection using BERT and Explainable AI [0.3914676152740142]
A BERT-based system achieves 99.60% accuracy, while traditional machine learning algorithms reach only 89.93%.
This dataset consists of fresh records of sarcastic and non-sarcastic comments, the majority of which are acquired from Facebook and YouTube comment sections.
arXiv Detail & Related papers (2023-03-22T17:35:35Z)
- Verifying the Robustness of Automatic Credibility Assessment [79.08422736721764]
Text classification methods have been widely investigated as a way to detect content of low credibility.
In some cases, insignificant changes in the input text can mislead the models.
We introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
arXiv Detail & Related papers (2023-03-14T16:11:47Z)
- Sarcasm Detection Framework Using Emotion and Sentiment Features [62.997667081978825]
We propose a model which incorporates emotion and sentiment features to capture the incongruity intrinsic to sarcasm.
Our approach achieved state-of-the-art results on four datasets from social networking platforms and online media.
arXiv Detail & Related papers (2022-11-23T15:14:44Z)
- Towards Multi-Modal Sarcasm Detection via Hierarchical Congruity Modeling with Knowledge Enhancement [31.97249246223621]
Sarcasm is a linguistic phenomenon indicating a discrepancy between literal meanings and implied intentions.
Most existing techniques model only the atomic-level inconsistencies between the text input and its accompanying image.
We propose a novel hierarchical framework for sarcasm detection that explores both atomic-level congruity, based on a multi-head cross-attention mechanism, and composition-level congruity, based on graph neural networks.
arXiv Detail & Related papers (2022-10-07T12:44:33Z)
- Computational Sarcasm Analysis on Social Media: A Systematic Review [0.23488056916440855]
Sarcasm can be defined as saying or writing the opposite of what one truly wants to express, usually to insult, irritate, or amuse someone.
Because of the obscure nature of sarcasm in textual data, detecting it is difficult and of great interest to the sentiment analysis research community.
arXiv Detail & Related papers (2022-09-13T17:20:19Z)
- Blow the Dog Whistle: A Chinese Dataset for Cant Understanding with Common Sense and World Knowledge [49.288196234823005]
Cant is important for understanding advertising, comedies and dog-whistle politics.
We propose a large and diverse Chinese dataset for creating and understanding cant.
arXiv Detail & Related papers (2021-04-06T17:55:43Z)
- Skeleton Based Sign Language Recognition Using Whole-body Keypoints [71.97020373520922]
Sign language is used by deaf or speech impaired people to communicate.
Skeleton-based recognition is becoming popular because it can be ensembled with RGB-D based methods to achieve state-of-the-art performance.
Inspired by recent developments in whole-body pose estimation (Jin et al., 2020), we propose recognizing sign language based on whole-body key points and features.
arXiv Detail & Related papers (2021-03-16T03:38:17Z)
- Interpretable Multi-Head Self-Attention model for Sarcasm Detection in social media [0.0]
The inherent ambiguity of sarcastic expressions makes sarcasm detection very difficult.
We develop an interpretable deep learning model using multi-head self-attention and gated recurrent units.
We show the effectiveness of our approach by achieving state-of-the-art results on multiple datasets.
arXiv Detail & Related papers (2021-01-14T21:39:35Z)
- Sarcasm Detection using Context Separators in Online Discourse [3.655021726150369]
Sarcasm is an intricate form of speech, where meaning is conveyed implicitly.
In this work, we use RoBERTa_large to detect sarcasm in two datasets.
We also assert the importance of context in improving the performance of contextual word embedding models.
arXiv Detail & Related papers (2020-06-01T10:52:35Z)
- $R^3$: Reverse, Retrieve, and Rank for Sarcasm Generation with Commonsense Knowledge [51.70688120849654]
We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence.
Our method employs a retrieve-and-edit framework to instantiate two major characteristics of sarcasm.
arXiv Detail & Related papers (2020-04-28T02:30:09Z)
- Information-Theoretic Probing for Linguistic Structure [74.04862204427944]
We propose an information-theoretic operationalization of probing as estimating mutual information.
We evaluate on a set of ten typologically diverse languages often underrepresented in NLP research.
arXiv Detail & Related papers (2020-04-07T01:06:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.