Modeling the Severity of Complaints in Social Media
- URL: http://arxiv.org/abs/2103.12428v1
- Date: Tue, 23 Mar 2021 10:13:11 GMT
- Title: Modeling the Severity of Complaints in Social Media
- Authors: Mali Jin and Nikolaos Aletras
- Abstract summary: Linguistic theory of pragmatics categorizes complaints into various severity levels based on the face-threat that the complainer is willing to undertake.
This is particularly useful for understanding the intent of complainers and how humans develop suitable apology strategies.
We study the severity level of complaints for the first time in computational linguistics.
- Score: 9.909170013118775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The speech act of complaining is used by humans to communicate a negative
mismatch between reality and expectations as a reaction to an unfavorable
situation. Linguistic theory of pragmatics categorizes complaints into various
severity levels based on the face-threat that the complainer is willing to
undertake. This is particularly useful for understanding the intent of
complainers and how humans develop suitable apology strategies. In this paper,
we study the severity level of complaints for the first time in computational
linguistics. To facilitate this, we enrich a publicly available data set of
complaints with four severity categories and train different transformer-based
networks combined with linguistic information achieving 55.7 macro F1. We also
jointly model binary complaint classification and complaint severity in a
multi-task setting achieving new state-of-the-art results on binary complaint
detection reaching up to 88.2 macro F1. Finally, we present a qualitative
analysis of the behavior of our models in predicting complaint severity levels.
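The results above are reported as macro F1 (55.7 for four-way severity, 88.2 for binary complaint detection). As a reminder of how that metric averages per-class performance, here is a minimal pure-Python sketch; the gold/predicted labels below are illustrative placeholders, not outputs of the paper's models:

```python
def macro_f1(y_true, y_pred):
    """Macro F1: compute F1 for each class, then average them unweighted,
    so rare severity classes count as much as frequent ones."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1s.append(f1)
    return sum(f1s) / len(f1s)

# Hypothetical four-way severity labels (0-3), for illustration only.
gold = [0, 1, 2, 3, 1, 2, 0, 3]
pred = [0, 1, 2, 1, 1, 2, 0, 3]
print(round(macro_f1(gold, pred) * 100, 1))  # → 86.7
```

Because the per-class scores are averaged without frequency weighting, macro F1 penalizes a model that ignores minority severity levels, which is why it is the metric of choice for imbalanced label sets like this one.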
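The multi-task setting described above (jointly modeling binary complaint detection and four-way severity) amounts to summing two classification losses computed from one shared encoder representation. A schematic, dependency-free sketch of that joint objective follows; the 3-dimensional "encoder output", head weights, and loss weighting are hypothetical placeholders, not the paper's actual architecture:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, gold):
    """Negative log-probability of the gold class."""
    return -math.log(softmax(logits)[gold])

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def joint_loss(shared_repr, heads, golds, weight_severity=1.0):
    """Multi-task loss: a binary complaint head and a 4-way severity head
    both read the same shared representation (e.g. a transformer's
    sentence vector), and their cross-entropies are summed."""
    logits_binary = [dot(w, shared_repr) for w in heads["binary"]]      # 2 classes
    logits_severity = [dot(w, shared_repr) for w in heads["severity"]]  # 4 classes
    return (cross_entropy(logits_binary, golds["binary"])
            + weight_severity * cross_entropy(logits_severity, golds["severity"]))

# Toy 3-dim encoder output and arbitrary head weights (illustrative only).
repr_vec = [0.2, -0.1, 0.5]
heads = {
    "binary":   [[0.1, 0.0, 0.3], [-0.2, 0.4, 0.1]],
    "severity": [[0.0, 0.1, 0.2], [0.3, -0.1, 0.0],
                 [0.1, 0.2, -0.3], [-0.1, 0.0, 0.4]],
}
golds = {"binary": 1, "severity": 2}
loss = joint_loss(repr_vec, heads, golds)
```

Sharing the encoder lets the easier binary task act as a regularizer for the harder severity task, which is the usual motivation for this kind of joint training.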
Related papers
- Intent-conditioned and Non-toxic Counterspeech Generation using Multi-Task Instruction Tuning with RLAIF [14.2594830589926]
Counterspeech, defined as a response to online hate speech, is increasingly used as a non-censorial solution.
Our study introduces CoARL, a novel framework enhancing counterspeech generation by modeling the pragmatic implications underlying social biases in hateful statements.
CoARL's first two phases involve sequential multi-instruction tuning, teaching the model to understand intents, reactions, and harms of offensive statements, and then learning task-specific low-rank adapter weights for generating intent-conditioned counterspeech.
arXiv Detail & Related papers (2024-03-15T08:03:49Z)
- SOUL: Towards Sentiment and Opinion Understanding of Language [96.74878032417054]
We propose a new task called Sentiment and Opinion Understanding of Language (SOUL).
SOUL aims to evaluate sentiment understanding through two subtasks: Review Comprehension (RC) and Justification Generation (JG).
arXiv Detail & Related papers (2023-10-27T06:48:48Z)
- From Chaos to Clarity: Claim Normalization to Empower Fact-Checking [57.024192702939736]
Claim Normalization (aka ClaimNorm) aims to decompose complex and noisy social media posts into more straightforward and understandable forms.
We propose CACN, a pioneering approach that leverages chain-of-thought and claim check-worthiness estimation.
Our experiments demonstrate that CACN outperforms several baselines across various evaluation measures.
arXiv Detail & Related papers (2023-10-22T16:07:06Z)
- Measuring the Effect of Influential Messages on Varying Personas [67.1149173905004]
We present a new task, Response Forecasting on Personas for News Media, to estimate the response a persona might have upon seeing a news message.
The proposed task not only introduces personalization in the modeling but also predicts the sentiment polarity and intensity of each response.
This enables more accurate and comprehensive inference on the mental state of the persona.
arXiv Detail & Related papers (2023-05-25T21:01:00Z)
- Visual Perturbation-aware Collaborative Learning for Overcoming the Language Prior Problem [60.0878532426877]
We propose a novel collaborative learning scheme from the viewpoint of visual perturbation calibration.
Specifically, we devise a visual controller to construct two sorts of curated images with different perturbation extents.
The experimental results on two diagnostic VQA-CP benchmark datasets evidently demonstrate its effectiveness.
arXiv Detail & Related papers (2022-07-24T23:50:52Z)
- Improved two-stage hate speech classification for Twitter based on Deep Neural Networks [0.0]
Hate speech is a form of online harassment that involves the use of abusive language.
The model we propose in this work is an extension of an existing approach based on LSTM neural network architectures.
Our study includes a performance comparison of several proposed alternative methods for the second stage evaluated on a public corpus of 16k tweets.
arXiv Detail & Related papers (2022-06-08T20:57:41Z)
- Analyzing the Intensity of Complaints on Social Media [55.140613801802886]
We present the first study in computational linguistics of measuring the intensity of complaints from text.
We create the first Chinese dataset containing 3,103 posts about complaints from Weibo, a popular Chinese social media platform.
We show that complaint intensity can be accurately estimated by computational models, with the best achieving a mean square error of 0.11.
arXiv Detail & Related papers (2022-04-20T10:15:44Z)
- Complaint Identification in Social Media with Transformer Networks [34.35466601628141]
Complaining is a speech act extensively used by humans to communicate a negative inconsistency between reality and expectations.
Previous work on automatically identifying complaints in social media has focused on using feature-based and task-specific neural network models.
We adapt state-of-the-art pre-trained neural language models and their combinations with other linguistic information from topics or sentiment for complaint prediction.
arXiv Detail & Related papers (2020-10-21T11:44:04Z)
- On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z)
- Dialogue Response Ranking Training with Large-Scale Human Feedback Data [52.12342165926226]
We leverage social media feedback data to build a large-scale training dataset for feedback prediction.
We trained DialogRPT, a set of GPT-2-based models, on 133M pairs of human feedback data.
Our ranker outperforms the conventional dialog perplexity baseline by a large margin in predicting Reddit feedback.
arXiv Detail & Related papers (2020-09-15T10:50:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.