Analyzing Sentiment Polarity Reduction in News Presentation through
Contextual Perturbation and Large Language Models
- URL: http://arxiv.org/abs/2402.02145v1
- Date: Sat, 3 Feb 2024 13:27:32 GMT
- Authors: Alapan Kuila, Somnath Jena, Sudeshna Sarkar, Partha Pratim Chakrabarti
- Abstract summary: This paper introduces a novel approach to tackle this problem by reducing the polarity of latent sentiments in news content.
We employ transformation constraints to modify sentences while preserving their core semantics.
Our experiments and human evaluations demonstrate that both the perturbation-based and prompt-based approaches achieve reduced sentiment polarity with minimal modifications.
- Score: 1.8512070255576754
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In today's media landscape, where news outlets play a pivotal role in shaping
public opinion, it is imperative to address the issue of sentiment manipulation
within news text. News writers often inject their own biases and emotional
language, which can distort the objectivity of reporting. This paper introduces
a novel approach to tackle this problem by reducing the polarity of latent
sentiments in news content. Drawing inspiration from adversarial attack-based
sentence perturbation techniques and a prompt-based method using ChatGPT, we
employ transformation constraints to modify sentences while preserving their
core semantics. Using three perturbation methods (replacement, insertion, and
deletion) coupled with a context-aware masked language model, we aim to maximize
the desired sentiment score for targeted news aspects through a beam search
algorithm. Our experiments and human evaluations demonstrate the effectiveness
of these two models in achieving reduced sentiment polarity with minimal
modifications while maintaining textual similarity, fluency, and grammatical
correctness. Comparative analysis confirms the competitive performance of the
adversarial attack-based perturbation methods and prompt-based methods,
offering a promising solution to foster more objective news reporting and
combat emotional language bias in the media.
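The perturb-and-search procedure the abstract describes can be sketched in code. The following is an illustrative toy, not the authors' implementation: the sentiment scorer and candidate generator are stubbed with small hand-made tables (`POLARITY`, `NEUTRAL_SWAPS`), where the paper uses a trained sentiment classifier and a context-aware masked language model to propose and score candidates.

```python
# Sketch: beam search over one-edit sentence perturbations (replacement,
# insertion, deletion) that reduces an absolute sentiment-polarity score.
# The lexicon and swap table below are toy stand-ins for a sentiment
# classifier and masked-LM fill-in candidates.

# Toy polarity lexicon (stand-in for a sentiment classifier).
POLARITY = {"disastrous": -0.9, "terrible": -0.8, "bad": -0.5,
            "poor": -0.4, "great": 0.7}

# Toy substitution table (stand-in for masked-LM candidates).
NEUTRAL_SWAPS = {"disastrous": ["controversial", "notable"],
                 "terrible": ["mixed"], "bad": ["uneven"]}

def polarity(tokens):
    """Absolute sentence polarity: mean of per-word scores (toy stand-in)."""
    scores = [POLARITY.get(t.lower(), 0.0) for t in tokens]
    return abs(sum(scores) / max(len(scores), 1))

def perturbations(tokens):
    """Yield all one-edit neighbours of the sentence."""
    out = []
    for i, tok in enumerate(tokens):
        # Replacement: swap in a more neutral alternative.
        for alt in NEUTRAL_SWAPS.get(tok.lower(), []):
            out.append(tokens[:i] + [alt] + tokens[i + 1:])
        # Deletion: drop the token entirely.
        out.append(tokens[:i] + tokens[i + 1:])
        # Insertion: hedge with a softening modifier before the token.
        out.append(tokens[:i] + ["somewhat"] + tokens[i:])
    return out

def reduce_polarity(tokens, beam_width=3, steps=2):
    """Beam search: keep the beam_width lowest-polarity candidates per step."""
    beam = [tokens]
    for _ in range(steps):
        candidates = []
        for cand in beam:
            candidates.append(cand)          # allow keeping a candidate as-is
            candidates.extend(perturbations(cand))
        beam = sorted(candidates, key=polarity)[:beam_width]
    return beam[0]
```

For example, `reduce_polarity("a disastrous policy".split())` neutralizes the sentence by replacing or deleting the polar word; a real system would instead rank masked-LM proposals by a classifier's sentiment score while also checking semantic similarity and fluency, as the paper's evaluation does.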
Related papers
- MSynFD: Multi-hop Syntax aware Fake News Detection [27.046529059563863]
Social media platforms have fueled the rapid dissemination of fake news, posing threats to our real-life society.
Existing methods use multimodal data or contextual information to enhance the detection of fake news.
We propose a novel multi-hop syntax aware fake news detection (MSynFD) method, which incorporates complementary syntax information to deal with subtle twists in fake news.
arXiv Detail & Related papers (2024-02-18T05:40:33Z)
- Lost In Translation: Generating Adversarial Examples Robust to Round-Trip Translation [66.33340583035374]
We present a comprehensive study on the robustness of current text adversarial attacks to round-trip translation.
We demonstrate that 6 state-of-the-art text-based adversarial attacks do not maintain their efficacy after round-trip translation.
We introduce an intervention-based solution to this problem, by integrating Machine Translation into the process of adversarial example generation.
arXiv Detail & Related papers (2023-07-24T04:29:43Z)
- Conflicts, Villains, Resolutions: Towards models of Narrative Media Framing [19.589945994234075]
We revisit a widely used conceptualization of framing from the communication sciences which explicitly captures elements of narratives.
We adapt an effective annotation paradigm that breaks a complex annotation task into a series of simpler binary questions.
We explore automatic multi-label prediction of our frames with supervised and semi-supervised approaches.
arXiv Detail & Related papers (2023-06-03T08:50:13Z)
- Verifying the Robustness of Automatic Credibility Assessment [79.08422736721764]
Text classification methods have been widely investigated as a way to detect content of low credibility.
In some cases, insignificant changes in input text can mislead the models.
We introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
arXiv Detail & Related papers (2023-03-14T16:11:47Z)
- Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes to countering malicious information by developing multilingual tools to simulate and detect new methods of content moderation evasion.
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
- Beyond Model Interpretability: On the Faithfulness and Adversarial Robustness of Contrastive Textual Explanations [2.543865489517869]
This work motivates textual counterfactuals by laying the ground for a novel evaluation scheme inspired by the faithfulness of explanations.
Experiments on sentiment analysis data show that the connectedness of counterfactuals to their original counterparts is not obvious in either model.
arXiv Detail & Related papers (2022-10-17T09:50:02Z)
- A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning [55.96577490779591]
Vision-language models can encode societal biases and stereotypes.
There are challenges to measuring and mitigating these multimodal harms.
We investigate bias measures and apply ranking metrics for image-text representations.
arXiv Detail & Related papers (2022-03-22T17:59:04Z)
- An Interdisciplinary Approach for the Automated Detection and Visualization of Media Bias in News Articles [0.0]
I aim to devise data sets and methods to identify media bias.
My vision is to devise a system that helps news readers become aware of media coverage differences caused by bias.
arXiv Detail & Related papers (2021-12-26T10:46:32Z)
- Detecting Polarized Topics in COVID-19 News Using Partisanship-aware Contextualized Topic Embeddings [3.9761027576939405]
Growing polarization of the news media has been blamed for fanning disagreement, controversy and even violence.
We propose Partisanship-aware Contextualized Topic Embeddings (PaCTE), a method to automatically detect polarized topics from partisan news sources.
arXiv Detail & Related papers (2021-04-15T23:05:52Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
- A Deep Neural Framework for Contextual Affect Detection [51.378225388679425]
A short and simple text carrying no emotion can represent some strong emotions when reading along with its context.
We propose a Contextual Affect Detection framework which learns the inter-dependence of words in a sentence.
arXiv Detail & Related papers (2020-01-28T05:03:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.