Detecting Political Biases of Named Entities and Hashtags on Twitter
- URL: http://arxiv.org/abs/2209.08110v1
- Date: Fri, 16 Sep 2022 18:00:13 GMT
- Title: Detecting Political Biases of Named Entities and Hashtags on Twitter
- Authors: Zhiping Xiao and Jeffrey Zhu and Yining Wang and Pei Zhou and Wen Hong Lam and Mason A. Porter and Yizhou Sun
- Abstract summary: Ideological divisions in the United States have become increasingly prominent in daily communication.
By detecting political biases in a corpus of text, one can attempt to describe and discern the polarity of that text.
We propose the Polarity-aware Embedding Multi-task learning model.
- Score: 28.02430167720734
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ideological divisions in the United States have become increasingly prominent
in daily communication. Accordingly, there has been much research on political
polarization, including many recent efforts that take a computational
perspective. By detecting political biases in a corpus of text, one can attempt
to describe and discern the polarity of that text. Intuitively, the named
entities (i.e., the nouns and phrases that act as nouns) and hashtags in text
often carry information about political views. For example, people who use the
term "pro-choice" are likely to be liberal, whereas people who use the term
"pro-life" are likely to be conservative. In this paper, we seek to reveal
political polarities in social-media text data and to quantify these polarities
by explicitly assigning a polarity score to entities and hashtags. Although
this idea is straightforward, it is difficult to perform such inference in a
trustworthy quantitative way. Key challenges include the small number of known
labels, the continuous spectrum of political views, and the preservation of
both a polarity score and a polarity-neutral semantic meaning in an embedding
vector of words. To attempt to overcome these challenges, we propose the
Polarity-aware Embedding Multi-task learning (PEM) model. This model consists
of (1) a self-supervised context-preservation task, (2) an attention-based
tweet-level polarity-inference task, and (3) an adversarial learning task that
promotes independence between an embedding's polarity dimension and its
semantic dimensions. Our experimental results demonstrate that our PEM model
can successfully learn polarity-aware embeddings. We examine a variety of
applications and we thereby demonstrate the effectiveness of our PEM model. We
also discuss important limitations of our work and stress caution when applying
the PEM model to real-world scenarios.
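
The abstract describes PEM as a combination of three training signals: a self-supervised context-preservation objective, an attention-based tweet-level polarity-inference objective, and an adversarial objective that decouples an embedding's polarity dimension from its semantic dimensions. The sketch below is a minimal, hypothetical PyTorch illustration of how such a multi-task setup could be wired together; it is based only on the abstract, and all module names, dimensions, and loss choices (e.g. `PEMSketch`, `sem_dim`, the skip-gram stand-in) are assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of the three PEM training signals described in the abstract.
# Not the authors' code; names, dimensions, and losses are illustrative assumptions.
import torch
import torch.nn as nn


class PEMSketch(nn.Module):
    def __init__(self, vocab_size: int, sem_dim: int = 100):
        super().__init__()
        # Each word embedding = [semantic dimensions | one polarity dimension].
        self.embed = nn.Embedding(vocab_size, sem_dim + 1)
        self.context_out = nn.Linear(sem_dim + 1, vocab_size)    # (1) context preservation
        self.attn = nn.Linear(sem_dim + 1, 1)                    # (2) attention over tokens
        self.discriminator = nn.Sequential(                      # (3) adversary: recover polarity
            nn.Linear(sem_dim, 16), nn.ReLU(), nn.Linear(16, 1)  #     from semantic dims only
        )

    def forward(self, tokens, polarity_label):
        emb = self.embed(tokens)                                  # (batch, seq, sem_dim + 1)

        # (1) Self-supervised context preservation: predict tokens from their embeddings
        #     (a simple stand-in for the skip-gram-style objective implied by the abstract).
        ctx_logits = self.context_out(emb)
        loss_ctx = nn.functional.cross_entropy(
            ctx_logits.view(-1, ctx_logits.size(-1)), tokens.view(-1)
        )

        # (2) Attention-based tweet-level polarity inference: pool the polarity dimension
        #     with learned attention weights and match the tweet-level label.
        weights = torch.softmax(self.attn(emb).squeeze(-1), dim=-1)   # (batch, seq)
        tweet_polarity = (weights * emb[..., -1]).sum(dim=-1)         # one score per tweet
        loss_pol = nn.functional.binary_cross_entropy_with_logits(
            tweet_polarity, polarity_label.float()
        )

        # (3) Adversarial independence: a discriminator tries to recover polarity from the
        #     semantic dimensions alone; the embeddings are trained to defeat it.
        sem_only = emb[..., :-1].mean(dim=1)
        adv_logits = self.discriminator(sem_only).squeeze(-1)
        loss_adv = nn.functional.binary_cross_entropy_with_logits(
            adv_logits, polarity_label.float()
        )

        # A plausible training scheme: the embeddings minimize loss_ctx + loss_pol - loss_adv
        # (e.g. via a gradient-reversal layer), while the discriminator minimizes loss_adv.
        return loss_ctx, loss_pol, loss_adv
```

Under this reading, the polarity score of an entity or hashtag would simply be the polarity dimension of its learned embedding, while the remaining dimensions keep a polarity-neutral semantic meaning; the adversarial task is what discourages polarity information from leaking into those semantic dimensions.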
Related papers
- Large Language Models Reflect the Ideology of their Creators [73.25935570218375]
Large language models (LLMs) are trained on vast amounts of data to generate natural language.
We uncover notable diversity in the ideological stance exhibited across different LLMs and languages.
arXiv Detail & Related papers (2024-10-24T04:02:30Z) - Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z) - Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries and should exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z) - Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z) - Inducing Political Bias Allows Language Models Anticipate Partisan
Reactions to Controversies [5.958974943807783]
This study addresses the challenge of understanding political bias in digitized discourse using Large Language Models (LLMs).
We present a comprehensive analytical framework, consisting of Partisan Bias Divergence Assessment and Partisan Class Tendency Prediction.
Our findings reveal the model's effectiveness in capturing emotional and moral nuances, albeit with some challenges in stance detection.
arXiv Detail & Related papers (2023-11-16T08:57:53Z) - Dialectograms: Machine Learning Differences between Discursive
Communities [0.0]
We take a step towards leveraging the richness of the full embedding space by using word embeddings to map out how words are used differently.
We provide a new measure of the degree to which words are used differently that overcomes the tendency of existing measures to pick out low-frequency or polysemous words.
arXiv Detail & Related papers (2023-02-11T11:32:08Z) - NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias [54.89737992911079]
We propose a new task: generating a neutral summary from multiple news headlines that span the political spectrum.
One of the most interesting observations is that generation models can hallucinate not only factually inaccurate or unverifiable content, but also politically biased content.
arXiv Detail & Related papers (2022-04-11T07:06:01Z) - Millions of Co-purchases and Reviews Reveal the Spread of Polarization
and Lifestyle Politics across Online Markets [68.8204255655161]
We study the pervasiveness of polarization and lifestyle politics over different product segments in a diverse market.
We sample 234.6 million relations among 21.8 million market entities to find product categories that are politically relevant, aligned, and polarized.
Cultural products are 4 times more polarized than any other segment.
arXiv Detail & Related papers (2022-01-17T18:16:37Z) - Detecting Polarized Topics in COVID-19 News Using Partisanship-aware
Contextualized Topic Embeddings [3.9761027576939405]
Growing polarization of the news media has been blamed for fanning disagreement, controversy and even violence.
We propose Partisanship-aware Contextualized Topic Embeddings (PaCTE), a method to automatically detect polarized topics from partisan news sources.
arXiv Detail & Related papers (2021-04-15T23:05:52Z) - Exploring Polarization of Users Behavior on Twitter During the 2019
South American Protests [15.065938163384235]
We explore polarization on Twitter in a different context, namely the protests that paralyzed several countries in South America in 2019.
By leveraging users' endorsement of politicians' tweets and hashtag campaigns with defined stances towards the protest (for or against), we construct a weakly labeled stance dataset with millions of users.
We find empirical evidence of the "filter bubble" phenomenon during the event: not only are the user bases homogeneous in terms of stance, but the probability that a user transitions between media of different clusters is low.
arXiv Detail & Related papers (2021-04-05T07:13:18Z) - Political Depolarization of News Articles Using Attribute-aware Word
Embeddings [7.411577497708497]
Political polarization in the US is on the rise.
This polarization negatively affects the public sphere by contributing to the creation of ideological echo chambers.
We introduce a framework for depolarizing news articles.
arXiv Detail & Related papers (2021-01-05T07:39:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all of its content) and is not responsible for any consequences.