A Machine Learning Pipeline to Examine Political Bias with Congressional
Speeches
- URL: http://arxiv.org/abs/2109.09014v1
- Date: Sat, 18 Sep 2021 21:15:21 GMT
- Title: A Machine Learning Pipeline to Examine Political Bias with Congressional
Speeches
- Authors: Prasad Hajare, Sadia Kamal, Siddharth Krishnan, and Arunkumar
Bagavathi
- Abstract summary: We present machine learning approaches to study political bias in two ideologically diverse social media forums: Gab and Twitter.
Our proposed methods leverage transcripts of political speeches in the US Congress to label the data.
We also present a machine learning approach that combines cascade and text features to forecast a cascade's political bias with an accuracy of about 85%.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Computational methods to model political bias in social media face several
challenges due to the heterogeneity, high dimensionality, multiple modalities, and
scale of the data. Political bias in social media has been studied from multiple
viewpoints, such as media bias, political ideology, echo chambers, and
controversies, using machine learning pipelines. Most current methods
rely heavily on manually labeled ground-truth data for the underlying
political bias prediction tasks. Limitations of such methods include
human-intensive labeling, labels tied to a single specific problem, and the
inability to determine the near-future bias state of a social media
conversation. In this work, we address these problems and present machine learning
approaches to study political bias in two ideologically diverse social media
forums, Gab and Twitter, without human-annotated data. Our
proposed methods leverage transcripts of political speeches in the US
Congress to label the data, achieving accuracies of 70.5% and 65.1% on
Twitter and Gab data, respectively, in predicting political bias.
We also present a machine learning approach that combines cascade and
text features to forecast a cascade's political bias with an accuracy of
about 85%.
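The labeling idea above, using congressional speech transcripts as a distant source of ground truth, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the vocabularies, weights, and posts are all hypothetical, and the actual pipeline derives far richer features from full speech transcripts and cascade structure.

```python
# Illustrative sketch of distant labeling: score a social media post against
# party-specific vocabularies (here toy stand-ins for term statistics that
# would be extracted from Democratic and Republican congressional speeches).
from collections import Counter

PARTY_TERMS = {
    "left":  Counter({"healthcare": 3, "climate": 2, "equality": 2}),
    "right": Counter({"taxes": 3, "border": 2, "freedom": 2}),
}

def label_post(text: str) -> str:
    """Assign a weak political-bias label by weighted vocabulary overlap."""
    tokens = Counter(text.lower().split())
    scores = {
        party: sum(tokens[w] * weight for w, weight in vocab.items())
        for party, vocab in PARTY_TERMS.items()
    }
    return max(scores, key=scores.get)

posts = [
    "We need universal healthcare and climate action",
    "Lower taxes and secure the border",
]
print([label_post(p) for p in posts])  # ['left', 'right']
```

The weak labels produced this way would then supervise a downstream classifier over text and cascade features, as the abstract describes.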
Related papers
- Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in AI language models on political decision-making.
We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligning with the AI's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z) - Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z) - Modeling Political Orientation of Social Media Posts: An Extended
Analysis [0.0]
Developing machine learning models to characterize political polarization on online social media presents significant challenges.
These challenges mainly stem from various factors such as the lack of annotated data, presence of noise in social media datasets, and the sheer volume of data.
We introduce two methods that leverage news media bias and post content to label social media posts.
We demonstrate that current machine learning models can exhibit improved performance in predicting political orientation of social media posts.
arXiv Detail & Related papers (2023-11-21T03:34:20Z) - Learning Unbiased News Article Representations: A Knowledge-Infused
Approach [0.0]
We propose a knowledge-infused deep learning model that learns unbiased representations of news articles using global and local contexts.
We show that the proposed model mitigates algorithmic political bias and outperforms baseline methods to predict the political leaning of news articles with up to 73% accuracy.
arXiv Detail & Related papers (2023-09-12T06:20:34Z) - Quantitative Analysis of Forecasting Models: In the Aspect of Online
Political Bias [0.0]
We propose an approach to classify social media posts into five distinct political-leaning categories.
Our approach involves utilizing existing time series forecasting models on two social media datasets with different political ideologies.
arXiv Detail & Related papers (2023-09-11T16:17:24Z) - Bias or Diversity? Unraveling Fine-Grained Thematic Discrepancy in U.S.
News Headlines [63.52264764099532]
We use a large dataset of 1.8 million news headlines from major U.S. media outlets spanning from 2014 to 2022.
We quantify the fine-grained thematic discrepancy related to four prominent topics - domestic politics, economic issues, social issues, and foreign affairs.
Our findings indicate that on domestic politics and social issues, the discrepancy can be attributed to a certain degree of media bias.
arXiv Detail & Related papers (2023-03-28T03:31:37Z) - Unveiling the Hidden Agenda: Biases in News Reporting and Consumption [59.55900146668931]
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z) - No Word Embedding Model Is Perfect: Evaluating the Representation
Accuracy for Social Bias in the Media [17.4812995898078]
We study what kind of embedding algorithm serves best to accurately measure types of social bias known to exist in US online news articles.
We collect 500k articles and review psychology literature with respect to expected social bias.
We compare how models trained with the algorithms on news articles represent the expected social bias.
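One standard way to compare how embedding models represent a known social bias is a WEAT-style association test. The sketch below uses hypothetical toy vectors for illustration; the paper's actual evaluation trains embeddings on its 500k-article corpus, which is not reproduced here.

```python
# WEAT-style association sketch: how much more similar is a target word's
# vector to one attribute set than to another? Vectors here are toy 3-d
# examples, not trained embeddings.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word_vec, attrs_a, attrs_b):
    """Mean cosine similarity to attribute set A minus attribute set B."""
    mean_a = sum(cosine(word_vec, v) for v in attrs_a) / len(attrs_a)
    mean_b = sum(cosine(word_vec, v) for v in attrs_b) / len(attrs_b)
    return mean_a - mean_b

# Toy embeddings in which "career" leans toward the male-attribute vectors.
career = [0.9, 0.1, 0.0]
male_attrs = [[1.0, 0.0, 0.0], [0.8, 0.2, 0.0]]
female_attrs = [[0.0, 1.0, 0.0], [0.1, 0.9, 0.0]]

bias = association(career, male_attrs, female_attrs)
print(bias > 0)  # positive association => "career" leans toward male terms
```

Comparing this score across embedding algorithms trained on the same corpus is one way to ask which algorithm most faithfully reflects the biases documented in the psychology literature.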
arXiv Detail & Related papers (2022-11-07T15:45:52Z) - NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias [54.89737992911079]
We propose a new task: generating a neutral summary from multiple news headlines spanning the political spectrum.
One of the most interesting observations is that generation models can hallucinate not only factually inaccurate or unverifiable content, but also politically biased content.
arXiv Detail & Related papers (2022-04-11T07:06:01Z) - Inferring Political Preferences from Twitter [0.0]
Political sentiment analysis of social media helps political strategists scrutinize the performance of a party or candidate.
During elections, social networks are flooded with blogs, chats, debates, and discussions about the prospects of political parties and politicians.
In this work, we identify the political inclination of opinions expressed in tweets by modeling the task as a text classification problem using classical machine learning.
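A minimal sketch of such a classical text-classification setup is given below, here a tiny multinomial Naive Bayes with Laplace smoothing; the specific classifier and the example tweets are assumptions for illustration, not the paper's exact setup.

```python
# Toy multinomial Naive Bayes for political-leaning classification of tweets.
# Training data and labels are hypothetical examples.
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (tokens, label). Returns label counts, word counts, vocab."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for tokens, label in docs:
        label_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return label_counts, word_counts, vocab

def predict(tokens, label_counts, word_counts, vocab):
    total_docs = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label, n in label_counts.items():
        lp = math.log(n / total_docs)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokens:
            lp += math.log((word_counts[label][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

train_docs = [
    ("vote for healthcare reform now".split(), "left"),
    ("cut taxes grow the economy".split(), "right"),
]
model = train(train_docs)
print(predict("healthcare reform".split(), *model))  # prints: left
```

Real pipelines replace the toy counts with TF-IDF features and benchmark several classical models, but the probabilistic scoring shown here is the core of the approach.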
arXiv Detail & Related papers (2020-07-21T05:20:43Z) - Measuring Social Biases of Crowd Workers using Counterfactual Queries [84.10721065676913]
Social biases based on gender, race, and other attributes have been shown to pollute machine learning (ML) pipelines, predominantly via biased training datasets.
Crowdsourcing, a popular cost-effective measure to gather labeled training datasets, is not immune to the inherent social biases of crowd workers.
We propose a new method based on counterfactual fairness to quantify the degree of inherent social bias in each crowd worker.
arXiv Detail & Related papers (2020-04-04T21:41:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.