Quantitative Analysis of Forecasting Models: In the Aspect of Online Political Bias
- URL: http://arxiv.org/abs/2309.05589v2
- Date: Tue, 19 Sep 2023 04:55:26 GMT
- Title: Quantitative Analysis of Forecasting Models: In the Aspect of Online Political Bias
- Authors: Srinath Sai Tripuraneni, Sadia Kamal, Arunkumar Bagavathi
- Abstract summary: We propose a heuristic approach to classify social media posts into five distinct political leaning categories.
Our approach involves utilizing existing time series forecasting models on two social media datasets with different political ideologies.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding and mitigating political bias in online social media platforms
are crucial tasks to combat misinformation and echo chamber effects. However,
characterizing political bias temporally using computational methods presents
challenges due to the high frequency of noise in social media datasets. While
existing research has explored various approaches to political bias
characterization, the ability to forecast political bias and anticipate how
political conversations might evolve in the near future has not been
extensively studied. In this paper, we propose a heuristic approach to classify
social media posts into five distinct political leaning categories. Since there
is a lack of prior work on forecasting political bias, we conduct an in-depth
analysis of existing baseline models to identify which model is best suited to
forecasting political leaning time series. Our approach involves utilizing
existing time series forecasting models on two social media datasets with
different political ideologies, specifically Twitter and Gab. Through our
experiments and analyses, we seek to shed light on the challenges and
opportunities in forecasting political bias in social media platforms.
Ultimately, our work aims to pave the way for developing more effective
strategies to mitigate the negative impact of political bias in the digital
realm.
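
The abstract describes a two-stage setup: posts are first mapped to one of five political leaning categories, and existing time series forecasting baselines are then fit to the resulting per-category leaning series. The snippet below is a minimal sketch of that setup, assuming a pandas DataFrame of already-labeled posts and an ARIMA baseline from statsmodels; the column names, category labels, and ARIMA order are illustrative assumptions, not the authors' exact choices.

```python
# Minimal sketch (not the authors' released code): aggregate labeled posts into a
# daily share-per-leaning time series and fit an off-the-shelf ARIMA baseline to
# forecast one leaning's series. Column names and the (2, 1, 1) order are assumed.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

LEANINGS = ["far_left", "left", "center", "right", "far_right"]  # assumed labels

def daily_leaning_shares(posts: pd.DataFrame) -> pd.DataFrame:
    """posts needs a datetime 'timestamp' column and a categorical 'leaning' column."""
    counts = (
        posts.set_index("timestamp")
             .groupby([pd.Grouper(freq="D"), "leaning"])
             .size()
             .unstack(fill_value=0)
             .reindex(columns=LEANINGS, fill_value=0)
    )
    return counts.div(counts.sum(axis=1), axis=0)  # per-day share of each leaning

def forecast_leaning(shares: pd.DataFrame, leaning: str, horizon: int = 7):
    """Fit a simple ARIMA baseline to one leaning's share series and forecast ahead."""
    fitted = ARIMA(shares[leaning], order=(2, 1, 1)).fit()
    return fitted.forecast(steps=horizon)
```

Any of the other baseline forecasters the paper compares could be swapped in at the `ARIMA(...)` line; the aggregation step stays the same.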
Related papers
- Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in AI language models on political decision-making.
We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligning with the AI's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z)
- Uncovering Political Bias in Emotion Inference Models: Implications for sentiment analysis in social science research [0.0]
This paper investigates the presence of political bias in machine learning models used for sentiment analysis (SA) in social science research.
We conducted a bias audit on a Polish sentiment analysis model developed in our lab.
Our findings indicate that annotations by human raters propagate political biases into the model's predictions.
arXiv Detail & Related papers (2024-07-18T20:31:07Z)
- Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z)
- Leveraging Prototypical Representations for Mitigating Social Bias without Demographic Information [50.29934517930506]
DAFair is a novel approach to address social bias in language models.
We leverage prototypical demographic texts and incorporate a regularization term during the fine-tuning process to mitigate bias.
arXiv Detail & Related papers (2024-03-14T15:58:36Z)
- Modeling Political Orientation of Social Media Posts: An Extended Analysis [0.0]
Developing machine learning models to characterize political polarization on online social media presents significant challenges.
These challenges mainly stem from various factors such as the lack of annotated data, presence of noise in social media datasets, and the sheer volume of data.
We introduce two methods that leverage news media bias and post content to label social media posts.
We demonstrate that current machine learning models can exhibit improved performance in predicting political orientation of social media posts.
arXiv Detail & Related papers (2023-11-21T03:34:20Z)
- Inducing Political Bias Allows Language Models Anticipate Partisan Reactions to Controversies [5.958974943807783]
This study addresses the challenge of understanding political bias in digitized discourse using Large Language Models (LLMs).
We present a comprehensive analytical framework, consisting of Partisan Bias Divergence Assessment and Partisan Class Tendency Prediction.
Our findings reveal the model's effectiveness in capturing emotional and moral nuances, albeit with some challenges in stance detection.
arXiv Detail & Related papers (2023-11-16T08:57:53Z)
- Social media polarization reflects shifting political alliances in Pakistan [44.99833362998488]
Spanning from 2018 to 2022, our analysis of Twitter data allows us to capture pivotal shifts and developments in Pakistan's political arena.
By examining interactions and content generated by politicians affiliated with major political parties, we reveal a consistent and active presence of politicians on Twitter.
Our analysis also uncovers significant shifts in political affiliations, including the transition of politicians to the opposition alliance.
arXiv Detail & Related papers (2023-09-15T00:07:48Z)
- Design and analysis of tweet-based election models for the 2021 Mexican legislative election [55.41644538483948]
We use a dataset of 15 million election-related tweets in the six months preceding election day.
We find that models using data with geographical attributes determine the results of the election with better precision and accuracy than conventional polling methods.
arXiv Detail & Related papers (2023-01-02T12:40:05Z)
- NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias [54.89737992911079]
We propose a new task: generating a neutral summary from multiple news headlines across the political spectrum.
One of the most interesting observations is that generation models can hallucinate not only factually inaccurate or unverifiable content, but also politically biased content.
arXiv Detail & Related papers (2022-04-11T07:06:01Z)
- A Machine Learning Pipeline to Examine Political Bias with Congressional Speeches [0.3062386594262859]
We present machine learning approaches to study political bias in two ideologically diverse social media forums: Gab and Twitter.
Our proposed methods use transcripts of political speeches in the US Congress to label the data.
We also present a machine learning approach that combines features from cascades and text to forecast a cascade's political bias with an accuracy of about 85% (see the sketch after this list).
arXiv Detail & Related papers (2021-09-18T21:15:21Z)
- Encoding Heterogeneous Social and Political Context for Entity Stance Prediction [7.477393857078695]
We propose the novel task of entity stance prediction.
We retrieve facts from Wikipedia about social entities regarding contemporary U.S. politics.
We then annotate social entities' stances towards political ideologies with the help of domain experts.
arXiv Detail & Related papers (2021-08-09T08:59:43Z)
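
One of the listed entries, the machine learning pipeline built on congressional speeches, forecasts a cascade's political bias by combining cascade-level and text features. The sketch below is a hypothetical illustration of that kind of combined classifier using scikit-learn; the feature columns, cascade statistics, and model choice are assumptions and not the cited paper's implementation.

```python
# Hedged sketch (not the cited paper's code): combine TF-IDF text features with
# simple numeric cascade features in one scikit-learn pipeline. The column names
# ("text", "cascade_size", "cascade_depth", "share_rate") are illustrative.
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def build_bias_classifier() -> Pipeline:
    features = ColumnTransformer([
        # post/cascade-root text, vectorized with TF-IDF
        ("text", TfidfVectorizer(max_features=20_000, ngram_range=(1, 2)), "text"),
        # numeric cascade statistics, passed through unchanged
        ("cascade", "passthrough", ["cascade_size", "cascade_depth", "share_rate"]),
    ])
    return Pipeline([
        ("features", features),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

# Usage (illustrative): labels would come from a distant-labeling step, e.g.
# matching posts against ideology-labeled congressional speech transcripts.
# model = build_bias_classifier().fit(posts_dataframe, leaning_labels)
```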