Balancing Transparency and Accuracy: A Comparative Analysis of Rule-Based and Deep Learning Models in Political Bias Classification
- URL: http://arxiv.org/abs/2411.04328v1
- Date: Thu, 07 Nov 2024 00:09:18 GMT
- Title: Balancing Transparency and Accuracy: A Comparative Analysis of Rule-Based and Deep Learning Models in Political Bias Classification
- Authors: Manuel Nunez Martinez, Sonja Schmer-Galunder, Zoey Liu, Sangpil Youm, Chathuri Jayaweera, Bonnie J. Dorr
- Abstract summary: The study highlights the sensitivity of modern self-learning systems to unconstrained data ingestion.
Applying both models to left-leaning (CNN) and right-leaning (FOX) news articles, we assess their effectiveness on data beyond the original training and test sets.
We contrast the opaque architecture of a deep learning model with the transparency of a linguistically informed rule-based model.
- Score: 5.550237524713089
- Abstract: The unchecked spread of digital information, combined with increasing political polarization and the tendency of individuals to isolate themselves from opposing political viewpoints, has driven researchers to develop systems for automatically detecting political bias in media. This trend has been further fueled by discussions on social media. We explore methods for categorizing bias in US news articles, comparing rule-based and deep learning approaches. The study highlights the sensitivity of modern self-learning systems to unconstrained data ingestion, while reconsidering the strengths of traditional rule-based systems. Applying both models to left-leaning (CNN) and right-leaning (FOX) news articles, we assess their effectiveness on data beyond the original training and test sets. This analysis highlights each model's accuracy, offers a framework for exploring deep-learning explainability, and sheds light on political bias in US news media. We contrast the opaque architecture of a deep learning model with the transparency of a linguistically informed rule-based model, showing that the rule-based model performs consistently across different data conditions and offers greater transparency, whereas the deep learning model is dependent on the training set and struggles with unseen data.
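The abstract contrasts a transparent, linguistically informed rule-based model with an opaque deep learning model and evaluates both on articles beyond the original training and test sets. The paper's actual models and features are not reproduced here; the sketch below only illustrates that evaluation setup, using a hypothetical cue lexicon as the rule-based side and scikit-learn's MLPClassifier over TF-IDF features as a stand-in for the deep model, with placeholder articles and labels.

```python
# Minimal sketch (not the paper's models): contrast a transparent lexicon rule
# with a small learned text classifier, then compare both on articles drawn
# from outside the training distribution (e.g., newly collected CNN/FOX pieces).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Hypothetical cue lexicon; a real rule-based system encodes far richer
# linguistic patterns, which is what makes its decisions inspectable.
LEFT_CUES = {"progressive", "climate crisis", "social justice"}
RIGHT_CUES = {"patriot", "border crisis", "traditional values"}

def rule_based_label(text: str) -> str:
    """Transparent rule: count cue phrases and report which side dominates."""
    t = text.lower()
    left = sum(t.count(c) for c in LEFT_CUES)
    right = sum(t.count(c) for c in RIGHT_CUES)
    return "left" if left >= right else "right"

# Placeholder corpora; in the study these would be labeled news articles.
train_texts = ["progressive lawmakers push climate crisis legislation",
               "patriot groups rally around traditional values"]
train_labels = ["left", "right"]
unseen_texts = ["editorial warns of a worsening border crisis",
                "op-ed ties social justice to progressive reform"]
unseen_labels = ["right", "left"]

# Opaque learned model: TF-IDF features feeding a small neural network.
vec = TfidfVectorizer()
X_train = vec.fit_transform(train_texts)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(X_train, train_labels)

# Evaluate both models on data beyond the original training and test sets.
rule_preds = [rule_based_label(t) for t in unseen_texts]
mlp_preds = mlp.predict(vec.transform(unseen_texts))
print("rule-based accuracy on unseen data:", accuracy_score(unseen_labels, rule_preds))
print("learned-model accuracy on unseen data:", accuracy_score(unseen_labels, mlp_preds))
```

In a setup like this, the rule-based side stays inspectable (every decision traces back to a counted cue), while the learned model's behavior on unseen articles depends on how well the training distribution covered them, which is the trade-off the paper examines.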
Related papers
- On the Relationship between Truth and Political Bias in Language Models [22.57096615768638]
We focus on analyzing the relationship between two concepts essential in both language model alignment and political science: truthfulness and political bias.
We train reward models on various popular truthfulness datasets and evaluate their political bias.
Our findings reveal that optimizing reward models for truthfulness on these datasets tends to result in a left-leaning political bias.
arXiv Detail & Related papers (2024-09-09T02:28:53Z) - High Risk of Political Bias in Black Box Emotion Inference Models [0.0]
This paper investigates the presence of political bias in machine learning models used for sentiment analysis (SA) in social science research.
We conducted a bias audit on a Polish sentiment analysis model developed in our lab.
Our findings indicate that annotations by human raters propagate political biases into the model's predictions.
arXiv Detail & Related papers (2024-07-18T20:31:07Z) - Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z) - Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z) - Learning Unbiased News Article Representations: A Knowledge-Infused Approach [0.0]
We propose a knowledge-infused deep learning model that learns unbiased representations of news articles using global and local contexts.
We show that the proposed model mitigates algorithmic political bias and outperforms baseline methods to predict the political leaning of news articles with up to 73% accuracy.
arXiv Detail & Related papers (2023-09-12T06:20:34Z) - Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data (a minimal reweighing sketch appears after this list).
arXiv Detail & Related papers (2023-03-30T17:30:42Z) - Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models (a sketch of the projection idea appears after this list).
arXiv Detail & Related papers (2023-01-31T20:09:33Z) - Mitigating Political Bias in Language Models Through Reinforced Calibration [6.964628305312507]
We describe metrics for measuring political bias in GPT-2 generation.
We propose a reinforcement learning (RL) framework for mitigating political biases in generated text.
arXiv Detail & Related papers (2021-04-30T07:21:30Z) - Representative & Fair Synthetic Data [68.8204255655161]
We present a framework to incorporate fairness constraints into the self-supervised learning process.
We generate a representative as well as fair version of the UCI Adult census data set.
We consider representative & fair synthetic data a promising future building block to teach algorithms not on historic worlds, but rather on the worlds that we strive to live in.
arXiv Detail & Related papers (2021-04-07T09:19:46Z) - Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset (a generic adversarial-training sketch appears after this list).
arXiv Detail & Related papers (2021-03-07T03:10:32Z) - Inflating Topic Relevance with Ideology: A Case Study of Political Ideology Bias in Social Topic Detection Models [16.279854003220418]
We investigate the impact of political ideology biases in training data.
Our work highlights the susceptibility of large, complex models to propagating the biases from human-selected input.
As a way to mitigate the bias, we propose to learn a text representation that is invariant to political ideology while still judging topic relevance.
arXiv Detail & Related papers (2020-11-29T05:54:03Z)
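The "Non-Invasive Fairness in Learning through the Lens of Data Drift" entry above resolves drift with model-splitting and reweighing. The paper's exact weighting scheme is not reproduced here; the sketch below shows classic group-label reweighing (in the spirit of Kamiran and Calders) as one assumed form such weights could take, with a hypothetical pandas dataset and column names.

```python
# Sketch of classic group-label reweighing (an assumption; the cited paper's
# own scheme may differ): weight each (group, label) cell so that the
# sensitive group and the label look statistically independent to the learner.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical tabular data with a binary label and a sensitive "group" column.
df = pd.DataFrame({
    "x1":    [0.2, 1.5, 0.7, 2.1, 0.3, 1.8, 0.9, 2.4],
    "x2":    [1.0, 0.1, 0.8, 0.2, 1.1, 0.3, 0.7, 0.1],
    "group": ["a", "a", "a", "a", "b", "b", "b", "b"],
    "label": [0,   1,   0,   1,   0,   0,   0,   1],
})

weights = pd.Series(1.0, index=df.index)
for g in df["group"].unique():
    for y in df["label"].unique():
        cell = (df["group"] == g) & (df["label"] == y)
        observed = cell.mean()                       # P(group=g, label=y)
        expected = (df["group"] == g).mean() * (df["label"] == y).mean()
        if observed > 0:
            weights[cell] = expected / observed      # rebalance the cell

# Train an unmodified, off-the-shelf model with the weights: neither the data
# values nor the learning algorithm changes, matching the non-invasive framing.
model = LogisticRegression().fit(df[["x1", "x2"]], df["label"], sample_weight=weights)
print(model.predict(df[["x1", "x2"]]))
```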
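The "Debiasing Vision-Language Models via Biased Prompts" entry above projects out biased directions in the text embedding. The sketch below shows only the generic orthogonal-projection idea, not the paper's calibrated projection matrix; the `embed` function and the prompt pairs are placeholders.

```python
# Sketch of projection-based debiasing of text embeddings (general idea only;
# the cited paper calibrates the projection matrix rather than using this
# plain orthogonal projection).
import numpy as np

def embed(texts):
    """Placeholder for a real text encoder (e.g., the text tower of a
    vision-language model); returns one deterministic vector per string."""
    out = []
    for t in texts:
        rng = np.random.default_rng(abs(hash(t)) % (2**32))
        out.append(rng.normal(size=512))
    return np.stack(out)

# Contrasting prompt pairs whose embedding differences span the bias subspace.
biased_pairs = [("a photo of a man", "a photo of a woman"),
                ("a photo of a young person", "a photo of an old person")]
diffs = np.stack([embed([a])[0] - embed([b])[0] for a, b in biased_pairs])

# Orthonormal basis of the bias subspace via SVD, then P = I - V^T V projects
# any embedding onto the complement of that subspace.
_, _, vt = np.linalg.svd(diffs, full_matrices=False)
P = np.eye(vt.shape[1]) - vt.T @ vt

# Remove the biased directions from the class-prompt embeddings before use.
class_embeddings = embed(["a photo of a doctor", "a photo of a nurse"])
debiased = class_embeddings @ P
print(debiased.shape)  # embeddings keep their dimensionality, minus bias directions
```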
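The "Estimating and Improving Fairness with Adversarial Learning" entry above adds a discrimination module and a critical module to the base classifier. The PyTorch sketch below shows only the generic adversarial pattern (a shared encoder trained to solve its task while a separately updated discriminator tries to recover the sensitive attribute); it is not the paper's medical-imaging architecture, and all tensors and shapes are placeholders.

```python
# Generic adversarial-debiasing sketch (not the cited paper's medical-imaging
# architecture): the encoder learns the task while hiding the sensitive
# attribute from a separately trained discriminator head.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(256, 16)                   # placeholder input features
y = torch.randint(0, 2, (256, 1)).float()  # task labels
s = torch.randint(0, 2, (256, 1)).float()  # sensitive attribute

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
task_head = nn.Linear(32, 1)               # predicts y from shared features
discriminator = nn.Linear(32, 1)           # tries to predict s from the same features

bce = nn.BCEWithLogitsLoss()
opt_main = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
lam = 0.5                                  # strength of the fairness penalty

for step in range(200):
    # 1) Update the discriminator on frozen (detached) features.
    z = encoder(x).detach()
    disc_loss = bce(discriminator(z), s)
    opt_disc.zero_grad(); disc_loss.backward(); opt_disc.step()

    # 2) Update encoder + task head: solve the task while fooling the discriminator.
    z = encoder(x)
    task_loss = bce(task_head(z), y)
    adv_loss = bce(discriminator(z), s)     # low only if s is still recoverable
    main_loss = task_loss - lam * adv_loss  # trade accuracy against attribute leakage
    opt_main.zero_grad(); main_loss.backward(); opt_main.step()
```

A common refinement replaces the negated discriminator loss with a confusion objective that pushes the discriminator's predictions toward chance, which keeps the encoder from simply inverting the sensitive attribute.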