Connecting the Dots in News Analysis: Bridging the Cross-Disciplinary Disparities in Media Bias and Framing
- URL: http://arxiv.org/abs/2309.08069v2
- Date: Wed, 19 Jun 2024 06:35:13 GMT
- Title: Connecting the Dots in News Analysis: Bridging the Cross-Disciplinary Disparities in Media Bias and Framing
- Authors: Gisela Vallejo, Timothy Baldwin, Lea Frermann
- Abstract summary: We argue that methodologies that are currently dominant fall short of addressing the complex questions and effects addressed in theoretical media studies.
We discuss open questions and suggest possible directions to close identified gaps between theory and predictive models, and their evaluation.
- Score: 34.41723666603066
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The manifestation and effect of bias in news reporting have been central topics in the social sciences for decades, and have received increasing attention in the NLP community recently. While NLP can help to scale up analyses or contribute automatic procedures to investigate the impact of biased news in society, we argue that methodologies that are currently dominant fall short of addressing the complex questions and effects addressed in theoretical media studies. In this survey paper, we review social science approaches and draw a comparison with typical task formulations, methods, and evaluation metrics used in the analysis of media bias in NLP. We discuss open questions and suggest possible directions to close identified gaps between theory and predictive models, and their evaluation. These include model transparency, considering document-external information, and cross-document reasoning rather than single-label assignment.
Related papers
- Intervention strategies for misinformation sharing on social media: A bibliometric analysis [1.8020166013859684]
Inaccurate information shared on social media causes confusion, can adversely affect mental health, and can lead to misinformed decision-making.
This study explores the typology of intervention strategies for addressing misinformation sharing on social media.
It identifies four important clusters of strategies: cognition-based, automated-based, information-based, and hybrid-based.
arXiv Detail & Related papers (2024-09-26T08:38:15Z)
- Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Revise and Resubmit: An Intertextual Model of Text-based Collaboration in Peer Review [52.359007622096684]
Peer review is a key component of the publishing process in most fields of science.
Existing NLP studies focus on the analysis of individual texts, while editorial assistance often requires modeling interactions between pairs of texts.
arXiv Detail & Related papers (2022-04-22T16:39:38Z)
- Who Blames or Endorses Whom? Entity-to-Entity Directed Sentiment Extraction in News Text [4.218255132083181]
We propose a novel NLP task of identifying directed sentiment relationship between political entities from a given news document.
From a million-scale news corpus, we construct a dataset of news sentences where sentiment relations of political entities are manually annotated.
We demonstrate the utility of our proposed method for social science research questions by analyzing positive and negative opinions between political entities in two major events: the 2016 U.S. presidential election and the COVID-19 pandemic.
arXiv Detail & Related papers (2021-06-02T09:02:14Z)
- Situated Data, Situated Systems: A Methodology to Engage with Power Relations in Natural Language Processing Research [18.424211072825308]
We propose a bias-aware methodology to engage with power relations in natural language processing (NLP) research, grounded in an extensive and interdisciplinary literature review.
arXiv Detail & Related papers (2020-11-11T17:04:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.