Situated Data, Situated Systems: A Methodology to Engage with Power
Relations in Natural Language Processing Research
- URL: http://arxiv.org/abs/2011.05911v1
- Date: Wed, 11 Nov 2020 17:04:55 GMT
- Title: Situated Data, Situated Systems: A Methodology to Engage with Power
Relations in Natural Language Processing Research
- Authors: Lucy Havens, Melissa Terras, Benjamin Bach, Beatrice Alex
- Abstract summary: We propose a bias-aware methodology to engage with power relations in natural language processing (NLP) research, contributed after an extensive and interdisciplinary literature review.
- Score: 18.424211072825308
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a bias-aware methodology to engage with power relations in natural
language processing (NLP) research. NLP research rarely engages with bias in
social contexts, limiting its ability to mitigate bias. While researchers have
recommended actions, technical methods, and documentation practices, no
methodology exists to integrate critical reflections on bias with technical NLP
methods. In this paper, after an extensive and interdisciplinary literature
review, we contribute a bias-aware methodology for NLP research. We also
contribute a definition of biased text, a discussion of the implications of
biased NLP systems, and a case study demonstrating how we are executing the
bias-aware methodology in research on archival metadata descriptions.
Related papers
- Exploring the Jungle of Bias: Political Bias Attribution in Language Models via Dependency Analysis [86.49858739347412]
Large Language Models (LLMs) have sparked intense debate regarding the prevalence of bias in these models and its mitigation.
We propose a prompt-based method for the extraction of confounding and mediating attributes which contribute to the decision process.
We find that the observed disparate treatment can at least in part be attributed to confounding and mediating attributes and model misalignment.
arXiv Detail & Related papers (2023-11-15T00:02:25Z) - Connecting the Dots in News Analysis: Bridging the Cross-Disciplinary Disparities in Media Bias and Framing [34.41723666603066]
We argue that methodologies that are currently dominant fall short of addressing the complex questions and effects addressed in theoretical media studies.
We discuss open questions and suggest possible directions to close identified gaps between theory and predictive models, and their evaluation.
arXiv Detail & Related papers (2023-09-14T23:57:55Z) - Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z) - Unmasking Nationality Bias: A Study of Human Perception of Nationalities
in AI-Generated Articles [10.8637226966191]
We investigate the potential for nationality biases in natural language processing (NLP) models using human evaluation methods.
Our study employs a two-step mixed-methods approach to identify and understand the impact of nationality bias in a text generation model.
Our findings reveal that biased NLP models tend to replicate and amplify existing societal biases, which can translate to harm if used in a sociotechnical setting.
arXiv Detail & Related papers (2023-08-08T15:46:27Z) - Application of Transformers based methods in Electronic Medical Records:
A Systematic Literature Review [77.34726150561087]
This work presents a systematic literature review of state-of-the-art advances using transformer-based methods on electronic medical records (EMRs) in different NLP tasks.
arXiv Detail & Related papers (2023-04-05T22:19:42Z) - Fair Enough: Standardizing Evaluation and Model Selection for Fairness
Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z) - Investigating Fairness Disparities in Peer Review: A Language Model
Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z) - Meta Learning for Natural Language Processing: A Survey [88.58260839196019]
Deep learning has been the mainstream technique in the natural language processing (NLP) area.
Deep learning requires large amounts of labeled data and is less generalizable across domains.
Meta-learning is an emerging field in machine learning that studies approaches for learning better algorithms.
arXiv Detail & Related papers (2022-05-03T13:58:38Z) - Word Embeddings via Causal Inference: Gender Bias Reducing and Semantic
Information Preserving [3.114945725130788]
We propose a novel methodology that leverages a causal inference framework to effectively remove gender bias.
Our comprehensive experiments show that the proposed method achieves state-of-the-art results in gender-debiasing tasks.
arXiv Detail & Related papers (2021-12-09T19:57:22Z) - Language (Technology) is Power: A Critical Survey of "Bias" in NLP [11.221552724154986]
We survey 146 papers analyzing "bias" in NLP systems.
We find that their motivations are vague, inconsistent, and lacking in normative reasoning.
We propose three recommendations that should guide work analyzing "bias" in NLP systems.
arXiv Detail & Related papers (2020-05-28T14:32:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.