Tackling Bias in Pre-trained Language Models: Current Trends and
Under-represented Societies
- URL: http://arxiv.org/abs/2312.01509v1
- Date: Sun, 3 Dec 2023 21:25:10 GMT
- Title: Tackling Bias in Pre-trained Language Models: Current Trends and
Under-represented Societies
- Authors: Vithya Yogarajan, Gillian Dobbie, Te Taka Keegan, Rostam J. Neuwirth
- Abstract summary: This research presents a survey synthesising the current trends and limitations in techniques used for identifying and mitigating bias in language models.
We argue that current practices for tackling the bias problem cannot simply be 'plugged in' to address the needs of under-represented societies.
- Score: 6.831519625084861
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The benefits and capabilities of pre-trained language models (LLMs) in
current and future innovations are vital to any society. However, introducing
and using LLMs brings biases and discrimination, raising concerns about
equality, diversity, and fairness that must be addressed. While
understanding and acknowledging bias in LLMs and developing mitigation
strategies are crucial, generalised assumptions about societal needs can
result in disadvantages for under-represented societies and indigenous
populations. Furthermore, ongoing changes to existing and proposed regulations
and laws worldwide also affect research capabilities for tackling
the bias problem. This research presents a comprehensive survey synthesising
the current trends and limitations in techniques used for identifying and
mitigating bias in LLMs, in which methods for tackling bias are grouped into
metrics, benchmark datasets, and mitigation strategies. The
importance and novelty of this survey lie in its exploration of the perspective
of under-represented societies. We argue that current practices for tackling the bias
problem cannot simply be 'plugged in' to address the needs of under-represented
societies. We use examples from New Zealand to present requirements for
adapting existing techniques to under-represented societies.
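As a concrete illustration of the 'metrics' category mentioned in the abstract, the sketch below shows one common style of bias probe: comparing a masked language model's completions for counterfactual templates that differ only in a demographic term. This is a generic, minimal example and not the surveyed paper's method; the model name, template, and group labels are assumptions chosen purely for illustration.

```python
# Minimal sketch of a template-based (counterfactual) bias probe.
# Illustrative only, not the paper's method; requires the Hugging Face
# `transformers` library. Model, template, and group labels are assumptions.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Counterfactual templates: identical except for the demographic term.
TEMPLATE = "The {group} worker was described as [MASK]."
GROUPS = ["Maori", "Pakeha"]  # hypothetical NZ-relevant group labels

for group in GROUPS:
    predictions = fill_mask(TEMPLATE.format(group=group), top_k=5)
    completions = [(p["token_str"].strip(), round(p["score"], 3)) for p in predictions]
    print(group, completions)

# Systematic differences in the completions (or their probabilities) across
# groups are the kind of signal that bias metrics aim to quantify.
```

Surveyed metrics are typically more elaborate (for example, pseudo-log-likelihood scoring over curated sentence pairs), but the counterfactual comparison above captures the core idea.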
Related papers
- A Comprehensive Survey of Bias in LLMs: Current Landscape and Future Directions [0.0]
Large Language Models (LLMs) have revolutionized various applications in natural language processing (NLP) by providing unprecedented text generation, translation, and comprehension capabilities.
Their widespread deployment has brought to light significant concerns regarding biases embedded within these models.
This paper presents a comprehensive survey of biases in LLMs, aiming to provide an extensive review of the types, sources, impacts, and mitigation strategies related to these biases.
arXiv Detail & Related papers (2024-09-24T19:50:38Z)
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law [65.87885628115946]
Large language models (LLMs) are revolutionizing the landscapes of finance, healthcare, and law.
We highlight the instrumental role of LLMs in enhancing diagnostic and treatment methodologies in healthcare, innovating financial analytics, and refining legal interpretation and compliance strategies.
We critically examine the ethics of LLM applications in these fields, pointing out existing ethical concerns and the need for transparent, fair, and robust AI systems.
arXiv Detail & Related papers (2024-05-02T22:43:02Z)
- Bias and Unfairness in Information Retrieval Systems: New Challenges in the LLM Era [31.199796752545478]
With the rapid advancements of large language models (LLMs), information retrieval systems, such as search engines and recommender systems, have undergone a significant paradigm shift.
arXiv Detail & Related papers (2024-04-17T15:05:03Z)
- Survey of Social Bias in Vision-Language Models [65.44579542312489]
This survey aims to provide researchers with a high-level insight into the similarities and differences of social bias studies in pre-trained models across NLP, CV, and VL.
The findings and recommendations presented here can benefit the ML community, fostering the development of fairer and less biased AI models.
arXiv Detail & Related papers (2023-09-24T15:34:56Z)
- Challenges in Annotating Datasets to Quantify Bias in Under-represented Society [7.9342597513806865]
Benchmark bias datasets have been developed for binary gender classification and ethnic/racial considerations.
Motivated by the lack of annotated datasets for quantifying bias in under-represented societies, we created benchmark datasets for the New Zealand (NZ) population.
This research outlines the manual annotation process, provides an overview of the challenges we encountered and lessons learnt, and presents recommendations for future research.
arXiv Detail & Related papers (2023-09-11T22:24:39Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z)
- Towards Understanding and Mitigating Social Biases in Language Models [107.82654101403264]
Large-scale pretrained language models (LMs) can be potentially dangerous in manifesting undesirable representational biases.
We propose steps towards mitigating social biases during text generation.
Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information.
arXiv Detail & Related papers (2021-06-24T17:52:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.