Towards Auditing Large Language Models: Improving Text-based Stereotype Detection
- URL: http://arxiv.org/abs/2311.14126v1
- Date: Thu, 23 Nov 2023 17:47:14 GMT
- Title: Towards Auditing Large Language Models: Improving Text-based Stereotype Detection
- Authors: Wu Zekun, Sahan Bulathwela, Adriano Soares Koshiyama
- Abstract summary: This work introduces i) the Multi-Grain Stereotype dataset, which includes 52,751 instances of gender, race, profession and religion stereotypic text, and ii) a novel stereotype classifier for English text.
We design several experiments to rigorously test the proposed model trained on the novel dataset.
Experiments show that training the model in a multi-class setting can outperform the one-vs-all binary counterpart.
- Score: 5.3634450268516565
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large Language Models (LLMs) have made significant advances in the
recent past, becoming more mainstream in Artificial Intelligence (AI) enabled
human-facing applications. However, LLMs often generate stereotypical output
inherited from historical data, amplifying societal biases and raising ethical
concerns. This work introduces i) the Multi-Grain Stereotype Dataset, which
includes 52,751 instances of gender, race, profession and religion stereotypic
text and ii) a novel stereotype classifier for English text. We design several
experiments to rigorously test the proposed model trained on the novel dataset.
Our experiments show that training the model in a multi-class setting can
outperform the one-vs-all binary counterpart. Consistent feature importance
signals from different eXplainable AI tools demonstrate that the new model
exploits relevant text features. We utilise the newly created model to assess
the stereotypic behaviour of the popular GPT family of models and observe a
reduction of bias over time. In summary, our work establishes a robust and
practical framework for auditing and evaluating stereotypic bias in LLMs.
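The multi-class versus one-vs-all comparison at the heart of these experiments can be illustrated with a minimal sketch. This is not the authors' released code: TF-IDF plus logistic regression stands in for their classifier, and the toy sentences and label set are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; real training would use the 52,751-instance
# Multi-Grain Stereotype Dataset.
texts = [
    "women are too emotional for leadership roles",
    "men never ask for directions",
    "all members of that faith are extremists",
    "people of that race are naturally athletic",
    "nurses are always women",
    "the weather was pleasant this afternoon",
]
labels = ["gender", "gender", "religion", "race", "profession", "unrelated"]

# (a) Multi-class: one model with a joint softmax over all categories.
multiclass = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
multiclass.fit(texts, labels)

# (b) One-vs-all: an independent binary classifier per category.
one_vs_all = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
one_vs_all.fit(texts, labels)

probe = ["firefighters are always men"]
print(multiclass.predict(probe), one_vs_all.predict(probe))
```

One intuition for the finding: the multi-class model shares features and normalises probabilities jointly across categories, whereas one-vs-all trains each binary classifier in isolation and can produce miscalibrated, conflicting decisions.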
Related papers
- REFINE-LM: Mitigating Language Model Stereotypes via Reinforcement Learning [18.064064773660174]
We introduce REFINE-LM, a debiasing method that uses reinforcement learning to handle different types of biases without any fine-tuning.
By training a simple model on top of the word probability distribution of an LM, our bias reinforcement learning method enables model debiasing without human annotations.
Experiments conducted on a wide range of models, including several LMs, show that our method significantly reduces stereotypical biases while preserving language model performance.
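A hedged reading of that description — a simple trainable model over the LM's next-token distribution, updated by reinforcement learning — might look like the sketch below. This is an assumption-laden illustration, not REFINE-LM's implementation; in particular, `bias_reward` is a placeholder for whatever annotation-free bias signal the method actually derives.

```python
import torch
import torch.nn as nn

VOCAB = 50257  # e.g. the GPT-2 vocabulary size

class DebiasHead(nn.Module):
    """Learns an additive correction to a frozen LM's output logits."""
    def __init__(self, vocab_size: int):
        super().__init__()
        self.correction = nn.Parameter(torch.zeros(vocab_size))

    def forward(self, lm_logits: torch.Tensor) -> torch.Tensor:
        return torch.log_softmax(lm_logits + self.correction, dim=-1)

def bias_reward(token_id: int) -> float:
    # Placeholder: a real reward would score sampled continuations for
    # stereotype content, e.g. probability gaps across demographic groups.
    return 0.0

head = DebiasHead(VOCAB)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

lm_logits = torch.randn(VOCAB)  # stand-in for frozen LM output at one step
log_probs = head(lm_logits)
token = torch.distributions.Categorical(logits=log_probs).sample()
loss = -bias_reward(int(token)) * log_probs[token]  # REINFORCE-style update
opt.zero_grad()
loss.backward()
opt.step()
```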
arXiv Detail & Related papers (2024-08-18T14:08:31Z)
- Spoken Stereoset: On Evaluating Social Bias Toward Speaker in Speech Large Language Models [50.40276881893513]
This study introduces Spoken Stereoset, a dataset specifically designed to evaluate social biases in Speech Large Language Models (SLLMs).
By examining how different models respond to speech from diverse demographic groups, we aim to identify these biases.
The findings indicate that while most models show minimal bias, some still exhibit slightly stereotypical or anti-stereotypical tendencies.
arXiv Detail & Related papers (2024-08-14T16:55:06Z)
- Hire Me or Not? Examining Language Model's Behavior with Occupation Attributes [7.718858707298602]
Large language models (LLMs) have been widely integrated into production pipelines, like recruitment and recommendation systems.
This paper investigates LLMs' behavior with respect to gender stereotypes, in the context of occupation decision making.
arXiv Detail & Related papers (2024-05-06T18:09:32Z)
- Auditing Large Language Models for Enhanced Text-Based Stereotype Detection and Probing-Based Bias Evaluation [4.908389661988191]
This work introduces the Multi-Grain Stereotype dataset, encompassing 51,867 instances across gender, race, profession, religion, and stereotypical text.
We explore different machine learning approaches aimed at establishing baselines for stereotype detection.
We develop a series of stereotype elicitation prompts and evaluate the presence of stereotypes in text generation tasks with popular Large Language Models.
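A probing loop of the kind described can be sketched as follows. The templates, group list, and both model names are illustrative assumptions rather than artifacts from the paper; in particular, a sentiment model stands in here for a real stereotype classifier.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # example target LLM
detector = pipeline(  # stand-in scorer; swap in a stereotype classifier
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

TEMPLATES = ["Why are {group} always", "Most {group} people are"]
GROUPS = ["women", "men", "immigrants", "nurses"]

rates = {}
for group in GROUPS:
    generations = [
        generator(t.format(group=group), max_new_tokens=20)[0]["generated_text"]
        for t in TEMPLATES
    ]
    scores = detector(generations)
    rates[group] = sum(s["label"] == "NEGATIVE" for s in scores) / len(scores)

print(rates)  # per-group rate flagged by the (stand-in) detector
```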
arXiv Detail & Related papers (2024-04-02T09:31:32Z)
- Evaluating Large Language Models through Gender and Racial Stereotypes [0.0]
We conduct a comparative study and establish a framework to evaluate language models with respect to two kinds of bias: gender and race.
We find that while gender bias has decreased substantially in newer models compared to older ones, racial bias still persists.
arXiv Detail & Related papers (2023-11-24T18:41:16Z)
- CBBQ: A Chinese Bias Benchmark Dataset Curated with Human-AI Collaboration for Large Language Models [52.25049362267279]
We present a Chinese Bias Benchmark dataset that consists of over 100K questions jointly constructed by human experts and generative language models.
The testing instances in the dataset are automatically derived from 3K+ high-quality templates manually authored with stringent quality control.
Extensive experiments demonstrate the effectiveness of the dataset in detecting model bias, with all 10 publicly available Chinese large language models exhibiting strong bias in certain categories.
arXiv Detail & Related papers (2023-06-28T14:14:44Z)
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
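Of the two, reweighing has a standard frequency-based formulation that is easy to sketch; the version below is only assumed to approximate the paper's scheme, while model-splitting would instead fit separate models to the diverging populations.

```python
from collections import Counter

def reweigh(groups, labels):
    """Weight each (group, label) cell so groups and labels look
    statistically independent to the learner: w(g, y) = P(g)P(y)/P(g, y)."""
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy example: group "a" with a positive label is under-represented,
# so its samples get weight > 1.
weights = reweigh(["a", "a", "b", "b", "b"], [1, 0, 1, 1, 0])
# Pass `weights` as sample_weight to any scikit-learn-style fit().
```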
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
- Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models.
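The core step — projecting embeddings onto the orthogonal complement of a bias direction — is standard linear algebra and can be sketched directly; the paper's calibrated projection matrix is more elaborate, so treat this as the uncalibrated baseline idea.

```python
import numpy as np

def project_out(embeddings: np.ndarray, bias_dir: np.ndarray) -> np.ndarray:
    """Remove the component along bias_dir: E' = E (I - v v^T)."""
    v = bias_dir / np.linalg.norm(bias_dir)
    return embeddings @ (np.eye(len(v)) - np.outer(v, v))

# Toy example; a bias direction could come from contrasting prompt
# embeddings, e.g. emb("a photo of a man") - emb("a photo of a woman").
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
v = rng.normal(size=8)
debiased = project_out(emb, v)
print(np.abs(debiased @ (v / np.linalg.norm(v))).max())  # ~0: removed
```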
arXiv Detail & Related papers (2023-01-31T20:09:33Z)
- Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z)
- Estimating the Personality of White-Box Language Models [0.589889361990138]
Large-scale language models, which are trained on large corpora of text, are being used in a wide range of applications.
Existing research shows that these models can and do capture human biases.
Many of these biases, especially those that could potentially cause harm, are being well-investigated.
However, studies that infer and change human personality traits inherited by these models have been scarce or non-existent.
arXiv Detail & Related papers (2022-04-25T23:53:53Z)
- AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses [66.49753193098356]
We investigate the reason behind the surprising adversarial brittleness of scoring models.
Our results indicate that autoscoring models, despite getting trained as "end-to-end" models, behave like bag-of-words models.
We propose detection-based protection models that can detect oversensitivity- and overstability-causing samples with high accuracy.
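The bag-of-words finding suggests a simple diagnostic one could run against any scoring model: shuffle an essay's words and check whether the score moves. The sketch below uses a placeholder `score_essay`; a real audit would call the AES system under test.

```python
import random

def score_essay(text: str) -> float:
    # Placeholder scorer; word count is itself order-invariant, so a real
    # test must substitute the autoscoring model being audited.
    return float(len(text.split()))

def shuffle_words(text: str, seed: int = 0) -> str:
    words = text.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

essay = "The industrial revolution transformed how societies produced goods."
delta = abs(score_essay(essay) - score_essay(shuffle_words(essay)))
print(f"score change under word shuffling: {delta:.3f}")  # near 0 => BoW-like
```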
arXiv Detail & Related papers (2021-09-24T03:49:38Z)