"Im not Racist but...": Discovering Bias in the Internal Knowledge of
Large Language Models
- URL: http://arxiv.org/abs/2310.08780v1
- Date: Fri, 13 Oct 2023 00:03:37 GMT
- Title: "Im not Racist but...": Discovering Bias in the Internal Knowledge of
Large Language Models
- Authors: Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter
- Abstract summary: We introduce a novel, purely prompt-based approach to uncover hidden stereotypes within large language models.
Our work contributes to advancing transparency and promoting fairness in natural language processing systems.
- Score: 6.21188983355735
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) have garnered significant attention for their
remarkable performance in a continuously expanding set of natural language
processing tasks. However, these models have been shown to harbor inherent
societal biases, or stereotypes, which can adversely affect their performance
in their many downstream applications. In this paper, we introduce a novel,
purely prompt-based approach to uncover hidden stereotypes within any arbitrary
LLM. Our approach dynamically generates a knowledge representation of internal
stereotypes, enabling the identification of biases encoded within the LLM's
internal knowledge. By illuminating the biases present in LLMs and offering a
systematic methodology for their analysis, our work contributes to advancing
transparency and promoting fairness in natural language processing systems.
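
The abstract describes the approach only at a high level. Below is a minimal sketch of what a purely prompt-based stereotype probe could look like; the query_llm helper, the prompt template, and the group list are illustrative assumptions, not the authors' actual protocol.

```python
# Minimal sketch of prompt-based stereotype probing. The query_llm helper,
# prompt template, and group list are illustrative assumptions, not the
# authors' actual method.
from collections import defaultdict

def query_llm(prompt: str) -> str:
    # Stand-in for a real chat/completion API call; returns a canned reply
    # so the sketch runs end-to-end without an API key.
    return "hardworking, quiet, ambitious"

GROUPS = ["group A", "group B"]  # hypothetical social groups to probe
TEMPLATE = "List three adjectives people commonly associate with {group}."

def probe_associations(groups: list[str]) -> dict[str, list[str]]:
    """Elicit the model's associations for each group via prompting."""
    associations = defaultdict(list)
    for group in groups:
        reply = query_llm(TEMPLATE.format(group=group))
        # Naive parsing: treat commas and newlines as separators.
        for token in reply.replace("\n", ",").split(","):
            word = token.strip().strip(".").lower()
            if word:
                associations[group].append(word)
    return dict(associations)

if __name__ == "__main__":
    # Associations that differ systematically across groups hint at
    # stereotypes encoded in the model's internal knowledge.
    print(probe_associations(GROUPS))
```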
Related papers
- Are Large Language Models Really Bias-Free? Jailbreak Prompts for Assessing Adversarial Robustness to Bias Elicitation [0.0]
Large Language Models (LLMs) have revolutionized artificial intelligence, demonstrating remarkable computational power and linguistic capabilities.
These models are inherently prone to various biases stemming from their training data.
This study explores the presence of these biases within the responses given by the most recent LLMs, analyzing the impact on their fairness and reliability.
arXiv Detail & Related papers (2024-07-11T12:30:19Z)
- The African Woman is Rhythmic and Soulful: An Investigation of Implicit Biases in LLM Open-ended Text Generation [3.9945212716333063]
Implicit biases are significant because they influence the decisions made by Large Language Models (LLMs).
Traditionally, explicit bias tests or embedding-based methods are employed to detect bias, but these approaches can overlook more nuanced, implicit forms of bias.
We introduce two novel psychology-inspired methodologies to reveal and measure implicit biases through prompt-based and decision-making tasks.
arXiv Detail & Related papers (2024-07-01T13:21:33Z)
- LLMs' Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements [59.71218039095155]
The task of reading comprehension (RC) provides a primary means to assess language models' natural language understanding (NLU) capabilities.
If the context aligns with the models' internal knowledge, it is hard to discern whether the models' answers stem from context comprehension or from internal information.
To address this issue, we suggest using RC on imaginary data, based on fictitious facts and entities.
arXiv Detail & Related papers (2024-04-09T13:08:56Z)
- Towards Uncovering How Large Language Model Works: An Explainability Perspective [38.07611356855978]
Large language models (LLMs) have led to breakthroughs in language tasks, yet the internal mechanisms that enable their remarkable generalization and reasoning abilities remain opaque.
This paper aims to uncover the mechanisms underlying LLM functionality through the lens of explainability.
arXiv Detail & Related papers (2024-02-16T13:46:06Z)
- Self-Debiasing Large Language Models: Zero-Shot Recognition and Reduction of Stereotypes [73.12947922129261]
We leverage the zero-shot capabilities of large language models to reduce stereotyping.
We show that self-debiasing can significantly reduce the degree of stereotyping across nine different social groups.
We hope this work opens inquiry into other zero-shot techniques for bias mitigation; a sketch of the self-debiasing idea appears after this list.
arXiv Detail & Related papers (2024-02-03T01:40:11Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs by: 1) generalizing to out-of-distribution data, 2) elucidating how LLMs benefit from discriminative models, and 3) minimizing hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
- Sparsity-Guided Holistic Explanation for LLMs with Interpretable Inference-Time Intervention [53.896974148579346]
Large Language Models (LLMs) have achieved unprecedented breakthroughs in various natural language processing domains.
The enigmatic "black-box" nature of LLMs remains a significant challenge for interpretability, hampering transparent and accountable applications.
We propose a novel methodology anchored in sparsity-guided techniques, aiming to provide a holistic interpretation of LLMs.
arXiv Detail & Related papers (2023-12-22T19:55:58Z)
- The Quo Vadis of the Relationship between Language and Large Language Models [3.10770247120758]
The performance of Large Language Models (LLMs) has come to encourage their adoption as scientific models of language.
We identify the most important theoretical and empirical risks brought about by the adoption of scientific models that lack transparency.
We conclude that, at their current stage of development, LLMs hardly offer any explanations for language.
arXiv Detail & Related papers (2023-10-17T10:54:24Z)
- Let Models Speak Ciphers: Multiagent Debate through Embeddings [84.20336971784495]
We introduce CIPHER (Communicative Inter-Model Protocol Through Embedding Representation), which lets LLMs in a multiagent debate communicate through embedding representations rather than natural-language tokens.
By deviating from natural language, CIPHER offers the advantage of encoding a broader spectrum of information without any modification to the model weights.
This showcases the superiority and robustness of embeddings as an alternative "language" for communication among LLMs.
arXiv Detail & Related papers (2023-10-10T03:06:38Z)
- Knowledge Rumination for Pre-trained Language Models [77.55888291165462]
We propose a new paradigm dubbed Knowledge Rumination to help the pre-trained language model utilize related latent knowledge without retrieving it from an external corpus.
We apply the proposed knowledge rumination to various language models, including RoBERTa, DeBERTa, and GPT-3.
arXiv Detail & Related papers (2023-05-15T15:47:09Z)
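
As noted in the self-debiasing entry above, that paper's core idea is to have the model check and rewrite its own output. A minimal sketch of zero-shot self-debiasing via self-correction follows; the two-step prompts and the query_llm helper are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of zero-shot self-debiasing via self-correction. The prompts
# and the query_llm helper are illustrative assumptions, not the paper's
# exact protocol.
def query_llm(prompt: str) -> str:
    # Stand-in for a real chat/completion API call.
    return "(model reply)"

def self_debias(question: str) -> str:
    """Answer a question, then have the model rewrite its own answer
    after checking it for stereotypes about social groups."""
    draft = query_llm(question)
    critique = (
        "Review the answer below for invalid assumptions or stereotypes "
        "about social groups, then rewrite it without them.\n\n"
        f"Question: {question}\nAnswer: {draft}"
    )
    return query_llm(critique)

if __name__ == "__main__":
    print(self_debias("Describe a typical software engineer."))
```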