Understanding Prior Bias and Choice Paralysis in Transformer-based
Language Representation Models through Four Experimental Probes
- URL: http://arxiv.org/abs/2210.01258v1
- Date: Mon, 3 Oct 2022 22:36:44 GMT
- Title: Understanding Prior Bias and Choice Paralysis in Transformer-based
Language Representation Models through Four Experimental Probes
- Authors: Ke Shen, Mayank Kejriwal
- Abstract summary: We present four confusion probes to test for problems such as prior bias and choice paralysis.
We show that the model exhibits significant prior bias and, to a lesser but still highly significant degree, choice paralysis, in addition to other problems.
Our results suggest that stronger testing protocols and additional benchmarks may be necessary before such language models are used in front-facing systems or in decision-making with real-world consequences.
- Score: 8.591839265985412
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work on transformer-based neural networks has led to impressive
advances on multiple-choice natural language understanding (NLU) problems, such
as Question Answering (QA) and abductive reasoning. Despite these advances,
there is still limited work on understanding whether these models respond to
perturbed multiple-choice instances in a sufficiently robust manner that would
allow them to be trusted in real-world situations. We present four confusion
probes, inspired by similar phenomena first identified in the behavioral
science community, to test for problems such as prior bias and choice
paralysis. Experimentally, we probe a widely used transformer-based
multiple-choice NLU system using four established benchmark datasets. Here we
show that the model exhibits significant prior bias and, to a lesser but still
highly significant degree, choice paralysis, in addition to other problems. Our
results suggest that stronger testing protocols and additional benchmarks may
be necessary before such language models are used in front-facing systems or in
decision-making with real-world consequences.
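To make the probing methodology more concrete, the sketch below illustrates two perturbations in the spirit of the probes described above: scoring the answer choices with the question removed (a prior-bias check) and expanding the choice set with distractors drawn from other instances (a choice-paralysis check). This is a minimal illustration, not the authors' released code; the checkpoint name MODEL_NAME, the helpers choice_scores, prior_bias_probe, and choice_paralysis_probe, and the (context, choices, gold-index) instance format are assumptions made for the example.
```python
# Minimal sketch of two confusion probes in the spirit of the paper's
# methodology (not the authors' released code). Assumptions: a HuggingFace
# checkpoint fine-tuned for multiple choice is available under MODEL_NAME,
# and each instance is a (context, choices, gold-index) triple.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

MODEL_NAME = "path/to/multiple-choice-checkpoint"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMultipleChoice.from_pretrained(MODEL_NAME).eval()


def choice_scores(context: str, choices: list) -> torch.Tensor:
    """Return the model's logits over the answer choices for one instance."""
    enc = tokenizer([context] * len(choices), choices,
                    return_tensors="pt", padding=True, truncation=True)
    # Multiple-choice models expect tensors of shape (batch, num_choices, seq_len).
    enc = {k: v.unsqueeze(0) for k, v in enc.items()}
    with torch.no_grad():
        return model(**enc).logits.squeeze(0)


def prior_bias_probe(context: str, choices: list, gold: int) -> dict:
    """Blank out the question/context and check whether the model still picks
    the same answer; if it does, the decision rests on a prior over the
    choices rather than on the instance itself."""
    pred_with_ctx = choice_scores(context, choices).argmax().item()
    pred_no_ctx = choice_scores("", choices).argmax().item()
    return {"pred_with_context": pred_with_ctx,
            "pred_without_context": pred_no_ctx,
            "gold": gold,
            "prior_suspected": pred_no_ctx == pred_with_ctx}


def choice_paralysis_probe(context: str, choices: list, gold: int,
                           distractors: list) -> dict:
    """Expand the choice set with distractors drawn from other instances;
    a robust model should keep selecting the original gold answer, and a
    sharp accuracy drop under expansion signals choice paralysis."""
    pred = choice_scores(context, choices + distractors).argmax().item()
    return {"pred_on_expanded_set": pred, "still_gold": pred == gold}
```
Run at benchmark scale, the fraction of instances where the no-context prediction matches the full prediction, and the accuracy drop on expanded choice sets, would give simple quantitative handles on the two phenomena the paper studies.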
Related papers
- QUITE: Quantifying Uncertainty in Natural Language Text in Bayesian Reasoning Scenarios [15.193544498311603]
We present QUITE, a dataset of real-world Bayesian reasoning scenarios with categorical random variables and complex relationships.
We conduct an extensive set of experiments, finding that logic-based models outperform out-of-the-box large language models on all reasoning types.
Our results provide evidence that neuro-symbolic models are a promising direction for improving complex reasoning.
arXiv Detail & Related papers (2024-10-14T12:44:59Z) - HANS, are you clever? Clever Hans Effect Analysis of Neural Systems [1.6267479602370545]
Instruction-tuned Large Language Models (It-LLMs) have exhibited outstanding abilities to reason about the cognitive states, intentions, and reactions of all the people involved, letting humans guide and comprehend day-to-day social interactions effectively.
Several multiple-choice questions (MCQ) benchmarks have been proposed to construct solid assessments of the models' abilities.
However, earlier works have demonstrated an inherent "order bias" in It-LLMs, which poses challenges to appropriate evaluation.
arXiv Detail & Related papers (2023-09-21T20:52:18Z) - A Simple yet Effective Self-Debiasing Framework for Transformer Models [49.09053367249642]
Current Transformer-based natural language understanding (NLU) models heavily rely on dataset biases.
We propose a simple yet effective self-debiasing framework for Transformer-based NLU models.
arXiv Detail & Related papers (2023-06-02T20:31:58Z) - Exposing Attention Glitches with Flip-Flop Language Modeling [55.0688535574859]
This work identifies and analyzes the phenomenon of attention glitches in large language models.
We introduce flip-flop language modeling (FFLM), a family of synthetic benchmarks designed to probe the extrapolative behavior of neural language models.
We find that Transformer FFLMs suffer from a long tail of sporadic reasoning errors, some of which we can eliminate using various regularization techniques.
arXiv Detail & Related papers (2023-06-01T17:44:35Z) - All Roads Lead to Rome? Exploring the Invariance of Transformers'
Representations [69.3461199976959]
We propose a model based on invertible neural networks, BERT-INN, to learn the Bijection Hypothesis.
We show the advantage of BERT-INN both theoretically and through extensive experiments.
arXiv Detail & Related papers (2023-05-23T22:30:43Z) - Pushing the Limits of Rule Reasoning in Transformers through Natural
Language Satisfiability [30.01308882849197]
We propose a new methodology for creating challenging algorithmic reasoning datasets.
The key idea is to draw insights from empirical sampling of hard propositional SAT problems and from complexity-theoretic studies of language.
We find that current transformers, given sufficient training data, are surprisingly robust at solving the resulting NLSat problems.
arXiv Detail & Related papers (2021-12-16T17:47:20Z) - Unnatural Language Inference [48.45003475966808]
We find that state-of-the-art NLI models, such as RoBERTa and BART, are invariant to, and sometimes even perform better on, examples with randomly reordered words.
Our findings call into question the idea that our natural language understanding models, and the tasks used for measuring their progress, genuinely require a human-like understanding of syntax.
arXiv Detail & Related papers (2020-12-30T20:40:48Z) - ABNIRML: Analyzing the Behavior of Neural IR Models [45.74073795558624]
Pretrained language models such as BERT and T5 have established a new state-of-the-art for ad hoc search.
We present a new comprehensive framework for Analyzing the Behavior of Neural IR ModeLs (ABNIRML).
We conduct an empirical study that yields insights into the factors that contribute to the neural model's gains.
arXiv Detail & Related papers (2020-11-02T03:07:38Z) - UnQovering Stereotyping Biases via Underspecified Questions [68.81749777034409]
We present UNQOVER, a framework to probe and quantify biases through underspecified questions.
We show that a naive use of model scores can lead to incorrect bias estimates due to two forms of reasoning errors.
We use this metric to analyze four important classes of stereotypes: gender, nationality, ethnicity, and religion.
arXiv Detail & Related papers (2020-10-06T01:49:52Z) - Is Supervised Syntactic Parsing Beneficial for Language Understanding?
An Empirical Investigation [71.70562795158625]
Traditional NLP has long held that (supervised) syntactic parsing is necessary for successful higher-level semantic language understanding (LU).
The recent advent of end-to-end neural models, self-supervised via language modeling (LM), and their success on a wide range of LU tasks call this belief into question.
We empirically investigate the usefulness of supervised parsing for semantic LU in the context of LM-pretrained transformer networks.
arXiv Detail & Related papers (2020-08-15T21:03:36Z)