Acquiescence Bias in Large Language Models
- URL: http://arxiv.org/abs/2509.08480v1
- Date: Wed, 10 Sep 2025 10:39:24 GMT
- Title: Acquiescence Bias in Large Language Models
- Authors: Daniel Braun
- Abstract summary: Acquiescence bias is the tendency of humans to agree with statements in surveys, independent of their actual beliefs. Large Language Models (LLMs) have been shown to be easily influenced by relatively small changes in their input. We present a study investigating the presence of acquiescence bias in LLMs across different models, tasks, and languages.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Acquiescence bias, i.e., the tendency of humans to agree with statements in surveys independent of their actual beliefs, is well researched and documented. Since Large Language Models (LLMs) have been shown to be easily influenced by relatively small changes in their input and are trained on human-generated data, it is reasonable to assume that they could show a similar tendency. We present a study investigating the presence of acquiescence bias in LLMs across different models, tasks, and languages (English, German, and Polish). Our results indicate that, contrary to humans, LLMs display a bias towards answering no, regardless of whether it indicates agreement or disagreement.
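To make the probed setup concrete, here is a minimal, hedged sketch of a yes/no acquiescence probe in the spirit of the abstract; the items and the query_model helper are hypothetical placeholders, not the authors' materials.

```python
# Minimal sketch of an acquiescence-bias probe, assuming a generic
# query_model(prompt) -> str helper (hypothetical; wire it to any LLM API).
# Each item pairs a positively keyed and a negatively keyed phrasing of the
# same statement, so a content-independent pull toward "yes" (or, per the
# paper's finding, toward "no") shows up as asymmetry in the answer counts.

from collections import Counter

ITEMS = [
    ("Is remote work more productive than office work?",
     "Is office work more productive than remote work?"),
    ("Should public transport be free?",
     "Should public transport require a fare?"),
]

def query_model(prompt: str) -> str:
    """Stand-in for a real LLM call; replace with an actual API client."""
    return "no"  # dummy answer so the sketch runs end to end

def probe_acquiescence(items) -> Counter:
    counts = Counter()
    for positive, reversed_ in items:
        for prompt in (positive, reversed_):
            answer = query_model(f"{prompt} Answer only 'yes' or 'no'.")
            counts[answer.strip().lower()] += 1
    return counts

if __name__ == "__main__":
    counts = probe_acquiescence(ITEMS)
    total = sum(counts.values())
    # With no acquiescence (or naysaying) bias, yes/no should be near 50/50
    # across the keyed pairs, independent of the statements' content.
    print({k: v / total for k, v in counts.items()})
```

Pairing each statement with its reversed keying is what separates a content-independent pull toward one answer from genuine (dis)agreement.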
Related papers
- Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models [45.41676783204022]
We investigate various proxy measures of bias in large language models (LLMs). We find that evaluating models with pre-prompted personae on a multi-subject benchmark (MMLU) leads to negligible and mostly random differences in scores. With the recent trend towards LLM assistant memory and personalization, these problems resurface from a different angle.
arXiv Detail & Related papers (2025-06-12T08:47:40Z)
- Delving into Multilingual Ethical Bias: The MSQAD with Statistical Hypothesis Tests for Large Language Models [7.480124826347168]
This paper investigates the validation and comparison of the ethical biases of LLMs concerning globally discussed and potentially sensitive topics. We collected news articles from Human Rights Watch covering 17 topics and generated socially sensitive questions along with corresponding responses in multiple languages. We scrutinized the biases of these responses across languages and topics, employing two statistical hypothesis tests (a sketch of one such cross-language test follows this entry).
arXiv Detail & Related papers (2025-05-25T12:25:44Z)
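As an illustration of the kind of cross-language hypothesis test the MSQAD entry above describes, here is a hedged sketch using a chi-squared test of homogeneity; the choice of test and all counts are illustrative assumptions, not the paper's data.

```python
# Hedged sketch of one plausible cross-language check in the spirit of the
# entry above: a chi-squared test of homogeneity on answer counts.
# The choice of test and all counts are illustrative assumptions.

from scipy.stats import chi2_contingency

# Hypothetical answer counts (rows: languages; columns: agree / disagree)
# for one socially sensitive question asked many times per language.
counts = [
    [62, 38],  # language 1
    [55, 45],  # language 2
    [40, 60],  # language 3
]

# H0: the answer distribution is identical across languages; a small
# p-value suggests the model answers the same question differently
# depending on the language in which it is asked.
chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
```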
- Explicit vs. Implicit: Investigating Social Bias in Large Language Models through Self-Reflection [18.625071242029936]
Large Language Models (LLMs) have been shown to exhibit various biases and stereotypes in their generated content. This paper presents a systematic framework to investigate and compare explicit and implicit biases in LLMs.
arXiv Detail & Related papers (2025-01-04T14:08:52Z)
- How far can bias go? -- Tracing bias from pretraining data to alignment [54.51310112013655]
This study examines the correlation between gender-occupation bias in pre-training data and its manifestation in LLMs. Our findings reveal that biases present in pre-training data are amplified in model outputs.
arXiv Detail & Related papers (2024-11-28T16:20:25Z)
- Bias in the Mirror: Are LLMs opinions robust to their own adversarial attacks? [22.0383367888756]
Large language models (LLMs) inherit biases from their training data and alignment processes, influencing their responses in subtle ways.
We introduce a novel approach where two instances of an LLM engage in self-debate, arguing opposing viewpoints to persuade a neutral version of the model.
We evaluate how firmly biases hold and whether models are susceptible to reinforcing misinformation or shifting to harmful viewpoints (a minimal sketch of the debate loop follows this entry).
arXiv Detail & Related papers (2024-10-17T13:06:02Z)
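A minimal sketch of the self-debate protocol from the entry above, assuming a generic chat helper (the stub below is hypothetical; any LLM API could be substituted):

```python
# Two biased instances argue opposite sides for a few rounds, then a
# neutral instance reads the transcript and states its position.

def chat(system: str, messages: list[str]) -> str:
    """Stand-in for a real LLM call; replace with an actual API client."""
    return f"[{system[:20]}...] responds to {len(messages)} message(s)"

def self_debate(topic: str, rounds: int = 3) -> str:
    transcript: list[str] = []
    for _ in range(rounds):
        pro = chat(f"Argue FOR: {topic}", transcript)
        transcript.append(f"PRO: {pro}")
        con = chat(f"Argue AGAINST: {topic}", transcript)
        transcript.append(f"CON: {con}")
    # Comparing the neutral instance's post-debate stance with its
    # pre-debate stance measures how robustly the model's original
    # opinion holds under its own adversarial arguments.
    return chat(f"You are neutral on: {topic}. State your position.", transcript)

print(self_debate("Stricter regulation of social media"))
```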
- Bias Similarity Across Large Language Models [32.0365189539138]
Bias in Large Language Models remains a critical concern as these systems are increasingly deployed in high-stakes applications. We treat bias similarity as a form of functional similarity and evaluate 24 LLMs on over one million structured prompts spanning four bias dimensions. Our findings show that fairness is not strongly determined by model size, architecture, instruction tuning, or openness (see the sketch of pairwise bias-profile comparison after this entry).
arXiv Detail & Related papers (2024-10-15T19:21:14Z)
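One plausible reading of "bias similarity as functional similarity" is to summarize each model by its behavior on the structured prompts and compare those summaries pairwise; the sketch below does this with cosine similarity over hypothetical per-dimension stereotype rates (all names and numbers are assumptions, not the paper's results).

```python
# Each model is summarized by its rate of stereotype-consistent answers
# along several bias dimensions (e.g., gender, race, age, religion), and
# models are compared pairwise: a high similarity means two models fail
# in functionally similar ways, whatever their size or architecture.

import numpy as np

# Hypothetical per-dimension stereotype-consistent answer rates per model.
profiles = {
    "model_a": np.array([0.61, 0.55, 0.48, 0.52]),
    "model_b": np.array([0.59, 0.57, 0.50, 0.49]),
    "model_c": np.array([0.35, 0.70, 0.62, 0.41]),
}

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

names = list(profiles)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(a, b, f"{cosine(profiles[a], profiles[b]):.3f}")
```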
- Bias in LLMs as Annotators: The Effect of Party Cues on Labelling Decision by Large Language Models [0.0]
We test whether LLMs used as annotators exhibit biases similar to those documented for human coders.
Unlike humans, who are only biased when faced with statements from extreme parties, LLMs exhibit significant bias even when prompted with statements from center-left and center-right parties.
arXiv Detail & Related papers (2024-08-28T16:05:20Z)
- Spoken Stereoset: On Evaluating Social Bias Toward Speaker in Speech Large Language Models [50.40276881893513]
This study introduces Spoken Stereoset, a dataset specifically designed to evaluate social biases in Speech Large Language Models (SLLMs).
By examining how different models respond to speech from diverse demographic groups, we aim to identify these biases.
The findings indicate that while most models show minimal bias, some still exhibit slightly stereotypical or anti-stereotypical tendencies.
arXiv Detail & Related papers (2024-08-14T16:55:06Z)
- What Do Llamas Really Think? Revealing Preference Biases in Language Model Representations [62.91799637259657]
Do large language models (LLMs) exhibit sociodemographic biases, even when they decline to respond?
We study this question by probing contextualized embeddings and exploring whether these biases are encoded in the models' latent representations.
We propose a logistic Bradley-Terry probe which predicts word-pair preferences of LLMs from the words' hidden vectors (a minimal sketch of such a probe follows this entry).
arXiv Detail & Related papers (2023-11-30T18:53:13Z)
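A minimal sketch of a logistic Bradley-Terry probe of the kind the entry above proposes: the probability that the model prefers word i over word j is modeled as sigmoid(w · (h_i - h_j)), i.e., logistic regression on the difference of the two words' hidden vectors. The vectors and labels below are synthetic stand-ins, not probed LLM activations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim, n_pairs = 64, 500

# Synthetic stand-ins for contextualized hidden vectors of word pairs.
h_i = rng.normal(size=(n_pairs, dim))
h_j = rng.normal(size=(n_pairs, dim))

# Synthetic "ground truth" preference direction used to label the pairs.
w_true = rng.normal(size=dim)
labels = ((h_i - h_j) @ w_true > 0).astype(int)  # 1 iff word i is preferred

# Bradley-Terry structure: no intercept, features are vector differences.
probe = LogisticRegression(fit_intercept=False, max_iter=1000)
probe.fit(h_i - h_j, labels)
print("train accuracy:", probe.score(h_i - h_j, labels))
```

Fitting without an intercept keeps the model symmetric: swapping the two words flips the sign of the feature vector and hence the predicted preference.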
- Do LLMs exhibit human-like response biases? A case study in survey design [66.1850490474361]
We investigate the extent to which large language models (LLMs) reflect human response biases, if at all.
We design a dataset and framework to evaluate whether LLMs exhibit human-like response biases in survey questionnaires.
Our comprehensive evaluation of nine models shows that popular open and commercial LLMs generally fail to reflect human-like behavior.
arXiv Detail & Related papers (2023-11-07T15:40:43Z)
- UnQovering Stereotyping Biases via Underspecified Questions [68.81749777034409]
We present UNQOVER, a framework to probe and quantify biases through underspecified questions.
We show that naive use of model scores can lead to incorrect bias estimates due to two forms of reasoning errors.
We use a metric that corrects for these errors to analyze four important classes of stereotypes: gender, nationality, ethnicity, and religion (a minimal sketch of this probing setup follows this entry).
arXiv Detail & Related papers (2020-10-06T01:49:52Z)
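To illustrate the underspecified-question setup in the UNQOVER entry above, here is a hedged sketch; the score function is a hypothetical stand-in for a QA model's answer score, and the order-averaging shown addresses only the positional half of the reasoning errors the paper identifies.

```python
# The context names two subjects but gives no evidence either way, so any
# systematic preference the model shows for one subject reflects bias.
# Averaging over the two subject orders mitigates positional dependence;
# the paper's full metric also corrects for question-independent bias.

def score(context: str, question: str, subject: str) -> float:
    """Stand-in for a real QA model's answer score for `subject`."""
    return 0.5 + 0.1 * (subject == "John")  # dummy, order-independent here

def subject_bias(s1: str, s2: str, attribute: str) -> float:
    question = f"Who {attribute}?"
    total = 0.0
    for a, b in ((s1, s2), (s2, s1)):  # average over both subject orders
        context = f"{a} got off the flight. {b} got off the flight."
        total += score(context, question, s1) - score(context, question, s2)
    return total / 2  # > 0 means the model leans toward s1 for this attribute

print(subject_bias("John", "Mary", "was a bad driver"))
```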
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.