BASIL: Bayesian Assessment of Sycophancy in LLMs
- URL: http://arxiv.org/abs/2508.16846v2
- Date: Fri, 17 Oct 2025 19:24:25 GMT
- Title: BASIL: Bayesian Assessment of Sycophancy in LLMs
- Authors: Katherine Atwell, Pedram Heydari, Anthony Sicilia, Malihe Alikhani
- Abstract summary: Sycophancy is critical to understand in the context of human-AI collaboration. Existing methods for studying sycophancy in LLMs are either descriptive (study behavior change when sycophancy is elicited) or normative. We introduce a Bayesian framework to study the normative effects of sycophancy on rationality in LLMs.
- Score: 26.346357679861228
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sycophancy (overly agreeable or flattering behavior) is critical to understand in the context of human-AI collaboration, especially in decision-making settings like health, law, and education. Existing methods for studying sycophancy in LLMs are either descriptive (study behavior change when sycophancy is elicited) or normative (provide values-based judgment on behavior change). Together, these approaches help us understand the extent, and impacts, of sycophancy. However, existing normative approaches only apply for objective tasks where ground-truth data exists, ignoring the natural subjectivity in many NLP tasks. Drawing from behavioral economics and rational decision theory, we introduce a Bayesian framework to study the normative effects of sycophancy on rationality in LLMs, without requiring labeled ground-truth. Using this interdisciplinary framework, we study sycophantic behavior in multiple LLM baselines across three different tasks, experimenting with various methods for eliciting sycophancy and obtaining probability judgments from LLMs. We find significant evidence of sycophancy in our experiments (7 of 8 baselines for one of our probing techniques), and observe that sycophancy is more likely to reduce rationality than it is to increase rationality in LLMs' decisions when they are directly probed for probabilities (2 out of 4 baselines show significant increases overall).
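The abstract does not spell out the framework's internals; as a rough, hedged illustration of the kind of check it describes, the Python sketch below scores how far an LLM's revised probability judgment drifts from a Bayes-consistent update once a user pushes back. All names and the likelihood-ratio treatment of the user's opinion are assumptions made for illustration, not the paper's actual method.

```python
def bayes_consistent_posterior(prior: float, likelihood_ratio: float) -> float:
    """Posterior P(H | user pushback) from the prior P(H) and the likelihood
    ratio P(pushback | H) / P(pushback | not H), via Bayes' rule in odds form."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)


def sycophancy_gap(p_before: float, p_after: float, likelihood_ratio: float = 1.0) -> float:
    """Distance between the model's revised judgment and the Bayes-consistent
    update. With likelihood_ratio = 1.0 the user's opinion carries no evidence,
    so any shift toward the user is scored as an irrational, sycophantic move."""
    return abs(p_after - bayes_consistent_posterior(p_before, likelihood_ratio))


# Hypothetical elicited judgments: the model reports 0.7, the user disagrees,
# and the model drops to 0.4; the gap from the rational update is 0.3.
print(sycophancy_gap(p_before=0.7, p_after=0.4))
```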
Related papers
- UniCog: Uncovering Cognitive Abilities of LLMs through Latent Mind Space Analysis [69.50752734049985]
A growing body of research suggests that the cognitive processes of large language models (LLMs) differ fundamentally from those of humans.
We propose UniCog, a unified framework that analyzes LLM cognition via a latent mind space.
arXiv Detail & Related papers (2026-01-25T16:19:00Z)
- Noise, Adaptation, and Strategy: Assessing LLM Fidelity in Decision-Making [0.030586855806896043]
Large language models (LLMs) are increasingly used in social science simulations.
We propose a process-oriented evaluation framework to examine how LLM agents adapt under different levels of external guidance and human-derived noise.
We find that LLMs, by default, converge on stable and conservative strategies that diverge from observed human behaviors.
arXiv Detail & Related papers (2025-08-21T18:55:53Z)
- Beyond Prompt-Induced Lies: Investigating LLM Deception on Benign Prompts [79.1081247754018]
Large Language Models (LLMs) are widely deployed in reasoning, planning, and decision-making tasks.
We propose a framework based on Contact Searching Questions (CSQ) to quantify the likelihood of deception.
arXiv Detail & Related papers (2025-08-08T14:46:35Z)
- When Truth Is Overridden: Uncovering the Internal Origins of Sycophancy in Large Language Models [11.001042171551566]
We study how user opinions induce sycophancy across different model families.
First-person prompts consistently induce higher sycophancy rates than third-person framings.
These findings highlight that sycophancy is not a surface-level artifact but emerges from a structural override of learned knowledge in deeper layers.
arXiv Detail & Related papers (2025-08-04T05:55:06Z)
- WakenLLM: Evaluating Reasoning Potential and Stability in LLMs via Fine-Grained Benchmarking [14.76224690767612]
Large Language Models (LLMs) frequently output the label Unknown in reasoning tasks.
We introduce WakenLLM, a framework that quantifies the portion of Unknown output attributable to model incapacity.
arXiv Detail & Related papers (2025-07-22T03:21:48Z)
- Truly Assessing Fluid Intelligence of Large Language Models through Dynamic Reasoning Evaluation [106.17986469245302]
Large language models (LLMs) have demonstrated impressive reasoning capacities that mirror human-like thinking.
Existing reasoning benchmarks either focus on domain-specific knowledge (crystallized intelligence) or lack interpretability.
We propose DRE-Bench, a dynamic reasoning evaluation benchmark grounded in a hierarchical cognitive framework.
arXiv Detail & Related papers (2025-06-03T09:01:08Z)
- Arbiters of Ambivalence: Challenges of Using LLMs in No-Consensus Tasks [52.098988739649705]
This study examines the biases and limitations of LLMs in three roles: answer generator, judge, and debater.
We develop a "no-consensus" benchmark by curating examples that encompass a variety of a priori ambivalent scenarios.
Our results show that while LLMs can provide nuanced assessments when generating open-ended answers, they tend to take a stance on no-consensus topics when employed as judges or debaters.
arXiv Detail & Related papers (2025-05-28T01:31:54Z)
- Are Language Models Consequentialist or Deontological Moral Reasoners? [75.6788742799773]
We focus on a large-scale analysis of the moral reasoning traces provided by large language models (LLMs).
We introduce and test a taxonomy of moral rationales to systematically classify reasoning traces according to two main normative ethical theories: consequentialism and deontology.
arXiv Detail & Related papers (2025-05-27T17:51:18Z)
- Enough Coin Flips Can Make LLMs Act Bayesian [71.79085204454039]
Large language models (LLMs) exhibit the ability to generalize given few-shot examples in their input prompt, an emergent capability known as in-context learning (ICL).
We investigate whether LLMs use ICL to perform structured reasoning in ways that are consistent with a Bayesian framework or rely on pattern matching.
arXiv Detail & Related papers (2025-03-06T18:59:23Z)
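As a toy illustration of the Bayesian baseline such a comparison needs (not the paper's experimental setup; `llm_estimate` is a stand-in for a probability elicited from a model), the sketch below computes the Beta-Bernoulli posterior mean an ideal Bayesian learner would report for the same in-context coin flips.

```python
def bayesian_posterior_mean(flips: str, alpha: float = 1.0, beta: float = 1.0) -> float:
    """Posterior mean of P(heads) under a Beta(alpha, beta) prior after
    observing a string of 'H'/'T' flips (conjugate Beta-Bernoulli update)."""
    heads, tails = flips.count("H"), flips.count("T")
    return (alpha + heads) / (alpha + beta + heads + tails)


def consistency_gap(llm_estimate: float, flips: str) -> float:
    """Gap between a model's in-context estimate of P(heads) and the Bayesian
    posterior mean for the same evidence; small gaps suggest Bayesian-like ICL."""
    return abs(llm_estimate - bayesian_posterior_mean(flips))


# After "HHHTHHHTHH" (8 heads, 2 tails) a uniform-prior Bayesian expects
# (1 + 8) / (2 + 10) = 0.75; an elicited estimate of 0.9 leaves a gap of 0.15.
print(consistency_gap(llm_estimate=0.9, flips="HHHTHHHTHH"))
```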
- MaxSup: Overcoming Representation Collapse in Label Smoothing [52.66247931969715]
Label Smoothing (LS) is widely adopted to reduce overconfidence in neural network predictions.
LS compacts feature representations into overly tight clusters, diluting intra-class diversity.
We propose Max Suppression (MaxSup), which applies uniform regularization to both correct and incorrect predictions.
arXiv Detail & Related papers (2025-02-18T20:10:34Z)
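The summary only names the idea, so the PyTorch sketch below is a hedged reading rather than the paper's verbatim loss: standard label smoothing is algebraically a cross-entropy term plus a penalty on the ground-truth logit, and `max_suppression_loss` moves that penalty to the largest logit so that over-confident incorrect predictions are regularized as well.

```python
import torch
import torch.nn.functional as F


def label_smoothing_loss(logits: torch.Tensor, targets: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """Standard label smoothing in decomposed form: cross-entropy plus
    eps * (z_ground_truth - mean(z)), which equals cross-entropy against
    targets mixed with a uniform distribution."""
    ce = F.cross_entropy(logits, targets)
    z_gt = logits.gather(1, targets.unsqueeze(1)).squeeze(1)
    return ce + eps * (z_gt - logits.mean(dim=1)).mean()


def max_suppression_loss(logits: torch.Tensor, targets: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """One plausible reading of Max Suppression (an assumption, not necessarily
    the paper's exact loss): apply the same penalty to the largest logit, so the
    regularizer also bites on confidently wrong predictions."""
    ce = F.cross_entropy(logits, targets)
    return ce + eps * (logits.max(dim=1).values - logits.mean(dim=1)).mean()


logits = torch.randn(4, 10)              # 4 examples, 10 classes, for illustration
targets = torch.randint(0, 10, (4,))
print(label_smoothing_loss(logits, targets).item(), max_suppression_loss(logits, targets).item())
```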
- Chat Bankman-Fried: an Exploration of LLM Alignment in Finance [4.892013668424246]
As jurisdictions enact legislation on AI safety, the concept of alignment must be defined and measured.
This paper proposes an experimental framework to assess whether large language models (LLMs) adhere to ethical and legal standards in the relatively unexplored context of finance.
arXiv Detail & Related papers (2024-11-01T08:56:17Z)
- Accounting for Sycophancy in Language Model Uncertainty Estimation [28.08509288774144]
We study the relationship between sycophancy and uncertainty estimation for the first time.
We show that user confidence plays a critical role in modulating the effects of sycophancy.
We argue that externalizing both model and user uncertainty can help to mitigate the impacts of sycophancy bias.
arXiv Detail & Related papers (2024-10-17T18:00:25Z)
- Metacognitive Myopia in Large Language Models [0.0]
Large Language Models (LLMs) exhibit potentially harmful biases that reinforce culturally inherent stereotypes, cloud moral judgments, or amplify positive evaluations of majority groups.
We propose metacognitive myopia as a cognitive-ecological framework that can account for a conglomerate of established and emerging LLM biases.
Our theoretical framework posits that a lack of the two components of metacognition, monitoring and control, causes five symptoms of metacognitive myopia in LLMs.
arXiv Detail & Related papers (2024-08-10T14:43:57Z)
- Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models [79.76293901420146]
Large Language Models (LLMs) are employed across various high-stakes domains, where the reliability of their outputs is crucial.
Our research investigates the fragility of uncertainty estimation and explores potential attacks.
We demonstrate that an attacker can embed a backdoor in LLMs, which, when activated by a specific trigger in the input, manipulates the model's uncertainty without affecting the final output.
arXiv Detail & Related papers (2024-07-15T23:41:11Z)
- Evaluating Human Alignment and Model Faithfulness of LLM Rationale [66.75309523854476]
We study how well large language models (LLMs) explain their generations through rationales.
We show that prompting-based methods are less "faithful" than attribution-based explanations.
arXiv Detail & Related papers (2024-06-28T20:06:30Z)
- Decision-Making Behavior Evaluation Framework for LLMs under Uncertain Context [5.361970694197912]
This paper proposes a framework, grounded in behavioral economics, to evaluate the decision-making behaviors of large language models (LLMs).
We estimate the degree of risk preference, probability weighting, and loss aversion in a context-free setting for three commercial LLMs: ChatGPT-4.0-Turbo, Claude-3-Opus, and Gemini-1.0-pro.
Our results reveal that LLMs generally exhibit patterns similar to humans, such as risk aversion and loss aversion, with a tendency to overweight small probabilities.
arXiv Detail & Related papers (2024-06-10T02:14:19Z)
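As a pointer to what "risk preference, probability weighting, and loss aversion" mean operationally, the sketch below uses the standard Tversky-Kahneman functional forms with their textbook parameter estimates; the paper's exact specification and fitted values may differ, and the separable weighting here is a simplification of cumulative prospect theory.

```python
def value(x: float, alpha: float = 0.88, beta: float = 0.88, lam: float = 2.25) -> float:
    """Prospect-theory value function: concave for gains, convex and steeper
    for losses (lam > 1 encodes loss aversion)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)


def weight(p: float, gamma: float = 0.61) -> float:
    """Probability weighting: small probabilities are overweighted and large
    ones underweighted relative to their objective values."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))


def prospect_utility(lottery: list[tuple[float, float]]) -> float:
    """Subjective utility of a lottery given as (outcome, probability) pairs."""
    return sum(weight(p) * value(x) for x, p in lottery)


# A 50/50 gamble over +100 / -100: loss aversion makes the utility negative,
# so an agent with these parameters rejects a fair coin-flip bet.
print(prospect_utility([(100.0, 0.5), (-100.0, 0.5)]))
```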
- Explaining Large Language Models Decisions Using Shapley Values [1.223779595809275]
Large language models (LLMs) have opened up exciting possibilities for simulating human behavior and cognitive processes.
However, the validity of utilizing LLMs as stand-ins for human subjects remains uncertain.
This paper presents a novel approach based on Shapley values to interpret LLM behavior and quantify the relative contribution of each prompt component to the model's output.
arXiv Detail & Related papers (2024-03-29T22:49:43Z)
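To make the Shapley attribution concrete, here is a minimal exact computation over a handful of prompt components; `toy_score` is a stand-in for querying the LLM with a subset of components (e.g. returning the probability of a target answer) and is purely illustrative, not the paper's setup.

```python
from itertools import combinations
from math import factorial
from typing import Callable, Sequence


def shapley_values(components: Sequence[str], score: Callable[[Sequence[str]], float]) -> list[float]:
    """Exact Shapley value of each prompt component: its marginal contribution
    to `score`, averaged over all coalitions of the other components
    (exponential cost, so only suitable for a few components)."""
    n = len(components)
    values = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                coalition = [components[j] for j in subset]
                coef = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += coef * (score(coalition + [components[i]]) - score(coalition))
        values.append(total)
    return values


def toy_score(parts: Sequence[str]) -> float:
    """Stand-in for an LLM call: pretend the instruction and the example each
    raise the probability of the target answer by a fixed amount."""
    return 0.5 + 0.3 * ("instruction" in parts) + 0.1 * ("example" in parts)


print(shapley_values(["instruction", "example", "persona"], toy_score))  # ~[0.3, 0.1, 0.0]
```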
- On the Practicality of Differential Privacy in Federated Learning by Tuning Iteration Times [51.61278695776151]
Federated Learning (FL) is well known for its privacy protection when training machine learning models among distributed clients collaboratively.
Recent studies have pointed out that the naive FL is susceptible to gradient leakage attacks.
Differential Privacy (DP) emerges as a promising countermeasure to defend against gradient leakage attacks.
arXiv Detail & Related papers (2021-01-11T19:43:12Z)
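For context on how DP is typically added to federated learning as a defense against gradient leakage, here is a minimal clip-and-noise (Gaussian mechanism) sketch for a single client update; the paper's actual contribution concerns tuning iteration times, which is not shown here, and the parameter values are illustrative assumptions.

```python
import numpy as np


def dp_sanitize_update(update: np.ndarray, clip_norm: float = 1.0,
                       noise_multiplier: float = 1.1, rng=None) -> np.ndarray:
    """Clip the client's update to bound its L2 sensitivity, then add Gaussian
    noise calibrated to the clip norm before the update leaves the client."""
    rng = rng or np.random.default_rng()
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return update * scale + noise


# A toy four-parameter gradient update from one client.
print(dp_sanitize_update(np.array([0.5, -2.0, 0.1, 3.0])))
```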
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.