Evaluating and Understanding Scheming Propensity in LLM Agents
- URL: http://arxiv.org/abs/2603.01608v1
- Date: Mon, 02 Mar 2026 08:38:40 GMT
- Title: Evaluating and Understanding Scheming Propensity in LLM Agents
- Authors: Mia Hopman, Jannes Elstner, Maria Avramidou, Amritanshu Prasad, David Lindner,
- Abstract summary: We decompose scheming incentives into agent factors and environmental factors. We find minimal instances of scheming despite high environmental incentives, and show this is unlikely due to evaluation awareness.
- Score: 4.5440569375419715
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As frontier language models are increasingly deployed as autonomous agents pursuing complex, long-term objectives, there is increased risk of scheming: agents covertly pursuing misaligned goals. Prior work has focused on showing agents are capable of scheming, but their propensity to scheme in realistic scenarios remains underexplored. To understand when agents scheme, we decompose scheming incentives into agent factors and environmental factors. We develop realistic settings allowing us to systematically vary these factors, each with scheming opportunities for agents that pursue instrumentally convergent goals such as self-preservation, resource acquisition, and goal-guarding. We find only minimal instances of scheming despite high environmental incentives, and show this is unlikely due to evaluation awareness. While inserting adversarially-designed prompt snippets that encourage agency and goal-directedness into an agent's system prompt can induce high scheming rates, snippets used in real agent scaffolds rarely do. Surprisingly, in model organisms (Hubinger et al., 2023) built with these snippets, scheming behavior is remarkably brittle: removing a single tool can drop the scheming rate from 59% to 3%, and increasing oversight can raise rather than deter scheming by up to 25%. Our incentive decomposition enables systematic measurement of scheming propensity in settings relevant for deployment, which is necessary as agents are entrusted with increasingly consequential tasks.
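The incentive decomposition the abstract describes amounts to a factorial sweep: cross agent-side factors (e.g. system-prompt snippets) with environment-side factors (e.g. available tools, oversight level) and measure the scheming rate in each cell. The sketch below is a minimal, hypothetical illustration of that evaluation grid; `run_episode`, the factor names, and the toy decision rule are all assumptions standing in for real agent rollouts, not the paper's actual harness.

```python
# Hypothetical sketch of a factorial scheming-propensity sweep: agent factors
# (system-prompt snippets) crossed with environment factors (tool sets).
# `run_episode` is a toy stub standing in for a real agent rollout.
from itertools import product


def run_episode(snippet: str, env: dict) -> bool:
    """Stub rollout: returns True if the agent 'schemed' in this episode.

    Toy rule echoing the paper's finding that scheming is brittle: it
    requires both an agency-encouraging snippet and a specific tool.
    """
    return snippet == "agentic" and "sensitive_tool" in env["tools"]


def scheming_rate(snippet: str, env: dict, n_episodes: int = 20) -> float:
    """Fraction of episodes in which the agent schemed for this cell."""
    return sum(run_episode(snippet, env) for _ in range(n_episodes)) / n_episodes


# Agent-side factors: variants of the system prompt.
agent_factors = ["neutral", "agentic"]
# Environment-side factors: which tools the scaffold exposes.
env_factors = [
    {"tools": ["search"]},
    {"tools": ["search", "sensitive_tool"]},
]

# One scheming rate per (agent factor, environment factor) cell.
results = {
    (snippet, tuple(env["tools"])): scheming_rate(snippet, env)
    for snippet, env in product(agent_factors, env_factors)
}
```

Comparing cells that differ in a single factor (here, removing `sensitive_tool`) is what lets an evaluation like this attribute a drop in scheming rate to that factor.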
Related papers
- The Why Behind the Action: Unveiling Internal Drivers via Agentic Attribution [63.61358761489141]
Large Language Model (LLM)-based agents are widely used in real-world applications such as customer service, web navigation, and software engineering. We propose a novel framework for general agentic attribution, designed to identify the internal factors driving agent actions regardless of the task outcome. We validate our framework across a diverse suite of agentic scenarios, including standard tool use and subtle reliability risks like memory-induced bias.
arXiv Detail & Related papers (2026-01-21T15:22:21Z) - From Biased Chatbots to Biased Agents: Examining Role Assignment Effects on LLM Agent Robustness [5.572574491501413]
Large Language Models (LLMs) are increasingly deployed as autonomous agents capable of actions with real-world impacts beyond text generation. While persona-induced biases in text generation are well documented, their effects on agent task performance remain largely unexplored. We present the first systematic case study showing that demographic-based persona assignments can alter LLM agents' behavior and degrade performance across diverse domains.
arXiv Detail & Related papers (2026-01-21T02:43:07Z) - Are Your Agents Upward Deceivers? [73.1073084327614]
Large Language Model (LLM)-based agents are increasingly used as autonomous subordinates that carry out tasks for users. This raises the question of whether they may also engage in deception, similar to how individuals in human organizations lie to superiors to create a good image or avoid punishment. We observe and define agentic upward deception, a phenomenon in which an agent facing environmental constraints conceals its failure and performs unrequested actions without reporting them.
arXiv Detail & Related papers (2025-12-04T14:47:05Z) - AgentMisalignment: Measuring the Propensity for Misaligned Behaviour in LLM-Based Agents [0.0]
As Large Language Model (LLM) agents become more widespread, associated misalignment risks increase. In this work, we approach misalignment as a conflict between the internal goals pursued by the model and the goals intended by its deployer. We introduce AgentMisalignment, a benchmark suite designed to evaluate the propensity of LLM agents to misalign in realistic scenarios.
arXiv Detail & Related papers (2025-06-04T14:46:47Z) - Technical Report: Evaluating Goal Drift in Language Model Agents [0.05567007955507388]
This paper proposes a novel approach to analyzing goal drift in language models (LMs). In our experiments, agents are first explicitly given a goal through their system prompt, then exposed to competing objectives through environmental pressures. We find that goal drift correlates with models' increasing susceptibility to pattern-matching behaviors as the context length grows.
arXiv Detail & Related papers (2025-05-05T15:06:09Z) - Steering No-Regret Agents in MFGs under Model Uncertainty [19.845081182511713]
We study the design of steering rewards in Mean-Field Games with density-independent transitions. We establish sub-linear regret guarantees for the cumulative gaps between the agents' behaviors and the desired ones. Our work presents an effective framework for steering agent behaviors in large-population systems under uncertainty.
arXiv Detail & Related papers (2025-03-12T12:02:02Z) - Agent-as-a-Judge: Evaluate Agents with Agents [61.33974108405561]
We introduce the Agent-as-a-Judge framework, wherein agentic systems are used to evaluate agentic systems.
This is an organic extension of the LLM-as-a-Judge framework, incorporating agentic features that enable intermediate feedback for the entire task-solving process.
We present DevAI, a new benchmark of 55 realistic automated AI development tasks.
arXiv Detail & Related papers (2024-10-14T17:57:02Z) - Rejecting Hallucinated State Targets during Planning [84.179112256683]
In planning processes, generative or predictive models are often used to propose "targets" representing sets of expected or desirable states. Unfortunately, learned models inevitably hallucinate infeasible targets that can cause delusional behaviors and safety concerns. We devise a strategy to identify and reject infeasible targets by learning a target feasibility evaluator.
arXiv Detail & Related papers (2024-10-09T17:35:25Z) - Gödel Agent: A Self-Referential Agent Framework for Recursive Self-Improvement [112.04307762405669]
Gödel Agent is a self-evolving framework inspired by the Gödel machine. Gödel Agent can achieve continuous self-improvement, surpassing manually crafted agents in performance, efficiency, and generalizability.
arXiv Detail & Related papers (2024-10-06T10:49:40Z) - Dissecting Adversarial Robustness of Multimodal LM Agents [70.2077308846307]
We manually create 200 targeted adversarial tasks and evaluation scripts in a realistic threat model on top of VisualWebArena. We find that we can successfully break the latest agents that use black-box frontier LMs, including those that perform reflection and tree search. We also use ARE to rigorously evaluate how the robustness changes as new components are added.
arXiv Detail & Related papers (2024-06-18T17:32:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.