Unintended Misalignment from Agentic Fine-Tuning: Risks and Mitigation
- URL: http://arxiv.org/abs/2508.14031v1
- Date: Tue, 19 Aug 2025 17:53:35 GMT
- Title: Unintended Misalignment from Agentic Fine-Tuning: Risks and Mitigation
- Authors: Dongyoon Hahm, Taywon Min, Woogyeol Jin, Kimin Lee
- Abstract summary: Fine-tuning Large Language Models (LLMs) to execute agentic tasks can lead to a higher likelihood of executing harmful tasks. Prefix INjection Guard (PING) prepends automatically generated natural language prefixes to agent responses. PING consistently outperforms existing prompting approaches across diverse benchmarks in both web navigation and code generation tasks.
- Score: 19.30407680164485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Beyond simple text generation, Large Language Models (LLMs) have evolved into agentic systems capable of planning and interacting with external tools to solve complex tasks. This evolution involves fine-tuning LLMs on agent-specific tasks to enhance their proficiency. However, safety concerns are frequently overlooked during this fine-tuning process. In this work, we show that aligned LLMs can become unintentionally misaligned, leading to a higher likelihood of executing harmful tasks and a reduced tendency to refuse them when fine-tuned to execute agentic tasks. To address these safety challenges, we propose Prefix INjection Guard (PING), a simple yet effective method that prepends automatically generated natural language prefixes to agent responses, guiding them to refuse harmful requests while preserving performance on benign tasks. Specifically, we introduce an iterative approach that alternates between (1) generating candidate prefixes and (2) selecting those that optimize both task performance and refusal behavior. Experimental results demonstrate that PING significantly enhances the safety of fine-tuned LLM agents without sacrificing their effectiveness. PING consistently outperforms existing prompting approaches across diverse benchmarks in both web navigation and code generation tasks. Our analysis of internal hidden states via linear probes reveals that prefix tokens are crucial for behavior modification, explaining the performance gains. WARNING: This paper contains contents that are unethical or offensive in nature.
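The abstract describes PING as an iterative loop that alternates between generating candidate prefixes and selecting those that jointly optimize task performance and refusal behavior. A minimal sketch of that loop is shown below; the `generate_candidates`, `score_task`, and `score_refusal` interfaces and the weighting scheme are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of the iterative prefix search the abstract describes:
# alternate between (1) generating candidate prefixes and (2) keeping those
# that best balance benign-task performance and refusal of harmful requests.
# generate_candidates, score_task, and score_refusal are assumed interfaces,
# not the authors' implementation; both scores are taken to lie in [0, 1].
from typing import Callable, List

def select_prefix(
    generate_candidates: Callable[[List[str]], List[str]],
    score_task: Callable[[str], float],     # benign-task success with the prefix prepended
    score_refusal: Callable[[str], float],  # refusal rate on harmful requests with the prefix
    n_rounds: int = 5,
    keep_top_k: int = 4,
    alpha: float = 0.5,                     # assumed trade-off weight between the two objectives
) -> str:
    pool: List[str] = [""]                  # start from an empty prefix
    for _ in range(n_rounds):
        # (1) propose new natural-language prefixes, conditioned on the current pool
        candidates = pool + generate_candidates(pool)
        # (2) score each candidate on both objectives and keep the best few
        scored = sorted(
            candidates,
            key=lambda p: alpha * score_task(p) + (1 - alpha) * score_refusal(p),
            reverse=True,
        )
        pool = scored[:keep_top_k]
    return pool[0]
```

At inference time the selected prefix would simply be prepended to the agent's response before the model continues generating, as the abstract describes.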
Related papers
- Sleeper Cell: Injecting Latent Malice Temporal Backdoors into Tool-Using LLMs [0.0]
Open-weight Large Language Models (LLMs) have democratized agentic AI, yet finetuned weights are frequently shared and adopted with limited scrutiny beyond leaderboard performance. This creates a risk where third-party models are incorporated without strong behavioral guarantees. We show that poisoned models maintain state-of-the-art performance on benign tasks, incentivizing their adoption.
arXiv Detail & Related papers (2026-03-02T22:01:08Z) - From Biased Chatbots to Biased Agents: Examining Role Assignment Effects on LLM Agent Robustness [5.572574491501413]
Large Language Models (LLMs) are increasingly deployed as autonomous agents capable of actions with real-world impacts beyond text generation. While persona-induced biases in text generation are well documented, their effects on agent task performance remain largely unexplored. We present the first systematic case study showing that demographic-based persona assignments can alter LLM agents' behavior and degrade performance across diverse domains.
arXiv Detail & Related papers (2026-01-21T02:43:07Z) - SCOPE: Prompt Evolution for Enhancing Agent Effectiveness [53.75986399936395]
Large Language Model (LLM) agents are increasingly deployed in environments that generate massive, dynamic contexts. While agents have access to this context, their static prompts lack the mechanisms to manage it effectively. We introduce SCOPE (Self-evolving Context Optimization via Prompt Evolution). We propose a Dual-Stream mechanism that balances tactical specificity (resolving immediate errors) with strategic generality (evolving long-term principles).
arXiv Detail & Related papers (2025-12-17T12:25:05Z) - LLAMA: Multi-Feedback Smart Contract Fuzzing Framework with LLM-Guided Seed Generation [56.84049855266145]
We propose a Multi-feedback Smart Contract Fuzzing framework (LLAMA) that integrates evolutionary mutation strategies and hybrid testing techniques. LLAMA achieves 91% instruction coverage and 90% branch coverage, while detecting 132 out of 148 known vulnerabilities. These results highlight LLAMA's effectiveness, adaptability, and practicality in real-world smart contract security testing scenarios.
arXiv Detail & Related papers (2025-07-16T09:46:58Z) - Explicit Vulnerability Generation with LLMs: An Investigation Beyond Adversarial Attacks [0.5218155982819203]
Large Language Models (LLMs) are increasingly used as code assistants. This study examines a more direct threat: open-source LLMs generating vulnerable code when prompted.
arXiv Detail & Related papers (2025-07-14T08:36:26Z) - AgentAlign: Navigating Safety Alignment in the Shift from Informative to Agentic Large Language Models [23.916663925674737]
Previous work has shown that current LLM-based agents execute numerous malicious tasks even without being attacked. We propose AgentAlign, a novel framework that leverages abstract behavior chains as a medium for safety alignment data synthesis. Our framework enables the generation of highly authentic and executable instructions while capturing complex multi-step dynamics.
arXiv Detail & Related papers (2025-05-29T03:02:18Z) - Watch your steps: Dormant Adversarial Behaviors that Activate upon LLM Finetuning [16.543554028816477]
Finetuning open-weight Large Language Models (LLMs) is standard practice for achieving task-specific performance improvements. Until now, finetuning has been regarded as a controlled and secure process in which training on benign datasets leads to predictable behaviors. We demonstrate, for the first time, that an adversary can create compromised LLMs that are performant and benign, yet exhibit adversarial behaviors once finetuned by downstream users.
arXiv Detail & Related papers (2025-05-22T11:59:44Z) - AgentVigil: Generic Black-Box Red-teaming for Indirect Prompt Injection against LLM Agents [54.29555239363013]
We propose a generic black-box fuzzing framework, AgentVigil, to automatically discover and exploit indirect prompt injection vulnerabilities. We evaluate AgentVigil on two public benchmarks, AgentDojo and VWA-adv, where it achieves 71% and 70% success rates against agents based on o3-mini and GPT-4o, respectively. We apply our attacks in real-world environments, successfully misleading agents to navigate to arbitrary URLs, including malicious sites.
arXiv Detail & Related papers (2025-05-09T07:40:17Z) - UDora: A Unified Red Teaming Framework against LLM Agents by Dynamically Hijacking Their Own Reasoning [17.448966928905733]
Large Language Model (LLM) agents equipped with external tools have become increasingly powerful for complex tasks. We present UDora, a unified red teaming framework designed for LLM agents that dynamically hijacks the agent's reasoning processes to compel malicious behavior.
arXiv Detail & Related papers (2025-02-28T21:30:28Z) - SafeAgentBench: A Benchmark for Safe Task Planning of Embodied LLM Agents [58.65256663334316]
We present SafeAgentBench -- the first benchmark for safety-aware task planning of embodied LLM agents in interactive simulation environments. SafeAgentBench includes: (1) an executable, diverse, and high-quality dataset of 750 tasks, rigorously curated to cover 10 potential hazards and 3 task types; (2) SafeAgentEnv, a universal embodied environment with a low-level controller, supporting multi-agent execution with 17 high-level actions for 9 state-of-the-art baselines; and (3) reliable evaluation methods from both execution and semantic perspectives.
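The entry above enumerates the benchmark's components rather than a schema; the dataclass below is only a hypothetical illustration of how one of its 750 task records might be organized. The field names are invented for illustration and do not reflect SafeAgentBench's actual data format.

```python
# Hypothetical layout for a safety-aware planning task record; field names are
# illustrative only and do not reflect SafeAgentBench's actual data format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EmbodiedSafetyTask:
    instruction: str            # natural-language task given to the embodied agent
    task_type: str              # one of the 3 task types mentioned in the summary
    hazard_category: str        # one of the 10 potential hazards
    should_refuse: bool         # whether a safe agent is expected to refuse this task
    reference_actions: List[str] = field(default_factory=list)  # high-level actions for execution-level evaluation
```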
arXiv Detail & Related papers (2024-12-17T18:55:58Z) - Towards Action Hijacking of Large Language Model-based Agent [23.13653350521422]
We introduce AI$\mathbf{2}$, a novel attack to manipulate the action plans of LLM-based applications. It first collects action-aware knowledge from the victim application. Based on such knowledge, the attacker can craft misleading inputs that steer the LLM into generating harmful action plans.
arXiv Detail & Related papers (2024-12-14T12:11:26Z) - Compromising Embodied Agents with Contextual Backdoor Attacks [69.71630408822767]
Large language models (LLMs) have transformed the development of embodied intelligence.
This paper uncovers a significant backdoor security threat within this process.
By poisoning just a few contextual demonstrations, attackers can covertly compromise the contextual environment of a black-box LLM.
arXiv Detail & Related papers (2024-08-06T01:20:12Z) - On Prompt-Driven Safeguarding for Large Language Models [172.13943777203377]
We find that in the representation space, the input queries are typically moved by safety prompts in a "higher-refusal" direction.
Inspired by these findings, we propose a method for safety prompt optimization, namely DRO.
Treating a safety prompt as continuous, trainable embeddings, DRO learns to move the queries' representations along or opposite the refusal direction, depending on their harmfulness.
arXiv Detail & Related papers (2024-01-31T17:28:24Z)
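The DRO entry above describes learning continuous safety-prompt embeddings that move query representations along or against a refusal direction according to harmfulness. A rough PyTorch-style sketch of such an objective follows; the `encode` interface, the precomputed refusal direction, and the loss form are assumptions rather than the paper's exact formulation.

```python
# Rough sketch of a DRO-style objective: push representations of harmful queries
# along a refusal direction and benign queries against it. The encode() interface
# and the precomputed refusal direction are assumptions, not the paper's exact setup.
import torch

def refusal_direction_loss(
    encode,                           # callable: (soft_prompt, query) -> hidden state of shape [d]
    soft_prompt: torch.nn.Parameter,  # trainable safety-prompt embeddings
    queries: list,                    # batch of query strings
    is_harmful: torch.Tensor,         # shape [B]; 1.0 for harmful queries, 0.0 for benign ones
    refusal_dir: torch.Tensor,        # shape [d]; unit vector estimated from refused vs. complied queries
) -> torch.Tensor:
    reps = torch.stack([encode(soft_prompt, q) for q in queries])  # [B, d]
    proj = reps @ refusal_dir                                      # projection onto the refusal direction
    sign = 2.0 * is_harmful - 1.0                                  # +1 for harmful, -1 for benign
    # minimizing this raises the projection for harmful queries and lowers it for benign ones
    return (-sign * proj).mean()
```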