AutoLibra: Agent Metric Induction from Open-Ended Human Feedback
- URL: http://arxiv.org/abs/2505.02820v3
- Date: Wed, 29 Oct 2025 23:29:35 GMT
- Title: AutoLibra: Agent Metric Induction from Open-Ended Human Feedback
- Authors: Hao Zhu, Phil Cuvin, Xinkai Yu, Charlotte Ka Yee Yan, Jason Zhang, Diyi Yang
- Abstract summary: AutoLibra transforms open-ended human feedback into metrics for evaluating fine-grained behaviors in agent trajectories. We experimentally demonstrate AutoLibra's ability to induce more concrete agent evaluation metrics than the ones proposed in previous agent evaluation benchmarks. Our results suggest that AutoLibra is a powerful task-agnostic tool for evaluating and improving language agents.
- Score: 43.36710903170168
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Agents are predominantly evaluated and optimized via task success metrics, which are coarse, rely on manual design from experts, and fail to reward intermediate emergent behaviors. We propose **AutoLibra**, a framework for agent evaluation that transforms open-ended human feedback, *e.g.* "If you find that the button is disabled, don't click it again" or "This agent has too much autonomy to decide what to do on its own", into metrics for evaluating fine-grained behaviors in agent trajectories. AutoLibra accomplishes this by grounding feedback in an agent's behavior, clustering similar positive and negative behaviors, and creating concrete metrics with clear definitions and concrete examples, which can be used to prompt LLM-as-a-Judge evaluators. We further propose two meta-metrics to evaluate the alignment of a set of (induced) metrics with open feedback: "coverage" and "redundancy". By optimizing these meta-metrics, we experimentally demonstrate AutoLibra's ability to induce more concrete agent evaluation metrics than the ones proposed in previous agent evaluation benchmarks and to discover new metrics for analyzing agents. We also present two applications of AutoLibra in agent improvement: first, we show that AutoLibra helps human prompt engineers diagnose agent failures and improve prompts iteratively. Moreover, we find that AutoLibra can induce metrics for automatic agent optimization, enabling agents to improve through self-regulation. Our results suggest that AutoLibra is a powerful task-agnostic tool for evaluating and improving language agents.
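The abstract's two meta-metrics can be sketched in a few lines; the set-based matching, the function names, and the exact formulas below are illustrative assumptions, not the paper's definitions:

```python
# Toy sketch of coverage/redundancy-style meta-metrics: each induced metric
# is represented by the set of feedback-item ids it matches. These formulas
# are illustrative simplifications, not AutoLibra's exact definitions.

def coverage(feedback_ids, metric_matches):
    """Fraction of feedback items matched by at least one induced metric."""
    covered = set().union(*metric_matches.values()) if metric_matches else set()
    return len(covered & set(feedback_ids)) / len(feedback_ids)

def redundancy(feedback_ids, metric_matches):
    """Mean number of *extra* metrics matching each covered feedback item."""
    counts = [sum(fid in m for m in metric_matches.values())
              for fid in feedback_ids]
    covered = [c for c in counts if c > 0]
    return sum(c - 1 for c in covered) / len(covered) if covered else 0.0

feedback = ["f1", "f2", "f3", "f4"]
metrics = {
    "avoid-disabled-buttons": {"f1", "f2"},
    "ask-before-acting":      {"f2", "f3"},
}
print(coverage(feedback, metrics))    # 3 of 4 feedback items covered -> 0.75
print(redundancy(feedback, metrics))  # only f2 matched twice -> 1/3 extra
```

Under this toy reading, optimizing the induced metric set means driving coverage toward 1 while keeping redundancy near 0.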
Related papers
- AgentSelect: Benchmark for Narrative Query-to-Agent Recommendation [39.61543921719145]
AgentSelect is a benchmark that reframes agent selection as narrative query-to-agent recommendation. It converts heterogeneous evaluation artifacts into unified, positive-only interaction data. AgentSelect provides the first unified data and evaluation infrastructure for agent recommendation.
arXiv Detail & Related papers (2026-03-04T06:17:51Z) - AutoMetrics: Approximate Human Judgements with Automatically Generated Evaluators [57.003100107659684]
AutoMetrics is a framework for synthesizing evaluation metrics under low-data constraints. We show that AutoMetrics can be used as a proxy reward to equal effect as a verifiable reward.
arXiv Detail & Related papers (2025-12-19T06:32:46Z) - How can we assess human-agent interactions? Case studies in software agent design [52.953425368394306]
We make two major steps towards the rigorous assessment of human-agent interactions. We propose PULSE, a framework for more efficient human-centric evaluation of agent designs. We deploy the framework on a large-scale web platform built around the open-source software agent OpenHands.
arXiv Detail & Related papers (2025-10-10T19:04:28Z) - AutoSCORE: Enhancing Automated Scoring with Multi-Agent Large Language Models via Structured Component Recognition [27.312190686305588]
Large language models (LLMs) have shown strong potential in automated scoring. Their use as end-to-end raters faces challenges such as low accuracy, prompt sensitivity, limited interpretability, and rubric misalignment. We propose AutoSCORE, a multi-agent LLM framework enhancing automated scoring via rubric-aligned Structured COmponent REcognition.
arXiv Detail & Related papers (2025-09-26T05:45:14Z) - SI-Agent: An Agentic Framework for Feedback-Driven Generation and Tuning of Human-Readable System Instructions for Large Language Models [0.0]
System Instructions (SIs) are pivotal for guiding Large Language Models (LLMs). Existing automated methods frequently generate non-human-readable "soft prompts," sacrificing interpretability. This paper introduces SI-Agent, a novel agentic framework designed to automatically generate and iteratively refine human-readable SIs.
arXiv Detail & Related papers (2025-07-03T23:44:50Z) - AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories [59.214178488091584]
We propose AgentRewardBench, the first benchmark to assess the effectiveness of LLM judges for evaluating web agents. Using our benchmark, we evaluate 12 LLM judges and find that no single LLM excels across all benchmarks. We also find that the rule-based evaluation used by common benchmarks tends to underreport the success rate of web agents.
arXiv Detail & Related papers (2025-04-11T19:49:22Z) - AgentAda: Skill-Adaptive Data Analytics for Tailored Insight Discovery [20.333502467911828]
We introduce AgentAda, the first analytics agent that can learn and use new analytics skills to extract more specialized insights. Unlike existing methods that require users to manually decide which data analytics method to apply, AgentAda automatically identifies the skill needed to perform the analysis. We conducted a human evaluation demonstrating that AgentAda provides more insightful analytics than existing tools, with 48.78% of evaluators preferring its analyses, compared to 27.67% for the unskilled agent.
arXiv Detail & Related papers (2025-04-10T03:27:25Z) - AutoEval: A Practical Framework for Autonomous Evaluation of Mobile Agents [5.515875179998062]
AutoEval is an autonomous agent evaluation framework that tests a mobile agent without any manual effort. We implement a prototype of our framework and validate the automatically generated task reward signals, finding over 93% coverage of human-annotated reward signals. We evaluate state-of-the-art mobile agents using our framework, providing detailed insights into their performance characteristics and limitations.
arXiv Detail & Related papers (2025-03-04T08:44:30Z) - QLASS: Boosting Language Agent Inference via Q-Guided Stepwise Search [89.97082652805904]
We propose QLASS (Q-guided Language Agent Stepwise Search) to automatically generate annotations by estimating Q-values. With the stepwise guidance, we propose a Q-guided generation strategy to enable language agents to better adapt to long-term value. We empirically demonstrate that QLASS can lead to more effective decision making through qualitative analysis.
arXiv Detail & Related papers (2025-02-04T18:58:31Z) - Towards Realistic Evaluation of Commit Message Generation by Matching Online and Offline Settings [77.20838441870151]
We use an online metric - the number of edits users introduce before committing the generated messages to the VCS - to select metrics for offline experiments. We collect a dataset with 57 pairs consisting of commit messages generated by GPT-4 and their counterparts edited by human experts. Our results indicate that edit distance exhibits the highest correlation with the online metric, whereas commonly used similarity metrics such as BLEU and METEOR demonstrate low correlation.
arXiv Detail & Related papers (2024-10-15T20:32:07Z) - Gödel Agent: A Self-Referential Agent Framework for Recursive Self-Improvement [112.04307762405669]
Gödel Agent is a self-evolving framework inspired by the Gödel machine. Gödel Agent can achieve continuous self-improvement, surpassing manually crafted agents in performance, efficiency, and generalizability.
arXiv Detail & Related papers (2024-10-06T10:49:40Z) - On Generative Agents in Recommendation [58.42840923200071]
Agent4Rec is a user simulator in recommendation based on Large Language Models.
Each agent interacts with personalized recommender models in a page-by-page manner.
arXiv Detail & Related papers (2023-10-16T06:41:16Z) - Towards Interpretable and Efficient Automatic Reference-Based Summarization Evaluation [160.07938471250048]
Interpretability and efficiency are two important considerations for the adoption of neural automatic metrics.
We develop strong-performing automatic metrics for reference-based summarization evaluation.
arXiv Detail & Related papers (2023-03-07T02:49:50Z)