AutoLibra: Agent Metric Induction from Open-Ended Feedback
- URL: http://arxiv.org/abs/2505.02820v2
- Date: Mon, 28 Jul 2025 07:46:27 GMT
- Title: AutoLibra: Agent Metric Induction from Open-Ended Feedback
- Authors: Hao Zhu, Phil Cuvin, Xinkai Yu, Charlotte Ka Yee Yan, Jason Zhang, Diyi Yang
- Abstract summary: AutoLibra is a framework for agent evaluation that transforms open-ended human feedback into metrics for evaluating fine-grained behaviors in agent trajectories. We experimentally demonstrate AutoLibra's ability to induce more concrete agent evaluation metrics. We show that AutoLibra-induced metrics serve as better prompt-engineering targets than the task success rate.
- Score: 44.905607036805634
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Agents are predominantly evaluated and optimized via task success metrics, which are coarse, rely on manual design by experts, and fail to reward intermediate emergent behaviors. We propose AutoLibra, a framework for agent evaluation that transforms open-ended human feedback, e.g. "If you find that the button is disabled, don't click it again" or "This agent has too much autonomy to decide what to do on its own", into metrics for evaluating fine-grained behaviors in agent trajectories. AutoLibra accomplishes this by grounding feedback to an agent's behavior, clustering similar positive and negative behaviors, and creating metrics with clear definitions and concrete examples, which can be used to prompt LLM-as-a-Judge evaluators. We further propose two meta-metrics to evaluate the alignment of a set of (induced) metrics with open feedback: "coverage" and "redundancy". By optimizing these meta-metrics, we experimentally demonstrate AutoLibra's ability to induce more concrete agent evaluation metrics than the ones proposed in previous agent evaluation benchmarks and to discover new metrics for analyzing agents. We also present two applications of AutoLibra in agent improvement. First, we show that AutoLibra-induced metrics serve as better prompt-engineering targets than the task success rate on a wide range of text game tasks, improving agent performance over the baseline by a mean of 20%. Second, we show that AutoLibra can iteratively select high-quality fine-tuning data for web navigation agents. Our results suggest that AutoLibra is a powerful task-agnostic tool for evaluating and improving language agents.
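The abstract names two meta-metrics, "coverage" and "redundancy", for checking how well a set of induced metrics aligns with open-ended feedback, but it does not spell out their formulas. The sketch below is a minimal illustration under assumed definitions: coverage as the fraction of feedback items accounted for by at least one induced metric, and redundancy as the average number of extra metrics per covered item. The function names, data structures, and example metric names are hypothetical and not taken from the paper.

```python
from typing import Dict, List, Set


def coverage(feedback_ids: List[str],
             metric_matches: Dict[str, Set[str]]) -> float:
    """Fraction of feedback items matched by at least one induced metric.

    NOTE: illustrative only; the paper's formal definition may differ.
    `metric_matches` maps each induced metric name to the set of feedback
    item ids it accounts for (e.g., as judged by an LLM).
    """
    covered = set().union(*metric_matches.values()) if metric_matches else set()
    return len(covered & set(feedback_ids)) / max(len(feedback_ids), 1)


def redundancy(feedback_ids: List[str],
               metric_matches: Dict[str, Set[str]]) -> float:
    """Average number of extra metrics covering each covered feedback item.

    NOTE: one plausible reading of "redundancy", not the paper's exact formula.
    """
    counts = [sum(fid in matched for matched in metric_matches.values())
              for fid in feedback_ids]
    covered_counts = [c for c in counts if c > 0]
    if not covered_counts:
        return 0.0
    return sum(c - 1 for c in covered_counts) / len(covered_counts)


# Example usage with hypothetical feedback items and induced metrics.
feedback = ["fb1", "fb2", "fb3"]
matches = {
    "avoids_disabled_buttons": {"fb1"},
    "asks_before_irreversible_actions": {"fb1", "fb2"},
}
print(coverage(feedback, matches))    # ~0.67: fb3 is not covered
print(redundancy(feedback, matches))  # 0.5: fb1 is covered twice
```

Under these assumed definitions, optimizing for high coverage and low redundancy pushes the induced metric set to explain as much of the feedback as possible without duplicating criteria, which matches the role the abstract gives these meta-metrics.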
Related papers
- SI-Agent: An Agentic Framework for Feedback-Driven Generation and Tuning of Human-Readable System Instructions for Large Language Models [0.0]
System Instructions (SIs) are pivotal for guiding Large Language Models (LLMs).
Existing automated methods frequently generate non-human-readable "soft prompts," sacrificing interpretability.
This paper introduces SI-Agent, a novel agentic framework designed to automatically generate and iteratively refine human-readable SIs.
arXiv Detail & Related papers (2025-07-03T23:44:50Z)
- AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories [59.214178488091584]
We propose AgentRewardBench, the first benchmark to assess the effectiveness of LLM judges for evaluating web agents.
Using our benchmark, we evaluate 12 LLM judges and find that no single LLM excels across all benchmarks.
We also find that the rule-based evaluation used by common benchmarks tends to underreport the success rate of web agents.
arXiv Detail & Related papers (2025-04-11T19:49:22Z)
- AgentAda: Skill-Adaptive Data Analytics for Tailored Insight Discovery [20.333502467911828]
We introduce AgentAda, the first analytics agent that can learn and use new analytics skills to extract more specialized insights.
Unlike existing methods that require users to manually decide which data analytics method to apply, AgentAda automatically identifies the skill needed to perform the analysis.
We conducted a human evaluation demonstrating that AgentAda provides more insightful analytics than existing tools, with 48.78% of evaluators preferring its analyses, compared to 27.67% for the unskilled agent.
arXiv Detail & Related papers (2025-04-10T03:27:25Z)
- AutoEval: A Practical Framework for Autonomous Evaluation of Mobile Agents [5.515875179998062]
AutoEval is an autonomous agent evaluation framework that tests a mobile agent without any manual effort.
We implement a prototype of our framework and validate the automatically generated task reward signals, finding over 93% coverage of human-annotated reward signals.
We evaluate state-of-the-art mobile agents using our framework, providing detailed insights into their performance characteristics and limitations.
arXiv Detail & Related papers (2025-03-04T08:44:30Z)
- QLASS: Boosting Language Agent Inference via Q-Guided Stepwise Search [89.97082652805904]
We propose QLASS (Q-guided Language Agent Stepwise Search) to automatically generate annotations by estimating Q-values.
With this stepwise guidance, we propose a Q-guided generation strategy to enable language agents to better adapt to long-term value.
We empirically demonstrate that QLASS can lead to more effective decision making through qualitative analysis.
arXiv Detail & Related papers (2025-02-04T18:58:31Z)
- Towards Realistic Evaluation of Commit Message Generation by Matching Online and Offline Settings [77.20838441870151]
We use an online metric - the number of edits users introduce before committing the generated messages to the VCS - to select metrics for offline experiments.
We collect a dataset with 57 pairs consisting of commit messages generated by GPT-4 and their counterparts edited by human experts.
Our results indicate that edit distance exhibits the highest correlation with the online metric, whereas commonly used similarity metrics such as BLEU and METEOR demonstrate low correlation.
arXiv Detail & Related papers (2024-10-15T20:32:07Z)
- On Generative Agents in Recommendation [58.42840923200071]
Agent4Rec is a user simulator for recommendation based on Large Language Models.
Each agent interacts with personalized recommender models in a page-by-page manner.
arXiv Detail & Related papers (2023-10-16T06:41:16Z)
- Towards Interpretable and Efficient Automatic Reference-Based Summarization Evaluation [160.07938471250048]
Interpretability and efficiency are two important considerations for the adoption of neural automatic metrics.
We develop strong-performing automatic metrics for reference-based summarization evaluation.
arXiv Detail & Related papers (2023-03-07T02:49:50Z)