From Benchmarks to Business Impact: Deploying IBM Generalist Agent in Enterprise Production
- URL: http://arxiv.org/abs/2510.23856v1
- Date: Mon, 27 Oct 2025 20:55:00 GMT
- Title: From Benchmarks to Business Impact: Deploying IBM Generalist Agent in Enterprise Production
- Authors: Segev Shlomov, Alon Oved, Sami Marreed, Ido Levy, Offer Akrabi, Avi Yaeli, Łukasz Strąk, Elizabeth Koumpan, Yinon Goldshtein, Eilam Shapira, Nir Mashkif, Asaf Adi,
- Abstract summary: This paper reports IBM's experience developing and piloting the Computer Using Generalist Agent (CUGA). CUGA adopts a hierarchical planner--executor architecture with strong analytical foundations. It was evaluated in a pilot within the Business-Process-Outsourcing talent acquisition domain.
- Score: 6.189323683437766
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Agents are rapidly advancing in automating digital work, but enterprises face a harder challenge: moving beyond prototypes to deployed systems that deliver measurable business value. This path is complicated by fragmented frameworks, slow development, and the absence of standardized evaluation practices. Generalist agents have emerged as a promising direction, excelling on academic benchmarks and offering flexibility across task types, applications, and modalities. Yet, evidence of their use in production enterprise settings remains limited. This paper reports IBM's experience developing and piloting the Computer Using Generalist Agent (CUGA), which has been open-sourced for the community (https://github.com/cuga-project/cuga-agent). CUGA adopts a hierarchical planner--executor architecture with strong analytical foundations, achieving state-of-the-art performance on AppWorld and WebArena. Beyond benchmarks, it was evaluated in a pilot within the Business-Process-Outsourcing talent acquisition domain, addressing enterprise requirements for scalability, auditability, safety, and governance. To support assessment, we introduce BPO-TA, a 26-task benchmark spanning 13 analytics endpoints. In preliminary evaluations, CUGA approached the accuracy of specialized agents while indicating potential for reducing development time and cost. Our contribution is twofold: presenting early evidence of generalist agents operating at enterprise scale, and distilling technical and organizational lessons from this initial pilot. We outline requirements and next steps for advancing research-grade architectures like CUGA into robust, enterprise-ready systems.
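The abstract describes a hierarchical planner--executor architecture. The sketch below is a minimal, illustrative rendering of that general pattern, not the CUGA implementation (see https://github.com/cuga-project/cuga-agent for the actual code); all names (`plan`, `execute`, `Step`, `AgentState`) and the fixed task decomposition are assumptions made for illustration.

```python
# Minimal sketch of a hierarchical planner--executor loop (illustrative only;
# not the CUGA implementation). A planner decomposes a goal into subtasks and
# an executor carries each one out, keeping a history for auditability.
from dataclasses import dataclass, field


@dataclass
class Step:
    description: str          # natural-language subtask produced by the planner
    done: bool = False
    result: str | None = None


@dataclass
class AgentState:
    goal: str
    steps: list[Step] = field(default_factory=list)
    history: list[str] = field(default_factory=list)


def plan(state: AgentState) -> list[Step]:
    """Planner: decompose the goal into subtasks.

    A real system would call an LLM with the goal and history;
    here a fixed decomposition stands in for illustration."""
    return [
        Step("locate the relevant application or API"),
        Step("execute the action and collect the result"),
        Step("verify the result against the goal"),
    ]


def execute(step: Step, state: AgentState) -> str:
    """Executor: carry out a single subtask (tool call, browser action, API call).

    Stubbed out here; a production executor would dispatch to concrete tools."""
    return f"completed: {step.description}"


def run(goal: str, max_iters: int = 10) -> AgentState:
    state = AgentState(goal=goal)
    state.steps = plan(state)
    for step in state.steps[:max_iters]:
        step.result = execute(step, state)
        step.done = True
        state.history.append(step.result)  # audit trail supports traceability
    return state


if __name__ == "__main__":
    final = run("Screen candidates for a talent-acquisition requisition")
    for line in final.history:
        print(line)
```

Separating planning from execution in this way is what lets the enterprise requirements named in the abstract (auditability, safety, governance) be enforced at the step boundary rather than inside a monolithic loop.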
Related papers
- EntWorld: A Holistic Environment and Benchmark for Verifiable Enterprise GUI Agents [12.7922877987936]
EntWorld is a large-scale benchmark consisting of 1,756 tasks across six representative enterprise domains. We propose a schema-grounded task generation framework that directly reverse-engineers business logic from underlying database schemas. We show that state-of-the-art models achieve a 47.61% success rate on EntWorld, substantially lower than human performance.
arXiv Detail & Related papers (2026-01-25T06:58:15Z) - ABC-Bench: Benchmarking Agentic Backend Coding in Real-World Development [72.4729759618632]
We introduce ABC-Bench, a benchmark to evaluate agentic backend coding within a realistic, executable workflow. We curated 224 practical tasks spanning 8 languages and 19 frameworks from open-source repositories. Our evaluation reveals that even state-of-the-art models struggle to deliver reliable performance on these holistic tasks.
arXiv Detail & Related papers (2026-01-16T08:23:52Z) - APEX-SWE [4.927317067589892]
We introduce the AI Productivity Index for Software Engineering (APEX-SWE). APEX-SWE is a benchmark for assessing whether frontier AI models can execute economically valuable software engineering work. Gemini 3 Pro (Thinking = High) performs best, with a Pass@1 score of 25%.
arXiv Detail & Related papers (2026-01-13T18:44:08Z) - Continuous Benchmark Generation for Evaluating Enterprise-scale LLM Agents [23.277131100190086]
We propose a process of benchmark generation that helps evolve the benchmarks as requirements change and supports robust evaluation of evolving AI agents. Our approach relies on semi-structured documents in which developers express high-level intent, and uses state-of-the-art LLMs to generate benchmarks from just a small number of such documents.
arXiv Detail & Related papers (2025-11-13T07:48:22Z) - Alita-G: Self-Evolving Generative Agent for Agent Generation [54.49365835457433]
We present ALITA-G, a framework that transforms a general-purpose agent into a domain expert. In this framework, a generalist agent executes a curated suite of target-domain tasks. It attains strong gains while reducing computation costs.
arXiv Detail & Related papers (2025-10-27T17:59:14Z) - A Comprehensive Survey on Benchmarks and Solutions in Software Engineering of LLM-Empowered Agentic System [56.40989626804489]
This survey provides the first holistic analysis of Large Language Model-powered software engineering. We review over 150 recent papers and propose a taxonomy along two key dimensions: (1) Solutions, categorized into prompt-based, fine-tuning-based, and agent-based paradigms, and (2) Benchmarks, including tasks such as code generation, translation, and repair.
arXiv Detail & Related papers (2025-10-10T06:56:50Z) - Shell or Nothing: Real-World Benchmarks and Memory-Activated Agents for Automated Penetration Testing [23.554239007767276]
We introduce the first real-world, agent-oriented pentesting benchmark, TermiBench. We propose TermiAgent, a multi-agent penetration testing framework. In evaluations, our work outperforms state-of-the-art agents, exhibiting stronger penetration testing capability.
arXiv Detail & Related papers (2025-09-11T07:30:44Z) - OpenCUA: Open Foundations for Computer-Use Agents [74.61449905487565]
Vision-language models have demonstrated impressive capabilities as computer-use agents (CUAs). We propose OpenCUA, a comprehensive open-source framework for scaling CUA data and foundation models. Our end-to-end agent models demonstrate strong performance across CUA benchmarks.
arXiv Detail & Related papers (2025-08-12T17:52:32Z) - InternBootcamp Technical Report: Boosting LLM Reasoning with Verifiable Task Scaling [71.37579508777843]
Large language models (LLMs) have revolutionized artificial intelligence by enabling complex reasoning capabilities. To address this gap, we present InternBootcamp, an open-source framework comprising 1000+ domain-diverse task environments.
arXiv Detail & Related papers (2025-08-12T05:00:00Z) - OS Agents: A Survey on MLLM-based Agents for General Computing Devices Use [101.57043903478257]
The dream of creating AI assistants as capable and versatile as the fictional J.A.R.V.I.S. from Iron Man has long captivated imaginations. With the evolution of (multi-modal) large language models ((M)LLMs), this dream is closer to reality. This survey aims to consolidate the state of OS Agents research, providing insights to guide both academic inquiry and industrial development.
arXiv Detail & Related papers (2025-08-06T14:33:45Z) - Towards Enterprise-Ready Computer Using Generalist Agent [2.7426201283942766]
This paper presents our ongoing work toward developing an enterprise-ready Computer Using Generalist Agent (CUGA) system. By integrating state-of-the-art agentic AI techniques with a systematic approach to iterative evaluation, analysis, and refinement, we have achieved rapid and cost-effective performance gains.
arXiv Detail & Related papers (2025-02-24T09:31:56Z)