SWE-fficiency: Can Language Models Optimize Real-World Repositories on Real Workloads?
- URL: http://arxiv.org/abs/2511.06090v2
- Date: Wed, 12 Nov 2025 01:20:18 GMT
- Title: SWE-fficiency: Can Language Models Optimize Real-World Repositories on Real Workloads?
- Authors: Jeffrey Jian Ma, Milad Hashemi, Amir Yazdanbakhsh, Kevin Swersky, Ofir Press, Enhui Li, Vijay Janapa Reddi, Parthasarathy Ranganathan
- Abstract summary: SWE-fficiency is a benchmark for evaluating repository-level performance optimization on real workloads. Our suite contains 498 tasks across nine widely used data-science, machine-learning, and HPC repositories.
- Score: 22.075705411944895
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optimizing the performance of large-scale software repositories demands expertise in code reasoning and software engineering (SWE) to reduce runtime while preserving program correctness. However, most benchmarks emphasize what to fix rather than how to fix code. We introduce SWE-fficiency, a benchmark for evaluating repository-level performance optimization on real workloads. Our suite contains 498 tasks across nine widely used data-science, machine-learning, and HPC repositories (e.g., numpy, pandas, scipy): given a complete codebase and a slow workload, an agent must investigate code semantics, localize bottlenecks and relevant tests, and produce a patch that matches or exceeds expert speedup while passing the same unit tests. To enable this how-to-fix evaluation, our automated pipeline scrapes GitHub pull requests for performance-improving edits, combining keyword filtering, static analysis, coverage tooling, and execution validation to both confirm expert speedup baselines and identify relevant repository unit tests. Empirical evaluation of state-of-the-art agents reveals significant underperformance. On average, agents achieve less than 0.15x the expert speedup: agents struggle to localize optimization opportunities, reason about execution across functions, and maintain correctness in proposed edits. We release the benchmark and accompanying data pipeline to facilitate research on automated performance engineering and long-horizon software reasoning.
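The abstract's headline number (agents achieve less than 0.15x the expert speedup) implies a metric that normalizes the agent's achieved speedup by the expert's. A minimal sketch of that computation is below; the function names and structure are illustrative assumptions, not the benchmark's actual harness or API.

```python
def speedup(baseline_s: float, patched_s: float) -> float:
    """Ratio of baseline runtime to patched runtime; > 1.0 means faster."""
    return baseline_s / patched_s

def normalized_speedup(baseline_s: float, agent_s: float, expert_s: float) -> float:
    """Agent speedup expressed as a fraction of the expert (PR author) speedup."""
    return speedup(baseline_s, agent_s) / speedup(baseline_s, expert_s)

# Example: a workload runs in 10 s unpatched; the expert patch brings it
# to 2 s (5x speedup) and the agent patch to 8 s (1.25x speedup), so the
# agent achieves 0.25x the expert speedup.
print(normalized_speedup(10.0, 8.0, 2.0))  # -> 0.25
```

Under this reading, the reported sub-0.15x average means agents typically recover less than 15% of the runtime improvement that the original pull-request authors achieved, while also being required to pass the same unit tests.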
Related papers
- ISO-Bench: Can Coding Agents Optimize Real-World Inference Workloads? [0.8749675983608171]
We introduce ISO-Bench, a benchmark for coding agents to test their capabilities on real-world inference tasks. We curated 54 tasks from merged pull requests with measurable performance improvements.
arXiv Detail & Related papers (2026-02-23T08:37:53Z) - SWE-World: Building Software Engineering Agents in Docker-Free Environments [91.17484806743641]
SWE-World is a Docker-free framework that replaces physical execution environments with a learned surrogate for training and evaluating software engineering agents. We show that SWE-World raises Qwen2.5-Coder-32B from 6.2% to 52.0% via Docker-free SFT, 55.0% with Docker-free RL, and 68.2% with further TTS.
arXiv Detail & Related papers (2026-02-03T11:44:39Z) - RealSec-bench: A Benchmark for Evaluating Secure Code Generation in Real-World Repositories [58.32028251925354]
Large Language Models (LLMs) have demonstrated remarkable capabilities in code generation, but their proficiency in producing secure code remains a critical, under-explored area. We introduce RealSec-bench, a new benchmark for secure code generation meticulously constructed from real-world, high-risk Java repositories.
arXiv Detail & Related papers (2026-01-30T08:29:01Z) - TimeMachine-bench: A Benchmark for Evaluating Model Capabilities in Repository-Level Migration Tasks [12.573674060643787]
TimeMachine-bench is a benchmark designed to evaluate software migration in real-world Python projects. Our benchmark consists of GitHub repositories whose tests begin to fail in response to dependency updates.
arXiv Detail & Related papers (2026-01-30T05:42:45Z) - Process-Level Trajectory Evaluation for Environment Configuration in Software Engineering Agents [71.85020581835042]
Large language model-based agents show promise for software engineering, but environment configuration remains a bottleneck. Existing benchmarks assess only end-to-end build/test success, obscuring where and why agents succeed or fail. We introduce Enconda-bench, which provides process-level trajectory assessment of fine-grained agent capabilities during environment setup-planning.
arXiv Detail & Related papers (2025-10-29T16:59:07Z) - PerfBench: Can Agents Resolve Real-World Performance Bugs? [4.879400115033142]
PerfBench is a benchmark comprising 81 real-world performance bug-fixing tasks from GitHub. PerfBench features a novel evaluation harness that allows agents to generate their own performance benchmarks. We develop OpenHands-Perf-Agent, which incorporates performance-aware tooling and instructions and achieves a 20% success rate on the benchmark.
arXiv Detail & Related papers (2025-09-28T22:00:33Z) - From Benchmark Data To Applicable Program Repair: An Experience Report [1.6913109767046948]
This paper describes our approach to automated program repair. We combine various techniques from the literature to achieve this. Experiments show that our approach performs better than other techniques on standard benchmarks. On closer inspection, none of these techniques work on realistic defects that we see in industry.
arXiv Detail & Related papers (2025-08-22T03:59:27Z) - SWE-Perf: Can Language Models Optimize Code Performance on Real-World Repositories? [32.67971774793393]
SWE-Perf is the first benchmark designed to evaluate Large Language Models (LLMs) on code performance optimization tasks within authentic repository contexts. SWE-Perf comprises 140 carefully curated instances, each derived from performance-improving pull requests from popular GitHub repositories.
arXiv Detail & Related papers (2025-07-16T17:05:17Z) - FieldWorkArena: Agentic AI Benchmark for Real Field Work Tasks [52.47895046206854]
FieldWorkArena is a benchmark for agentic AI targeting real-world field work. This paper defines a new action space that agentic AI should possess for real-world work environment benchmarks.
arXiv Detail & Related papers (2025-05-26T08:21:46Z) - Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion? [60.84912551069379]
We present the Code-Development Benchmark (Codev-Bench), a fine-grained, real-world, repository-level, and developer-centric evaluation framework.
Codev-Agent is an agent-based system that automates repository crawling, constructs execution environments, extracts dynamic calling chains from existing unit tests, and generates new test samples to avoid data leakage.
arXiv Detail & Related papers (2024-10-02T09:11:10Z) - PACE: A Program Analysis Framework for Continuous Performance Prediction [0.0]
PACE is a program analysis framework that provides continuous feedback on the performance impact of pending code updates.
We design performance microbenchmarks by mapping the execution time of functional test cases given a code update.
Our experiments achieved significant accuracy in predicting code performance, outperforming the current state of the art by 75% on neural-represented code stylometry features.
arXiv Detail & Related papers (2023-12-01T20:43:34Z) - Learning Performance-Improving Code Edits [107.21538852090208]
We introduce a framework for adapting large language models (LLMs) to high-level program optimization.
First, we curate a dataset of over 77,000 competitive C++ programming submission pairs containing performance-improving edits made by human programmers.
For prompting, we propose retrieval-based few-shot prompting and chain-of-thought; for finetuning, we use performance-conditioned generation and synthetic data augmentation based on self-play.
arXiv Detail & Related papers (2023-02-15T18:59:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.