What Does Your Benchmark Really Measure? A Framework for Robust Inference of AI Capabilities
- URL: http://arxiv.org/abs/2509.19590v1
- Date: Tue, 23 Sep 2025 21:29:04 GMT
- Title: What Does Your Benchmark Really Measure? A Framework for Robust Inference of AI Capabilities
- Authors: Nathanael Jo, Ashia Wilson
- Abstract summary: Evaluations of generative models on benchmark data are now ubiquitous. Yet growing skepticism surrounds their reliability. How can we know that a reported accuracy genuinely reflects a model's true performance? We make the inferential step explicit by proposing a principled framework for evaluation as inference.
- Score: 0.773472615056109
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Evaluations of generative models on benchmark data are now ubiquitous, and their outcomes critically shape public and scientific expectations of AI's capabilities. Yet growing skepticism surrounds their reliability. How can we know that a reported accuracy genuinely reflects a model's true performance? Evaluations are often presented as simple measurements, but in reality they are inferences: to treat benchmark scores as evidence of capability is already to assume a theory of what capability is and how it manifests in a test. We make this step explicit by proposing a principled framework for evaluation as inference: begin from a theory of capability, and then derive methods for estimating it. This perspective, familiar in fields such as psychometrics, has not yet become commonplace in AI evaluation. As a proof of concept, we address a central challenge that undermines reliability: sensitivity to perturbations. After formulating a model of ability, we introduce methods that infer ability while accounting for uncertainty from sensitivity and finite samples, including an adaptive algorithm that significantly reduces sample complexity. Together, these contributions lay the groundwork for more reliable and trustworthy estimates of AI capabilities as measured through benchmarks.
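To make the abstract's proposal concrete, here is a minimal Python sketch of "evaluation as inference" under perturbation sensitivity. It is not the paper's estimator: the item-by-perturbation outcome matrix, the ability model (mean per-item success probability), and the bootstrap interval are all illustrative assumptions.

```python
# A minimal sketch of "evaluation as inference": instead of reporting raw
# accuracy, estimate an ability parameter with uncertainty that reflects
# both perturbation sensitivity and the finite benchmark sample.
# `outcomes[i, j]` is hypothetical: 1 if the model answered perturbation j
# of item i correctly, 0 otherwise.
import numpy as np

rng = np.random.default_rng(0)

def ability_estimate(outcomes: np.ndarray, n_boot: int = 2000):
    """Point estimate and 95% bootstrap CI for ability.

    Ability is modeled (simplistically) as the mean per-item success
    probability; bootstrapping over items propagates finite-sample
    uncertainty, while averaging over perturbations within an item
    absorbs sensitivity to rephrasings.
    """
    per_item = outcomes.mean(axis=1)          # success prob. per item
    point = per_item.mean()                   # ability point estimate
    n_items = len(per_item)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n_items, n_items)   # resample items
        boots[b] = per_item[idx].mean()
    lo, hi = np.quantile(boots, [0.025, 0.975])
    return point, (lo, hi)

# Toy data: 50 items, 8 perturbations each; item-level variation that a
# single raw accuracy number would hide.
true_p = rng.beta(7, 3, size=50)
outcomes = rng.binomial(1, true_p[:, None], size=(50, 8))
print(ability_estimate(outcomes))
```

The adaptive algorithm the abstract mentions would presumably replace the fixed eight perturbations per item with a data-dependent stopping rule; the fixed design here is only for brevity.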
Related papers
- AICO: Feature Significance Tests for Supervised Learning [0.9474649136535703]
This paper develops model- and distribution-agnostic significance tests to assess the influence of input features in any regression or classification algorithm. We construct a uniformly most powerful, randomized sign test for the median effect of each feature, yielding exact p-values and confidence intervals for assessing feature significance. Experiments on synthetic tasks validate its statistical and computational advantages, and applications to real-world data illustrate its practical utility.
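A hedged sketch of the idea in Python, assuming a sign test on the median score change when one feature is masked; the masking scheme and toy model below are illustrative stand-ins, not the AICO construction.

```python
# Sign-test-style feature significance check: mask one feature, score each
# instance before and after, and test whether the median score change is
# zero using an exact binomial sign test.
import numpy as np
from scipy.stats import binomtest

def sign_test_feature(model_score, X, feature_idx, mask_value=0.0):
    """Exact p-value for the null that masking `feature_idx` leaves the
    median per-instance score unchanged (ties are dropped, as usual for
    a sign test). `model_score(X) -> (n,)` is an assumed interface."""
    X_masked = X.copy()
    X_masked[:, feature_idx] = mask_value
    diffs = model_score(X) - model_score(X_masked)
    diffs = diffs[diffs != 0]
    n_pos = int((diffs > 0).sum())
    return binomtest(n_pos, n=len(diffs), p=0.5).pvalue

# Toy model: feature 0 has a systematically positive effect; feature 1's
# effect is symmetric around zero, so its median effect is null.
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(0.5, 1.5, 200), rng.normal(size=200)])
score = lambda Z: 2.0 * Z[:, 0] + 0.01 * Z[:, 1]
print(sign_test_feature(score, X, 0))   # tiny p-value: significant
print(sign_test_feature(score, X, 1))   # large p-value: not significant
```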
arXiv Detail & Related papers (2025-06-29T21:15:40Z)
- Query-Level Uncertainty in Large Language Models [39.59641844929696]
We propose a method to detect knowledge boundaries via Query-Level Uncertainty. This method estimates whether a model is capable of answering a given query before generating any tokens, thus avoiding the generation cost. We demonstrate its benefits in adaptive inference settings, showing that for RAG and model cascading it reduces inference costs while preserving overall performance.
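The summary describes scoring a query before any tokens are generated. A generic proxy for this (not the paper's method) is the entropy of the model's next-token distribution at the end of the prompt; `first_token_logits` below is a hypothetical hook into the model.

```python
# Query-level uncertainty as first-token entropy: higher entropy means
# less confidence, so the query can be routed to retrieval (RAG) or a
# larger model in a cascade instead of being answered directly.
import numpy as np

def query_uncertainty(first_token_logits: np.ndarray) -> float:
    """Entropy (nats) of softmax(logits) over the vocabulary."""
    z = first_token_logits - first_token_logits.max()  # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def route(logits, threshold=2.0):
    # Cascade: answer directly when confident, escalate otherwise.
    return "answer" if query_uncertainty(logits) < threshold else "escalate"

# Peaked vs. flat distributions over a toy 10-token vocabulary.
print(route(np.array([8.0] + [0.0] * 9)))   # confident -> "answer"
print(route(np.zeros(10)))                  # uniform  -> "escalate"
```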
arXiv Detail & Related papers (2025-06-11T12:39:48Z)
- How Many Ratings per Item are Necessary for Reliable Significance Testing? [7.422152765037947]
A cornerstone of machine learning evaluation is the assumption that model and human responses are reliable enough to evaluate models against unitary, authoritative, "gold standard" data. We adapt a method to determine whether an (existing or planned) dataset has enough responses per item to assure reliable null hypothesis statistical testing. We show how our methods can help AI researchers make better decisions about how to collect data for AI evaluation.
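One way to make this concrete is simulation-based power analysis; the rating-noise model and the paired t-test below are my assumptions, not necessarily the paper's procedure.

```python
# Simulation sketch: estimate how many human ratings per item are needed
# so that a paired test between two models reaches a target power.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)

def power(n_items, n_ratings, effect=0.05, rater_sd=0.3,
          n_sims=500, alpha=0.05):
    hits = 0
    for _ in range(n_sims):
        item_quality = rng.normal(0.5, 0.1, n_items)
        # Per-item scores = mean of noisy ratings; model B is `effect` better.
        a = rng.normal(item_quality[:, None], rater_sd,
                       (n_items, n_ratings)).mean(axis=1)
        b = rng.normal(item_quality[:, None] + effect, rater_sd,
                       (n_items, n_ratings)).mean(axis=1)
        if ttest_rel(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

for k in (1, 3, 5, 10):
    print(k, "ratings/item -> power", power(n_items=100, n_ratings=k))
```

Averaging more ratings per item shrinks rater noise, so power rises with k; the simulation makes that trade-off visible before any data is collected.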
arXiv Detail & Related papers (2024-12-04T02:31:28Z)
- On the Fairness, Diversity and Reliability of Text-to-Image Generative Models [68.62012304574012]
Multimodal generative models have sparked critical discussions on their reliability, fairness and potential for misuse. We propose an evaluation framework to assess model reliability by analyzing responses to global and local perturbations in the embedding space. Our method lays the groundwork for detecting unreliable, bias-injected models and tracing the provenance of embedded biases.
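As a rough illustration, reliability can be probed as output stability under local Gaussian perturbations of an embedding; the toy generator below is a stand-in for a real text-to-image pipeline, not the paper's setup.

```python
# Reliability-as-stability sketch: perturb a prompt embedding with small
# Gaussian noise and measure how far the generations drift.
import numpy as np

rng = np.random.default_rng(3)

def local_stability(embed: np.ndarray, generate, sigma=0.05, n=20):
    """Mean distance between the output for the clean embedding and the
    outputs for n locally perturbed copies; large values flag regions
    where the model is unreliable."""
    base = generate(embed)
    drifts = []
    for _ in range(n):
        noisy = embed + rng.normal(0.0, sigma, embed.shape)
        drifts.append(np.linalg.norm(generate(noisy) - base))
    return float(np.mean(drifts))

# Toy "generator": a fixed random nonlinear map from embedding to output.
W = rng.normal(size=(16, 64))
generate = lambda e: np.tanh(W @ e)
print(local_stability(rng.normal(size=64), generate))
```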
arXiv Detail & Related papers (2024-11-21T09:46:55Z)
- Automated Trustworthiness Testing for Machine Learning Classifiers [3.3423762257383207]
This paper proposes TOWER, the first technique to automatically create trustworthiness oracles that determine whether text classifier predictions are trustworthy.
Our hypothesis is that a prediction is trustworthy if the words in its explanation are semantically related to the predicted class.
The results show that TOWER can detect a decrease in trustworthiness as noise increases, but is not effective when evaluated against the human-labeled dataset.
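A minimal sketch of such an oracle, assuming toy word vectors in place of real embeddings; the threshold and lookup table are illustrative, not TOWER's implementation.

```python
# TOWER-style trustworthiness oracle: a prediction is flagged trustworthy
# when its explanation words are, on average, semantically close to the
# predicted class label.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def trustworthy(explanation_words, class_word, vec, threshold=0.5):
    sims = [cosine(vec[w], vec[class_word])
            for w in explanation_words if w in vec]
    return bool(sims) and float(np.mean(sims)) >= threshold

# Toy vectors: "goal" and "match" cluster near "sports"; "invoice" doesn't.
rng = np.random.default_rng(4)
base = rng.normal(size=8)
vec = {"sports": base,
       "goal": base + rng.normal(0, 0.2, 8),
       "match": base + rng.normal(0, 0.2, 8),
       "invoice": rng.normal(size=8)}
print(trustworthy(["goal", "match"], "sports", vec))   # likely True
print(trustworthy(["invoice"], "sports", vec))         # likely False
```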
arXiv Detail & Related papers (2024-06-07T20:25:05Z)
- A Backdoor-based Explainable AI Benchmark for High Fidelity Evaluation of Attributions [60.06461883533697]
We first identify a set of fidelity criteria that reliable benchmarks for attribution methods are expected to fulfill. We then introduce a Backdoor-based eXplainable AI benchmark (BackX) that adheres to the desired fidelity criteria. Our analysis also offers insights into defending against neural Trojans by utilizing the attributions.
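The core fidelity check can be sketched in a few lines: when a model is known to key on a planted trigger, a faithful attribution map should concentrate its mass there. The masks and maps below are toy assumptions, not the BackX code.

```python
# Backdoor-based fidelity check: fraction of attribution mass that falls
# inside the known trigger region.
import numpy as np

def trigger_mass(attribution: np.ndarray, trigger_mask: np.ndarray) -> float:
    """Share of total (absolute) attribution inside the trigger region."""
    total = np.abs(attribution).sum()
    return float(np.abs(attribution[trigger_mask]).sum() / total)

# Toy 8x8 "image": the trigger is the top-left 2x2 patch.
mask = np.zeros((8, 8), dtype=bool)
mask[:2, :2] = True

faithful = np.where(mask, 1.0, 0.01)      # mass on the trigger
unfaithful = np.ones((8, 8))              # mass spread uniformly
print(trigger_mass(faithful, mask))       # close to 1 -> high fidelity
print(trigger_mass(unfaithful, mask))     # 4/64 -> low fidelity
```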
arXiv Detail & Related papers (2024-05-02T13:48:37Z)
- Position: AI Evaluation Should Learn from How We Test Humans [65.36614996495983]
We argue that psychometrics, a theory originating in the 20th century for human assessment, could be a powerful solution to the challenges in today's AI evaluations.
arXiv Detail & Related papers (2023-06-18T09:54:33Z)
- From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework [91.94389491920309]
Textual adversarial attacks can discover models' weaknesses by adding semantic-preserved but misleading perturbations to the inputs.
The existing practice of robustness evaluation may suffer from incomplete evaluation coverage, impractical evaluation protocols, and invalid adversarial samples.
We set up a unified automatic robustness evaluation framework, shifting towards model-centric evaluation to exploit the advantages of adversarial attacks.
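As a toy illustration of model-centric robustness measurement (the perturbation and classifier here are stand-ins, not the paper's framework): report the fraction of inputs whose prediction survives label-preserving perturbations, rather than a single adversarial success rate.

```python
# Robustness under label-preserving text perturbations.
import random

random.seed(5)

def swap_adjacent(text: str) -> str:
    """A trivially label-preserving typo perturbation."""
    chars = list(text)
    if len(chars) > 3:
        i = random.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def robustness(classify, texts, perturb, n=10):
    stable = 0
    for t in texts:
        base = classify(t)
        stable += all(classify(perturb(t)) == base for _ in range(n))
    return stable / len(texts)

# Toy keyword classifier: brittle to typos in its trigger word.
classify = lambda t: "pos" if "good" in t else "neg"
print(robustness(classify, ["a good movie", "a dull movie"], swap_adjacent))
```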
arXiv Detail & Related papers (2023-05-29T14:55:20Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL (Uncertainty-Aware Learning) aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
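A generic sketch separating the two notions (a toy setup, not the UAL architecture): data uncertainty from a predicted per-sample variance, model uncertainty from disagreement across an ensemble.

```python
# Separating data uncertainty (noise in the sample) from model uncertainty
# (the model's own confidence), using an ensemble as a simple proxy.
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical ensemble: each member returns (mean, predicted_variance).
def ensemble_predict(x, members):
    means, data_vars = zip(*(m(x) for m in members))
    means, data_vars = np.array(means), np.array(data_vars)
    data_unc = data_vars.mean()     # noise inherent in the sample
    model_unc = means.var()         # members disagree -> low confidence
    return means.mean(), data_unc, model_unc

members = [lambda x, w=w: (w * x, 0.1 * abs(x))
           for w in rng.normal(1.0, 0.05, 5)]
mean, data_unc, model_unc = ensemble_predict(2.0, members)
print(f"pred={mean:.2f} data_unc={data_unc:.3f} model_unc={model_unc:.4f}")
```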
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough and encode the structural constraints necessary for performing counterfactual reasoning.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
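The classical abduction-action-prediction recipe underlying counterfactual estimation can be shown on a toy structural causal model; in an NCM, the mechanisms below would be learned neural networks rather than known closed forms.

```python
# Toy abduction-action-prediction: given observed (x, y), infer the
# exogenous noise, intervene on X, and replay the mechanism.
import numpy as np

# Structural mechanisms: X := U_x,  Y := 2*X + U_y  (assumed known here).
f_y = lambda x, u_y: 2.0 * x + u_y

def counterfactual_y(x_obs, y_obs, x_do):
    u_y = y_obs - 2.0 * x_obs       # abduction: recover the noise term
    return f_y(x_do, u_y)           # action + prediction under do(X=x_do)

# "We observed X=1, Y=2.5; what would Y have been had X been 3?"
print(counterfactual_y(x_obs=1.0, y_obs=2.5, x_do=3.0))  # -> 6.5
```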
arXiv Detail & Related papers (2022-09-30T18:29:09Z)