Evaluation Gaps in Machine Learning Practice
- URL: http://arxiv.org/abs/2205.05256v1
- Date: Wed, 11 May 2022 04:00:44 GMT
- Title: Evaluation Gaps in Machine Learning Practice
- Authors: Ben Hutchinson, Negar Rostamzadeh, Christina Greer, Katherine Heller,
Vinodkumar Prabhakaran
- Abstract summary: In practice, evaluations of machine learning models frequently focus on a narrow range of decontextualized predictive behaviours.
We examine the evaluation gaps between the idealized breadth of evaluation concerns and the observed narrow focus of actual evaluations.
By studying these properties, we demonstrate the machine learning discipline's implicit assumption of a range of commitments which have normative impacts.
- Score: 13.963766987258161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Forming a reliable judgement of a machine learning (ML) model's
appropriateness for an application ecosystem is critical for its responsible
use, and requires considering a broad range of factors including harms,
benefits, and responsibilities. In practice, however, evaluations of ML models
frequently focus on only a narrow range of decontextualized predictive
behaviours. We examine the evaluation gaps between the idealized breadth of
evaluation concerns and the observed narrow focus of actual evaluations.
Through an empirical study of papers from recent high-profile conferences in
the Computer Vision and Natural Language Processing communities, we demonstrate
a general focus on a handful of evaluation methods. By considering the metrics
and test data distributions used in these methods, we draw attention to which
properties of models are centered in the field, revealing the properties that
are frequently neglected or sidelined during evaluation. By studying these
properties, we demonstrate the machine learning discipline's implicit
assumption of a range of commitments which have normative impacts; these
include commitments to consequentialism, abstractability from context, the
quantifiability of impacts, the limited role of model inputs in evaluation, and
the equivalence of different failure modes. Shedding light on these assumptions
enables us to question their appropriateness for ML system contexts, pointing
the way towards more contextualized evaluation methodologies for robustly
examining the trustworthiness of ML models.
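To make the gap concrete, the following is a minimal, hypothetical sketch (not taken from the paper) contrasting the kind of narrow, decontextualized evaluation the paper critiques, a single aggregate accuracy on an i.i.d. test split, with a disaggregated view that surfaces neglected properties such as unequal error rates and non-equivalent failure modes across subgroups. The data, groups, and error rates are all synthetic.

```python
# A minimal sketch (not from the paper): a typical "narrow" evaluation reports a
# single aggregate metric on an i.i.d. test split, while a disaggregated view
# surfaces properties the aggregate hides, such as unequal error rates across
# subgroups and asymmetric failure modes. Data and group labels are synthetic.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])   # hypothetical subgroups
y_true = rng.integers(0, 2, size=n)
# Simulate a model that is noticeably worse on the minority group.
flip = rng.random(n) < np.where(group == "B", 0.30, 0.10)
y_pred = np.where(flip, 1 - y_true, y_true)

# Narrow, decontextualized evaluation: one aggregate number.
print("overall accuracy:", accuracy_score(y_true, y_pred))

# Disaggregated evaluation: per-group accuracy and failure modes
# (false positives and false negatives are rarely equivalent in context).
for g in ["A", "B"]:
    mask = group == g
    tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred[mask]).ravel()
    print(f"group {g}: accuracy={accuracy_score(y_true[mask], y_pred[mask]):.3f}, "
          f"false positives={fp}, false negatives={fn}")
```

In this synthetic setup the aggregate accuracy of roughly 0.86 masks a three-fold difference in error rate between the two groups, and says nothing about whether false positives and false negatives carry the same cost in the deployment context.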
Related papers
- Analyzing Fairness of Computer Vision and Natural Language Processing Models [1.0923877073891446]
Machine learning (ML) algorithms play a crucial role in decision making across diverse fields such as healthcare, finance, education, and law enforcement.
Despite their widespread adoption, these systems raise ethical and social concerns due to potential biases and fairness issues.
This study focuses on evaluating and improving the fairness of Computer Vision and Natural Language Processing (NLP) models applied to unstructured datasets.
arXiv Detail & Related papers (2024-12-13T06:35:55Z)
- Analyzing Fairness of Classification Machine Learning Model with Structured Dataset [1.0923877073891446]
This study investigates the fairness of machine learning models applied to structured datasets in classification tasks.
Three fairness libraries were employed: Fairlearn by Microsoft, AIF360 by IBM, and the What-If Tool by Google.
The research aims to assess the extent of bias in the ML models, compare the effectiveness of these libraries, and derive actionable insights for practitioners.
arXiv Detail & Related papers (2024-12-13T06:31:09Z)
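The study above names Fairlearn, AIF360, and the What-If Tool. As a hedged illustration of the kind of audit such libraries support, the sketch below uses Fairlearn's MetricFrame and demographic_parity_difference; the labels, predictions, and sensitive feature are synthetic placeholders, not data from the study.

```python
# A minimal sketch of a group-fairness audit with Fairlearn; all data below are
# synthetic placeholders, not data from the study.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference, selection_rate

rng = np.random.default_rng(42)
n = 500
sensitive = rng.choice(["group_0", "group_1"], size=n)   # hypothetical sensitive feature
y_true = rng.integers(0, 2, size=n)
y_pred = rng.integers(0, 2, size=n)

# Disaggregate standard metrics by the sensitive feature.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)
print(mf.by_group)       # per-group accuracy and selection rate
print(mf.difference())   # largest between-group gap for each metric

# A dedicated group-fairness metric: the gap in selection rates between groups.
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=sensitive)
print("demographic parity difference:", dpd)
```

MetricFrame disaggregates any scikit-learn-style metric by group, while demographic_parity_difference reports the gap in selection rates between groups, with 0 indicating parity.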
- Benchmarks as Microscopes: A Call for Model Metrology [76.64402390208576]
Modern language models (LMs) pose a new challenge in capability assessment.
To be confident in our metrics, we need a new discipline of model metrology.
arXiv Detail & Related papers (2024-07-22T17:52:12Z)
- Evaluating Interventional Reasoning Capabilities of Large Language Models [58.52919374786108]
Large language models (LLMs) are used to automate decision-making tasks.
In this paper, we evaluate whether LLMs can accurately update their knowledge of a data-generating process in response to an intervention.
We create benchmarks that span diverse causal graphs (e.g., confounding, mediation) and variable types.
These benchmarks allow us to isolate the ability of LLMs to accurately predict the changes resulting from an intervention from their tendency to memorize facts or find other shortcuts.
arXiv Detail & Related papers (2024-04-08T14:15:56Z)
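As an illustrative aside (not the benchmark from the paper above), the sketch below shows the kind of ground truth interventional benchmarks compare against: in a confounded graph Z → X, Z → Y, X → Y, the observational quantity P(Y=1 | X=1) differs from the interventional P(Y=1 | do(X=1)), so a model that merely memorizes observed associations will answer intervention questions incorrectly. The graph and probabilities are invented for illustration.

```python
# An illustrative sketch (not the paper's benchmark) of why interventions matter:
# with confounding Z -> X, Z -> Y and a causal edge X -> Y, the observational
# quantity P(Y=1 | X=1) differs from the interventional P(Y=1 | do(X=1)).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def simulate(intervene_x=None):
    z = rng.integers(0, 2, size=n)                            # confounder
    if intervene_x is None:
        # Observationally, X depends on Z.
        x = (rng.random(n) < np.where(z == 1, 0.8, 0.2)).astype(int)
    else:
        # do(X = intervene_x): cut the Z -> X edge and set X directly.
        x = np.full(n, intervene_x)
    # Y depends on both X and Z.
    p_y = 0.1 + 0.3 * x + 0.5 * z
    y = (rng.random(n) < p_y).astype(int)
    return x, y

x_obs, y_obs = simulate()
p_obs = y_obs[x_obs == 1].mean()   # P(Y=1 | X=1), inflated by the confounder
_, y_do = simulate(intervene_x=1)
p_do = y_do.mean()                 # P(Y=1 | do(X=1))
print(f"observational P(Y=1|X=1) ~ {p_obs:.3f}, interventional P(Y=1|do(X=1)) ~ {p_do:.3f}")
```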
- KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models [53.84677081899392]
KIEval is a Knowledge-grounded Interactive Evaluation framework for large language models.
It is the first to incorporate an LLM-powered "interactor" role, enabling dynamic, contamination-resilient evaluation.
Extensive experiments on seven leading LLMs across five datasets validate KIEval's effectiveness and generalization.
arXiv Detail & Related papers (2024-02-23T01:30:39Z)
- F-Eval: Assessing Fundamental Abilities with Refined Evaluation Methods [102.98899881389211]
We propose F-Eval, a bilingual evaluation benchmark for assessing fundamental abilities, including expression, commonsense, and logic.
For reference-free subjective tasks, we devise new evaluation methods, serving as alternatives to scoring by API models.
arXiv Detail & Related papers (2024-01-26T13:55:32Z)
- Don't Make Your LLM an Evaluation Benchmark Cheater [142.24553056600627]
Large language models (LLMs) have greatly advanced the frontiers of artificial intelligence, attaining remarkable improvements in model capacity.
To assess the model performance, a typical approach is to construct evaluation benchmarks for measuring the ability level of LLMs.
We discuss the potential risk and impact of inappropriately using evaluation benchmarks and misleadingly interpreting the evaluation results.
arXiv Detail & Related papers (2023-11-03T14:59:54Z)
- A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification [0.491574468325115]
We present a large-scale empirical study that, for the first time, enables benchmarking of confidence scoring functions.
The finding that a simple softmax response baseline is the overall best-performing method underlines the drastic shortcomings of current evaluation.
arXiv Detail & Related papers (2022-11-28T12:25:27Z)
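The softmax response baseline highlighted above is simple enough to sketch: take the maximum softmax probability as a confidence score and flag low-confidence predictions as potential failures. The logits and threshold below are placeholders standing in for any classifier's outputs, not values from the study.

```python
# A minimal sketch of the softmax-response baseline for failure detection: the
# maximum softmax probability serves as a confidence score, and predictions below
# a threshold are flagged as potential failures. Logits are placeholder values.
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

logits = np.array([[4.0, 1.0, 0.5],    # confidently class 0
                   [1.2, 1.1, 1.0],    # nearly uniform -> likely failure
                   [0.3, 3.5, 0.2]])   # confidently class 1

probs = softmax(logits)
predictions = probs.argmax(axis=1)
confidence = probs.max(axis=1)         # softmax response score

threshold = 0.7                        # hypothetical operating point
flag_for_review = confidence < threshold
for pred, conf, flag in zip(predictions, confidence, flag_for_review):
    print(f"pred={pred}, confidence={conf:.2f}, flagged={bool(flag)}")
```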
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.