Where Does My Model Underperform? A Human Evaluation of Slice Discovery
Algorithms
- URL: http://arxiv.org/abs/2306.08167v2
- Date: Fri, 9 Feb 2024 06:14:54 GMT
- Title: Where Does My Model Underperform? A Human Evaluation of Slice Discovery
Algorithms
- Authors: Nari Johnson, Ángel Alexander Cabrera, Gregory Plumb, Ameet
Talwalkar
- Abstract summary: New slice discovery algorithms aim to group together coherent and high-error subsets of data.
We show 40 slices output by two state-of-the-art slice discovery algorithms to users, and ask them to form hypotheses about an object detection model.
Our results provide positive evidence that these tools provide some benefit over a naive baseline, and also shed light on challenges faced by users during the hypothesis formation step.
- Score: 24.127380328812855
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) models that achieve high average accuracy can still
underperform on semantically coherent subsets ("slices") of data. This behavior
can have significant societal consequences for the safety or bias of the model
in deployment, but identifying these underperforming slices can be difficult in
practice, especially in domains where practitioners lack access to group
annotations to define coherent subsets of their data. Motivated by these
challenges, ML researchers have developed new slice discovery algorithms that
aim to group together coherent and high-error subsets of data. However, there
has been little evaluation focused on whether these tools help humans form
correct hypotheses about where (for which groups) their model underperforms. We
conduct a controlled user study (N = 15) where we show 40 slices output by two
state-of-the-art slice discovery algorithms to users, and ask them to form
hypotheses about an object detection model. Our results provide positive
evidence that these tools provide some benefit over a naive baseline, and also
shed light on challenges faced by users during the hypothesis formation step.
We conclude by discussing design opportunities for ML and HCI researchers. Our
findings point to the importance of centering users when creating and
evaluating new tools for slice discovery.
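To make the notion of a "slice" concrete, here is a minimal sketch of the general slice-discovery recipe: cluster examples in an embedding space and rank the clusters by error rate so that coherent, high-error groups surface for human inspection. This is an illustration only, not the specific algorithms evaluated in the study; the embeddings, error labels, and cluster count below are placeholder assumptions.

```python
# Minimal slice-discovery sketch: cluster embeddings, rank clusters by error rate.
import numpy as np
from sklearn.cluster import KMeans

def discover_slices(embeddings, errors, n_slices=40, min_size=10):
    """Group examples into candidate slices and rank them by error rate.

    embeddings: (n_examples, d) array, e.g. image features from a backbone.
    errors:     (n_examples,) binary array, 1 where the model was wrong.
    """
    cluster_ids = KMeans(n_clusters=n_slices, n_init=10, random_state=0).fit_predict(embeddings)
    slices = []
    for k in range(n_slices):
        idx = np.flatnonzero(cluster_ids == k)
        if len(idx) < min_size:
            continue  # skip tiny clusters that are unlikely to be coherent
        slices.append((errors[idx].mean(), idx))
    # Highest-error slices first: these are the candidates a user would inspect.
    return sorted(slices, key=lambda s: -s[0])

# Example usage with synthetic data (placeholders, not the study's dataset):
rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 32))
err = rng.integers(0, 2, size=1000)
for error_rate, members in discover_slices(emb, err)[:5]:
    print(f"slice of {len(members)} examples, error rate {error_rate:.2f}")
```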
Related papers
- SubROC: AUC-Based Discovery of Exceptional Subgroup Performance for Binary Classifiers [1.533848041901807]
SubROC is a framework based on Model Mining for reliably and efficiently finding strengths and weaknesses of classification models.
It incorporates common evaluation measures (ROC and PR AUC), efficient search space pruning for fast exhaustive subgroup search, control for class imbalance, adjustment for redundant patterns, and significance testing.
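As a rough illustration of the per-subgroup evaluation that SubROC builds on, the sketch below compares each candidate subgroup's ROC AUC against the overall AUC. The exhaustive subgroup search, pruning, class-imbalance control, and significance testing from the paper are omitted, and `subgroups` is a hypothetical dict of boolean masks.

```python
# Sketch: rank hand-supplied subgroups by how far their ROC AUC falls below the overall AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_auc_gaps(y_true, y_score, subgroups, min_size=30):
    overall = roc_auc_score(y_true, y_score)
    gaps = {}
    for name, mask in subgroups.items():
        # Skip subgroups that are too small or single-class, where AUC is undefined or unreliable.
        if mask.sum() < min_size or len(np.unique(y_true[mask])) < 2:
            continue
        gaps[name] = overall - roc_auc_score(y_true[mask], y_score[mask])
    # Largest positive gap = subgroup where the classifier is weakest.
    return dict(sorted(gaps.items(), key=lambda kv: -kv[1]))

# Hypothetical usage: masks would typically come from metadata columns, e.g.
# subgroup_auc_gaps(y, scores, {"age<25": age < 25, "low_income": income < 20_000})
```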
arXiv Detail & Related papers (2025-05-16T14:18:40Z)
- Exploring Training and Inference Scaling Laws in Generative Retrieval [50.82554729023865]
We investigate how model size, training data scale, and inference-time compute jointly influence generative retrieval performance.
Our experiments show that n-gram-based methods demonstrate strong alignment with both training and inference scaling laws.
We find that LLaMA models consistently outperform T5 models, suggesting a particular advantage for larger decoder-only models in generative retrieval.
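For intuition about what fitting a scaling law involves, here is a minimal sketch that fits a power law of the form loss ≈ a · N^(−b) to (model size, loss) pairs. The data points and functional form below are placeholder assumptions for illustration, not results from the paper.

```python
# Sketch: fit a power-law scaling curve in log-log space and extrapolate.
import numpy as np

sizes = np.array([1e7, 3e7, 1e8, 3e8, 1e9])    # hypothetical parameter counts
losses = np.array([3.2, 2.9, 2.6, 2.45, 2.3])  # hypothetical evaluation losses

# Fit log(loss) = log(a) - b * log(N), i.e. loss ≈ a * N ** (-b).
slope, intercept = np.polyfit(np.log(sizes), np.log(losses), 1)
a, b = np.exp(intercept), -slope
print(f"loss ~ {a:.2f} * N^(-{b:.3f})")
print(f"extrapolated loss at N = 3e9: {a * 3e9 ** (-b):.2f}")
```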
arXiv Detail & Related papers (2025-03-24T17:59:03Z)
- Maximizing Signal in Human-Model Preference Alignment [0.0]
This paper argues that in cases in which end users need to agree with the decisions made by ML models, models should be trained and evaluated on data that represent their preferences.
We show that noise from labeling disagreements can be minimized by adhering to proven methodological best practices.
arXiv Detail & Related papers (2025-03-06T19:10:57Z)
- Querying Easily Flip-flopped Samples for Deep Active Learning [63.62397322172216]
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
One effective selection strategy bases the query on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is.
This paper proposes the least disagree metric (LDM), defined as the smallest probability of disagreement of the predicted label.
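The LDM itself measures how easily a sample's predicted label would flip under small changes to the model. As a much simpler stand-in for that intuition, the sketch below ranks unlabeled samples by the margin between their top two predicted class probabilities (classic margin sampling); this is not the LDM estimator from the paper, and the probabilities are synthetic.

```python
# Sketch: margin-based uncertainty sampling as a rough proxy for "easily flipped" predictions.
import numpy as np

def query_by_smallest_margin(probs, budget=10):
    """probs: (n_samples, n_classes) predicted probabilities for the unlabeled pool."""
    top2 = np.sort(probs, axis=1)[:, -2:]   # two largest probabilities per row
    margin = top2[:, 1] - top2[:, 0]        # small margin = prediction flips easily
    return np.argsort(margin)[:budget]      # indices to send for labeling

probs = np.random.default_rng(0).dirichlet(np.ones(5), size=100)
print(query_by_smallest_margin(probs, budget=5))
```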
arXiv Detail & Related papers (2024-01-18T08:12:23Z)
- AttributionScanner: A Visual Analytics System for Model Validation with Metadata-Free Slice Finding [29.07617945233152]
Data slice finding is an emerging technique for validating machine learning (ML) models by identifying and analyzing subgroups in a dataset that exhibit poor performance.
This approach faces significant challenges, including the laborious and costly requirement for additional metadata.
We introduce AttributionScanner, an innovative human-in-the-loop Visual Analytics (VA) system, designed for metadata-free data slice finding.
Our system identifies interpretable data slices that involve common model behaviors and visualizes these patterns through an Attribution Mosaic design.
arXiv Detail & Related papers (2024-01-12T09:17:32Z)
- Error Discovery by Clustering Influence Embeddings [7.27282591214364]
We present a method for identifying groups of test examples -- slices -- on which a model under-performs.
We formalize coherence as a key property that any slice discovery method should satisfy.
We derive a new slice discovery method, InfEmbed, which satisfies coherence by returning slices whose examples are influenced similarly by the training data.
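As a very rough stand-in for InfEmbed's influence embeddings, the sketch below uses each test example's loss gradient with respect to a hypothetical final linear layer as its embedding, clusters those gradients, and ranks clusters by error rate. The actual influence-function machinery from the paper is not reproduced, and all inputs are assumed placeholders.

```python
# Sketch: cluster per-example gradient "embeddings" and rank clusters by error rate.
import numpy as np
from sklearn.cluster import KMeans

def gradient_embeddings(features, probs, labels_onehot):
    """Per-example cross-entropy gradient w.r.t. a final linear layer W of shape (d, k).

    features:      (n, d) penultimate-layer activations
    probs:         (n, k) softmax outputs
    labels_onehot: (n, k) one-hot true labels
    """
    resid = probs - labels_onehot                              # (n, k)
    return np.einsum("nd,nk->ndk", features, resid).reshape(len(features), -1)

def error_slices(features, probs, labels_onehot, n_slices=10):
    emb = gradient_embeddings(features, probs, labels_onehot)
    cluster_ids = KMeans(n_clusters=n_slices, n_init=10, random_state=0).fit_predict(emb)
    wrong = probs.argmax(1) != labels_onehot.argmax(1)
    # (error rate, cluster id), highest error first.
    return sorted(((wrong[cluster_ids == k].mean(), k) for k in range(n_slices)), reverse=True)
```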
arXiv Detail & Related papers (2023-12-07T21:42:55Z)
- Towards Better Modeling with Missing Data: A Contrastive Learning-based Visual Analytics Perspective [7.577040836988683]
Missing data can pose a challenge for machine learning (ML) modeling.
Current approaches are categorized into feature imputation and label prediction.
This study proposes a Contrastive Learning framework to model observed data with missing values.
arXiv Detail & Related papers (2023-09-18T13:16:24Z)
- Language models are weak learners [71.33837923104808]
We show that prompt-based large language models can operate effectively as weak learners.
We incorporate these models into a boosting approach, which can leverage the knowledge within the model to outperform traditional tree-based boosting.
Results illustrate the potential for prompt-based LLMs to function not just as few-shot learners themselves, but as components of larger machine learning pipelines.
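To show where a prompted LLM would slot into such a pipeline, here is a minimal AdaBoost-style loop with a generic `fit_weak_learner` callback. In the paper's setting that callback would prompt an LLM on a weighted sample of examples; that part is not reproduced here, so the interface is an assumption for illustration only.

```python
# Sketch: AdaBoost-style boosting around an arbitrary weak learner (could be a prompted LLM).
import numpy as np

def boost(X, y, fit_weak_learner, rounds=10):
    """y in {-1, +1}. fit_weak_learner(X, y, weights) -> h, where h(X) returns {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        h = fit_weak_learner(X, y, w)
        pred = h(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak learner
        w *= np.exp(-alpha * y * pred)          # upweight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, h))
    return lambda X_new: np.sign(sum(a * h(X_new) for a, h in ensemble))
```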
arXiv Detail & Related papers (2023-06-25T02:39:19Z)
- How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench [52.11481619456093]
We study the performance prediction problem on experiment records from BIG-bench.
An $R^2$ score greater than 95% indicates the presence of learnable patterns within the experiment records.
We find a subset as informative as BIG-bench Hard for evaluating new model families, while being $3\times$ smaller.
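A small sketch of the performance-prediction setup: fit a regressor that maps experiment descriptors to observed scores and report $R^2$ on held-out records. The features and targets below are synthetic placeholders, not the BIG-bench experiment records used in the paper.

```python
# Sketch: predict task performance from experiment descriptors and measure held-out R^2.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                                          # hypothetical descriptors
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)    # synthetic scores

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out R^2: {r2_score(y_te, model.predict(X_te)):.3f}")
```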
arXiv Detail & Related papers (2023-05-24T09:35:34Z)
- Discover, Explanation, Improvement: An Automatic Slice Detection Framework for Natural Language Processing [72.14557106085284]
Slice detection models (SDMs) automatically identify underperforming groups of datapoints.
This paper proposes a benchmark named "Discover, Explain, Improve (DEIM)" for classification NLP tasks.
Our evaluation shows that Edisa can accurately select error-prone datapoints with informative semantic features.
arXiv Detail & Related papers (2022-11-08T19:00:00Z)
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Combining Feature and Instance Attribution to Detect Artifacts [62.63504976810927]
We propose methods to facilitate identification of training data artifacts.
We show that this proposed training-feature attribution approach can be used to uncover artifacts in training data.
We execute a small user study to evaluate whether these methods are useful to NLP researchers in practice.
arXiv Detail & Related papers (2021-07-01T09:26:13Z)
- PermuteAttack: Counterfactual Explanation of Machine Learning Credit Scorecards [0.0]
This paper is a note on new directions and methodologies for validation and explanation of Machine Learning (ML) models employed for retail credit scoring in finance.
Our proposed framework draws motivation from the field of Artificial Intelligence (AI) security and adversarial ML.
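As a greatly simplified illustration of permutation-style counterfactual search on tabular data, the sketch below swaps single feature values drawn from a reference dataset until the model's decision flips. The paper's actual framework is more elaborate; `predict`, the data, and the search budget here are hypothetical.

```python
# Sketch: find a counterfactual by swapping in observed feature values until the decision flips.
import numpy as np

def greedy_counterfactual(predict, x, X_reference, max_tries=200, rng=None):
    """Return a modified copy of x whose predicted class differs, or None if no flip is found."""
    rng = rng or np.random.default_rng(0)
    original = predict(x.reshape(1, -1))[0]
    for _ in range(max_tries):
        trial = x.copy()
        j = rng.integers(x.shape[0])                 # pick a feature at random
        trial[j] = rng.choice(X_reference[:, j])     # swap in a value observed elsewhere
        if predict(trial.reshape(1, -1))[0] != original:
            return trial                             # counterfactual found
    return None                                      # no flip within the budget
```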
arXiv Detail & Related papers (2020-08-24T00:05:13Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)