A Probabilistic Method to Predict Classifier Accuracy on Larger Datasets given Small Pilot Data
- URL: http://arxiv.org/abs/2311.18025v1
- Date: Wed, 29 Nov 2023 19:10:15 GMT
- Title: A Probabilistic Method to Predict Classifier Accuracy on Larger Datasets given Small Pilot Data
- Authors: Ethan Harvey, Wansu Chen, David M. Kent, and Michael C. Hughes
- Abstract summary: Practitioners building classifiers often start with a smaller pilot dataset and plan to grow to larger data in the near future.
Such projects need a toolkit for extrapolating how much accuracy may improve from a 2x, 10x, or 50x increase in data size.
We propose a Gaussian process model to obtain probabilistic extrapolations of accuracy or similar performance metrics as dataset size increases.
- Score: 4.076366901873452
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Practitioners building classifiers often start with a smaller pilot dataset
and plan to grow to larger data in the near future. Such projects need a
toolkit for extrapolating how much classifier accuracy may improve from a 2x,
10x, or 50x increase in data size. While existing work has focused on finding a
single "best-fit" curve using various functional forms like power laws, we
argue that modeling and assessing the uncertainty of predictions is critical
yet has seen less attention. In this paper, we propose a Gaussian process model
to obtain probabilistic extrapolations of accuracy or similar performance
metrics as dataset size increases. We evaluate our approach in terms of error,
likelihood, and coverage across six datasets. Though we focus on medical tasks
and image modalities, our open source approach generalizes to any kind of
classifier.
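The core idea above — fit a Gaussian process to (dataset size, accuracy) pairs from pilot experiments and read off a predictive mean plus uncertainty at larger sizes — can be sketched in a few lines. This is a minimal numpy illustration, not the authors' exact model: the kernel, its hyperparameters, and the toy pilot numbers below are all assumptions for demonstration.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=0.05):
    """Squared-exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_extrapolate(sizes, accs, query_sizes, noise=1e-4):
    """Posterior mean and std of accuracy at query_sizes, modeling log10(size)."""
    x, q = np.log10(sizes), np.log10(query_sizes)
    K = rbf_kernel(x, x) + noise * np.eye(len(x))
    Ks = rbf_kernel(q, x)
    Kss = rbf_kernel(q, q)
    alpha = np.linalg.solve(K, accs - accs.mean())
    mean = accs.mean() + Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

# Hypothetical pilot measurements: accuracy at a few small dataset sizes.
sizes = np.array([100.0, 200.0, 400.0, 800.0])
accs = np.array([0.62, 0.68, 0.73, 0.77])
query = np.array([1600.0, 8000.0, 40000.0])
mean, std = gp_extrapolate(sizes, accs, query)
```

The uncertainty band widens as the query size moves further beyond the pilot data, which is exactly the behavior a single "best-fit" power law cannot express.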
Related papers
- DataDecide: How to Predict Best Pretraining Data with Small Experiments [67.95896457895404]
We release models, data, and evaluations in DataDecide -- the most extensive open suite of models over differences in data and scale.
We conduct controlled pretraining experiments across 25 corpora with differing sources, deduplication, and filtering up to 100B tokens, model sizes up to 1B parameters, and 3 random seeds.
arXiv Detail & Related papers (2025-04-15T17:02:15Z)
- Data Pruning in Generative Diffusion Models [2.0111637969968]
Generative models aim to estimate the underlying distribution of the data, so presumably they should benefit from larger datasets.
We show that eliminating redundant or noisy data in large datasets is beneficial particularly when done strategically.
arXiv Detail & Related papers (2024-11-19T14:13:25Z)
- Generative Expansion of Small Datasets: An Expansive Graph Approach [13.053285552524052]
We introduce an Expansive Synthesis model generating large-scale, information-rich datasets from minimal samples.
An autoencoder with self-attention layers and optimal transport refines distributional consistency.
Results show comparable performance, demonstrating the model's potential to augment training data effectively.
arXiv Detail & Related papers (2024-06-25T02:59:02Z)
- Scaling Laws for the Value of Individual Data Points in Machine Learning [55.596413470429475]
We introduce a new perspective by investigating scaling behavior for the value of individual data points.
We provide learning theory to support our scaling law, and we observe empirically that it holds across diverse model classes.
Our work represents a first step towards understanding and utilizing scaling properties for the value of individual data points.
arXiv Detail & Related papers (2024-05-30T20:10:24Z)
- Project and Probe: Sample-Efficient Domain Adaptation by Interpolating Orthogonal Features [119.22672589020394]
We propose a lightweight, sample-efficient approach that learns a diverse set of features and adapts to a target distribution by interpolating these features.
Our experiments on four datasets, with multiple distribution shift settings for each, show that Pro² improves performance by 5-15% when given limited target data.
arXiv Detail & Related papers (2023-02-10T18:58:03Z)
- Robust self-healing prediction model for high dimensional data [0.685316573653194]
This work proposes a robust self-healing (RSH) hybrid prediction model.
It uses the data in its entirety, removing errors and inconsistencies rather than discarding any data.
The proposed method is compared with some of the existing high performing models and the results are analyzed.
arXiv Detail & Related papers (2022-10-04T17:55:50Z)
- Adaptive Cholesky Gaussian Processes [7.684183064816171]
We present a method to fit exact Gaussian process models to large datasets by considering only a subset of the data.
Our approach is novel in that the size of the subset is selected on the fly during exact inference with little computational overhead.
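The on-the-fly subset-selection policy itself is not reproduced here, but growing an exact GP solve one data point at a time rests on a standard primitive: extending a Cholesky factor by one row without refactorizing. A minimal sketch, with an illustrative RBF kernel on random data (all assumptions, not the paper's setup):

```python
import numpy as np

def cholesky_append(L, k_new, k_self):
    """Extend the lower Cholesky factor L of a kernel matrix by one data point.

    k_new: covariances between the new point and the existing points, shape (n,)
    k_self: prior variance of the new point (a scalar)
    """
    n = L.shape[0]
    b = np.linalg.solve(L, k_new)        # solves L b = k_new
    c = np.sqrt(k_self - b @ b)          # Schur complement of the new point
    L_new = np.zeros((n + 1, n + 1))
    L_new[:n, :n] = L
    L_new[n, :n] = b
    L_new[n, n] = c
    return L_new

# Check against refactorizing the full matrix from scratch.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq) + 1e-8 * np.eye(6)   # RBF kernel with jitter
L5 = np.linalg.cholesky(K[:5, :5])
L6 = cholesky_append(L5, K[5, :5], K[5, 5])
```

Appending a point costs one triangular solve, O(n²), versus O(n³) for a full refactorization, which is what makes deciding the subset size during inference cheap.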
arXiv Detail & Related papers (2022-02-22T09:43:46Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence, predicting target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
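The ATC recipe described above is simple enough to sketch directly: pick the threshold so that the share of labeled source examples above it matches the source accuracy, then score unlabeled target data against it. The confidence scores and labels below are synthetic stand-ins (the paper uses e.g. maximum softmax probability or negative entropy):

```python
import numpy as np

def atc_threshold(source_conf, source_correct):
    """Choose t so the share of source examples with confidence above t
    matches the observed source accuracy."""
    acc = source_correct.mean()
    return np.quantile(source_conf, 1.0 - acc)

def atc_predict(threshold, target_conf):
    """Predicted target accuracy: fraction of unlabeled target examples
    whose confidence clears the learned threshold."""
    return float((target_conf > threshold).mean())

# Toy data: uniform confidence scores, hypothetical correctness labels.
rng = np.random.default_rng(1)
source_conf = rng.uniform(0.5, 1.0, size=2000)
source_correct = (source_conf > 0.7).astype(float)
t = atc_threshold(source_conf, source_correct)
predicted_target_acc = atc_predict(t, rng.uniform(0.4, 1.0, size=2000))
```

No target labels are needed at prediction time, which is what makes the method practical under distribution shift.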
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- X-model: Improving Data Efficiency in Deep Learning with A Minimax Model [78.55482897452417]
We aim at improving data efficiency for both classification and regression setups in deep learning.
To harness the best of both worlds, we propose a novel X-model.
X-model plays a minimax game between the feature extractor and task-specific heads.
arXiv Detail & Related papers (2021-10-09T13:56:48Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
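The paper's contribution is a scalable estimator for deep networks, which is not reproduced here; but the quantity it estimates, the log marginal likelihood used for model selection, has a closed form in the exact GP case, and a minimal illustration (toy data, illustrative kernels and noise level are all assumptions) shows it preferring the kernel whose smoothness matches the data:

```python
import numpy as np

def log_marginal_likelihood(K, y):
    """GP log evidence: -1/2 y^T K^{-1} y - 1/2 log|K| - n/2 log(2*pi)."""
    n = len(y)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)

def rbf(x, length_scale, noise=1e-2):
    """RBF kernel matrix with an observation-noise term on the diagonal."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2) + noise * np.eye(len(x))

# Smooth data: the evidence should favor the smoother kernel,
# with no validation set involved.
x = np.linspace(0.0, 4.0, 30)
y = np.sin(x)
lml_smooth = log_marginal_likelihood(rbf(x, length_scale=1.0), y)
lml_wiggly = log_marginal_likelihood(rbf(x, length_scale=0.05), y)
```

Because the evidence is computed on the training data alone, it can drive model selection exactly when held-out validation data is unavailable, which is the setting the paper targets.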
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
- Monotonic Cardinality Estimation of Similarity Selection: A Deep Learning Approach [22.958342743597044]
We investigate the possibilities of utilizing deep learning for cardinality estimation of similarity selection.
We propose a novel and generic method that can be applied to any data type and distance function.
arXiv Detail & Related papers (2020-02-15T20:22:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.