Unlocking Unlabeled Data: Ensemble Learning with the Hui-Walter Paradigm for Performance Estimation in Online and Static Settings
- URL: http://arxiv.org/abs/2401.09376v1
- Date: Wed, 17 Jan 2024 17:46:10 GMT
- Title: Unlocking Unlabeled Data: Ensemble Learning with the Hui-Walter Paradigm for Performance Estimation in Online and Static Settings
- Authors: Kevin Slote, Elaine Lee
- Abstract summary: We adapt the Hui-Walter paradigm, a method traditionally applied in epidemiology and medicine, to the field of machine learning.
We estimate key performance metrics such as false positive rate, false negative rate, and priors in scenarios where no ground truth is available.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the realm of machine learning and statistical modeling, practitioners
often work under the assumption of accessible, static, labeled data for
evaluation and training. However, this assumption often deviates from reality
where data may be private, encrypted, difficult-to-measure, or unlabeled. In
this paper, we bridge this gap by adapting the Hui-Walter paradigm, a method
traditionally applied in epidemiology and medicine, to the field of machine
learning. This approach enables us to estimate key performance metrics such as
false positive rate, false negative rate, and priors in scenarios where no
ground truth is available. We further extend this paradigm for handling online
data, opening up new possibilities for dynamic data environments. Our
methodology involves partitioning data into latent classes to simulate multiple
data populations (if natural populations are unavailable) and independently
training models to replicate multiple tests. By cross-tabulating binary
outcomes across ensemble categorizers and multiple populations, we are able to
estimate unknown parameters through Gibbs sampling, eliminating the need for
ground-truth or labeled data. This paper showcases the potential of our
methodology to transform machine learning practices by allowing for accurate
model assessment under dynamic and uncertain data conditions.
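To make the cross-tabulation and Gibbs-sampling step concrete, below is a minimal illustrative sketch in Python/NumPy of a two-model, two-population Hui-Walter sampler. This is not the authors' implementation; the names (`hui_walter_gibbs`, `preds_a`, `preds_b`, `pops`) are hypothetical, and it assumes, as the classical paradigm does, that the two models err conditionally independently given the true label and that the two populations have different class priors.

```python
import numpy as np

rng = np.random.default_rng(0)

def hui_walter_gibbs(preds_a, preds_b, pops, n_iter=2000, burn=500):
    """Toy Gibbs sampler for a two-model, two-population Hui-Walter setup.

    preds_a, preds_b : binary prediction arrays from two independently trained models
    pops             : integer array giving each example's population (0 or 1)
    """
    n = len(pops)
    se = np.array([0.8, 0.8])    # sensitivities (1 - false negative rate) of the two models
    sp = np.array([0.8, 0.8])    # specificities (1 - false positive rate)
    prev = np.array([0.5, 0.5])  # class-1 prevalence (prior) in each population
    trace = []
    for it in range(n_iter):
        # Step 1: sample each example's latent true label given current parameters.
        p_pos = (np.where(preds_a == 1, se[0], 1 - se[0])
                 * np.where(preds_b == 1, se[1], 1 - se[1]) * prev[pops])
        p_neg = (np.where(preds_a == 1, 1 - sp[0], sp[0])
                 * np.where(preds_b == 1, 1 - sp[1], sp[1]) * (1 - prev[pops]))
        z = rng.random(n) < p_pos / (p_pos + p_neg)  # latent "truly positive" flags
        # Step 2: sample parameters from Beta posteriors under uniform Beta(1,1) priors.
        for k, preds in enumerate((preds_a, preds_b)):
            se[k] = rng.beta(1 + (preds[z] == 1).sum(), 1 + (preds[z] == 0).sum())
            sp[k] = rng.beta(1 + (preds[~z] == 0).sum(), 1 + (preds[~z] == 1).sum())
        for g in (0, 1):
            m = pops == g
            prev[g] = rng.beta(1 + z[m].sum(), 1 + (~z[m]).sum())
        if it >= burn:
            trace.append(np.concatenate([se, sp, prev]))
    return np.mean(trace, axis=0)  # posterior means: [se_a, se_b, sp_a, sp_b, prev_0, prev_1]
```

The false positive and false negative rates referred to in the abstract fall out as `1 - sp` and `1 - se`; when natural populations are unavailable, the `pops` labels would come from the latent-class partitioning (e.g., clustering) described above.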
Related papers
- Stochastic Amortization: A Unified Approach to Accelerate Feature and
Data Attribution [67.28273187033693]
We show that training a network that directly predicts the desired output, known as amortization, is inexpensive and surprisingly effective.
This approach significantly accelerates several feature attribution and data valuation methods, often yielding an order of magnitude speedup over existing approaches.
arXiv Detail & Related papers (2024-01-29T03:42:37Z)
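As a rough illustration of the amortization idea summarized above (a generic sketch inferred from the abstract, not the paper's code; `noisy_attribution_fn` is a hypothetical placeholder for any cheap, unbiased attribution estimator):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_amortized_attributor(X, noisy_attribution_fn):
    """Generic amortization sketch: fit a network that maps inputs directly to
    attribution vectors, supervised by cheap-but-noisy per-example estimates."""
    # One noisy estimate per example suffices in principle: regression toward
    # noisy targets averages out zero-mean label noise.
    Y = np.stack([noisy_attribution_fn(x) for x in X])
    net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
    net.fit(X, Y)
    return net  # net.predict(X_new) returns attributions in a single forward pass
```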
- Exploring Data Redundancy in Real-world Image Classification through Data Selection [20.389636181891515]
Deep learning models often require large amounts of data for training, leading to increased costs.
We present two data valuation metrics based on Synaptic Intelligence and gradient norms, respectively, to study redundancy in real-world image data.
Online and offline data selection algorithms are then proposed via clustering and grouping based on the examined data values.
arXiv Detail & Related papers (2023-06-25T03:31:05Z)
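One way the gradient-norm valuation and cluster-based selection described above could look; this is a generic reconstruction from the summary, not the paper's algorithm, and all names are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

def gradient_norm_values(per_example_grads):
    """Value each example by its loss-gradient norm (larger = more informative)."""
    return np.linalg.norm(per_example_grads, axis=1)

def offline_select(features, values, n_clusters=50, keep_per_cluster=10):
    """Cluster examples, then keep only the highest-value members per cluster,
    discarding redundant low-value examples."""
    labels = KMeans(n_clusters=n_clusters).fit_predict(features)
    kept = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        kept.extend(idx[np.argsort(values[idx])[::-1][:keep_per_cluster]])
    return np.sort(np.array(kept))
```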
- Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data, given access only to a set of expert models and their predictions alongside some limited information about the dataset used to train them.
arXiv Detail & Related papers (2022-10-11T10:20:31Z)
- A Data-Driven Method for Automated Data Superposition with Applications in Soft Matter Science [0.0]
We develop a data-driven, non-parametric method for superposing experimental data with arbitrary coordinate transformations.
Our method produces interpretable data-driven models that may inform applications such as materials classification, design, and discovery.
arXiv Detail & Related papers (2022-04-20T14:58:04Z)
- Data-SUITE: Data-centric identification of in-distribution incongruous examples [81.21462458089142]
Data-SUITE is a data-centric framework to identify incongruous regions of in-distribution (ID) data.
We empirically validate Data-SUITE's performance and coverage guarantees.
arXiv Detail & Related papers (2022-02-17T18:58:31Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
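A minimal sketch of the ATC recipe as the summary describes it (not the authors' released code; all names are hypothetical):

```python
import numpy as np

def atc_estimate(val_conf, val_correct, target_conf):
    """Average Thresholded Confidence sketch.

    val_conf    : model confidences on labeled source validation data
    val_correct : 1/0 correctness of the model on that validation data
    target_conf : model confidences on unlabeled target data
    """
    source_acc = np.mean(val_correct)
    # Pick the threshold so that the fraction of source confidences above it
    # matches the observed source accuracy.
    threshold = np.quantile(val_conf, 1.0 - source_acc)
    # Predicted target accuracy: fraction of target confidences above threshold.
    return np.mean(target_conf > threshold)
```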
- Managing dataset shift by adversarial validation for credit scoring [5.560471251954645]
The inconsistency between the distribution of training data and the data that actually needs to be predicted is likely to cause poor model performance.
We propose a method based on adversarial validation to alleviate the dataset shift problem in credit scoring scenarios.
arXiv Detail & Related papers (2021-12-19T07:07:15Z)
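Adversarial validation in general works as sketched below; this is the standard generic technique, not necessarily the paper's exact procedure for credit scoring:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def adversarial_validation(X_train, X_test):
    """Train a classifier to distinguish training rows from test rows.
    Near-chance accuracy means no detectable shift; training rows the
    classifier scores as 'test-like' form a more faithful validation set."""
    X = np.vstack([X_train, X_test])
    y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    clf = GradientBoostingClassifier().fit(X, y)
    # Probability that each training row "looks like" the test distribution.
    return clf.predict_proba(X_train)[:, 1]
```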
- Evaluating Predictive Uncertainty and Robustness to Distributional Shift Using Real World Data [0.0]
We propose metrics for general regression tasks using the Shifts Weather Prediction dataset.
We also present an evaluation of the baseline methods using these metrics.
arXiv Detail & Related papers (2021-11-08T17:32:10Z)
- Self Training with Ensemble of Teacher Models [8.257085583227695]
In order to train robust deep learning models, large amounts of labelled data are required.
In the absence of such large repositories of labelled data, unlabeled data can be exploited instead.
Semi-Supervised learning aims to utilize such unlabeled data for training classification models.
arXiv Detail & Related papers (2021-07-17T09:44:09Z)
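A generic sketch of self-training with an ensemble of teacher models, reconstructed from the summary above rather than from the paper itself (it assumes all teachers were fit on the same label set):

```python
import numpy as np
from sklearn.base import clone

def self_train(teachers, student, X_lab, y_lab, X_unlab, min_conf=0.9):
    """Average the teacher ensemble's predicted probabilities on unlabeled data,
    pseudo-label the confident examples, and fit the student on the union of
    labeled data and pseudo-labels."""
    probs = np.mean([t.predict_proba(X_unlab) for t in teachers], axis=0)
    confident = probs.max(axis=1) >= min_conf
    X_aug = np.vstack([X_lab, X_unlab[confident]])
    y_aug = np.concatenate([y_lab, probs[confident].argmax(axis=1)])
    return clone(student).fit(X_aug, y_aug)
```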
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- BREEDS: Benchmarks for Subpopulation Shift [98.90314444545204]
We develop a methodology for assessing the robustness of models to subpopulation shift.
We leverage the class structure underlying existing datasets to control the data subpopulations that comprise the training and test distributions.
Applying this methodology to the ImageNet dataset, we create a suite of subpopulation shift benchmarks of varying granularity.
arXiv Detail & Related papers (2020-08-11T17:04:47Z)
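A toy sketch of the subclass-splitting idea behind BREEDS-style benchmarks (an assumption-laden reconstruction from the summary; `superclass_of`, a hypothetical mapping from each subclass label to its superclass, stands in for the dataset's class hierarchy):

```python
import numpy as np

rng = np.random.default_rng(0)

def breeds_style_split(example_class, superclass_of):
    """The prediction task is the superclass, but each superclass's subclasses
    are split disjointly between source and target, so the subpopulations
    shift while the label set stays fixed."""
    train_classes, test_classes = set(), set()
    for sc in set(superclass_of.values()):
        subs = rng.permutation([c for c, s in superclass_of.items() if s == sc])
        half = len(subs) // 2
        train_classes.update(subs[:half].tolist())
        test_classes.update(subs[half:].tolist())
    train_idx = np.array([i for i, c in enumerate(example_class) if c in train_classes])
    test_idx = np.array([i for i, c in enumerate(example_class) if c in test_classes])
    return train_idx, test_idx
```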