Optimizing Black-box Metrics with Iterative Example Weighting
- URL: http://arxiv.org/abs/2102.09492v1
- Date: Thu, 18 Feb 2021 17:19:09 GMT
- Title: Optimizing Black-box Metrics with Iterative Example Weighting
- Authors: Gaurush Hiranandani, Jatin Mathur, Oluwasanmi Koyejo, Mahdi Milani Fard, Harikrishna Narasimhan
- Abstract summary: We consider learning to optimize a classification metric defined by a black-box function of the confusion matrix.
Our approach is to adaptively learn example weights on the training dataset such that the resulting weighted objective best approximates the metric on the validation sample.
- Score: 32.682652530189266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider learning to optimize a classification metric defined by a
black-box function of the confusion matrix. Such black-box learning settings
are ubiquitous, for example, when the learner only has query access to the
metric of interest, or in noisy-label and domain adaptation applications where
the learner must evaluate the metric via performance evaluation using a small
validation sample. Our approach is to adaptively learn example weights on the
training dataset such that the resulting weighted objective best approximates
the metric on the validation sample. We show how to model and estimate the
example weights and use them to iteratively post-shift a pre-trained class
probability estimator to construct a classifier. We also analyze the resulting
procedure's statistical properties. Experiments on various label noise, domain
shift, and fair classification setups confirm that our proposal is better than
the individual state-of-the-art baselines for each application.
Related papers
- Classification Tree-based Active Learning: A Wrapper Approach [4.706932040794696]
This paper proposes a wrapper active learning method for classification, organizing the sampling process into a tree structure.
A classification tree constructed on an initial set of labeled samples is used to decompose the space into low-entropy regions.
This adaptation proves to be a significant enhancement over existing active learning methods.
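As a rough illustration (not the paper's method), a wrapper-style sampler might fit a tree on the labeled seed and query pool points falling in its least pure regions; `tree_guided_query` and its parameters below are hypothetical:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tree_guided_query(X_labeled, y_labeled, X_pool, batch_size=10):
    # Fit a shallow tree on the labeled seed to carve the space into regions.
    tree = DecisionTreeClassifier(max_leaf_nodes=16).fit(X_labeled, y_labeled)
    # Leaf-wise class distributions give a per-point entropy (impurity) score.
    proba = tree.predict_proba(X_pool)
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    # Query the pool points that land in the highest-entropy regions.
    return np.argsort(-entropy)[:batch_size]
```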
arXiv Detail & Related papers (2024-04-15T17:27:00Z)
- A Fixed-Point Approach to Unified Prompt-Based Counting [51.20608895374113]
This paper aims to establish a comprehensive prompt-based counting framework capable of generating density maps for objects indicated by various prompt types, such as box, point, and text.
Our model excels in prominent class-agnostic datasets and exhibits superior performance in cross-dataset adaptation tasks.
arXiv Detail & Related papers (2024-03-15T12:05:44Z)
- Exploring validation metrics for offline model-based optimisation with diffusion models [50.404829846182764]
In model-based optimisation (MBO) we are interested in using machine learning to design candidates that maximise some measure of reward with respect to a black box function called the (ground truth) oracle.
While an approximation to the ground-truth oracle can be trained and used in its place during model validation to measure the mean reward over generated candidates, the evaluation is approximate and vulnerable to adversarial examples.
This concern is encapsulated in our proposed evaluation framework, which is also designed to measure extrapolation.
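One hedged way to picture this, an illustration rather than the paper's framework: score generated candidates with an ensemble of approximate oracles and track their disagreement as a crude extrapolation signal.

```python
import numpy as np

def proxy_validation(candidates, proxies):
    # Each proxy is an approximation to the ground-truth oracle; each maps
    # an array of candidates to an array of predicted rewards.
    scores = np.stack([proxy(candidates) for proxy in proxies])
    mean_reward = scores.mean()                # approximate mean reward
    disagreement = scores.std(axis=0).mean()   # high => likely extrapolation
    return mean_reward, disagreement
```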
arXiv Detail & Related papers (2022-11-19T16:57:37Z)
- Learning Acceptance Regions for Many Classes with Anomaly Detection [19.269724165953274]
Many existing set-valued classification methods do not consider the possibility that a class never seen in the training data appears in the test data.
We propose a Generalized Prediction Set (GPS) approach to estimate the acceptance regions while considering the possibility of a new class in the test data.
Unlike previous methods, the proposed method achieves a good balance between accuracy, efficiency, and anomaly detection rate.
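A toy sketch of set-valued prediction with a new-class option (the class-specific thresholds and the GPS construction itself are simplified away here):

```python
def prediction_set(proba_row, thresholds):
    # Accept every class whose score clears its class-specific threshold.
    accepted = {k for k, p in enumerate(proba_row) if p >= thresholds[k]}
    # An empty acceptance set flags a potential never-seen (anomalous) class.
    return accepted if accepted else {"new class"}

# Example: prediction_set([0.1, 0.7, 0.2], [0.3, 0.3, 0.3]) -> {1}
```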
arXiv Detail & Related papers (2022-09-20T19:40:33Z)
- Classification Performance Metric Elicitation and its Applications [5.5637552942511155]
Despite its practical interest, there is limited formal guidance on how to select metrics for machine learning applications.
This thesis outlines metric elicitation as a principled framework for selecting the performance metric that best reflects implicit user preferences.
arXiv Detail & Related papers (2022-08-19T03:57:17Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- When in Doubt: Improving Classification Performance with Alternating Normalization [57.39356691967766]
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution.
We empirically demonstrate its effectiveness across a diverse set of classification tasks.
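A simplified Sinkhorn-style sketch of the idea (the actual CAN procedure differs, notably in how it selects doubtful examples and incorporates class priors):

```python
import numpy as np

def alternating_normalization(P, prior, n_iter=3):
    # P: (n_examples, n_classes) row-stochastic predictions; prior: class priors.
    P = P.copy()
    for _ in range(n_iter):
        P = P * (prior / np.maximum(P.sum(axis=0), 1e-12))  # pull columns toward prior
        P = P / P.sum(axis=1, keepdims=True)                # rows back to distributions
    return P
```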
arXiv Detail & Related papers (2021-09-28T02:55:42Z)
- MetricOpt: Learning to Optimize Black-Box Evaluation Metrics [21.608384691401238]
We study the problem of optimizing arbitrary non-differentiable task evaluation metrics such as misclassification rate and recall.
Our method, named MetricOpt, operates in a black-box setting where the computational details of the target metric are unknown.
We achieve this by learning a differentiable value function, which maps compact task-specific model parameters to metric observations.
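A deliberately crude sketch of the value-function idea, using a linear surrogate where the paper learns a richer differentiable model: fit observed (parameters, metric) pairs, then use the surrogate's gradient as a metric-driven finetuning direction.

```python
import numpy as np

def fit_value_function(thetas, metric_values, reg=1e-3):
    # thetas: (n_obs, d) compact task-specific parameters; metric_values: (n_obs,)
    Phi, y = np.asarray(thetas), np.asarray(metric_values)
    w = np.linalg.solve(Phi.T @ Phi + reg * np.eye(Phi.shape[1]), Phi.T @ y)
    value = lambda theta: float(theta @ w)  # differentiable surrogate of the metric
    grad = lambda theta: w                  # ascent direction (constant for a linear fit)
    return value, grad
```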
arXiv Detail & Related papers (2021-04-21T16:50:01Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
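To make the post-processing idea concrete, a hypothetical per-group score shift (not the paper's framework) that equalizes the rate at which each protected group's scores clear a threshold:

```python
import numpy as np

def group_score_shift(scores, groups, target_rate, threshold=0.0):
    # Add a per-group offset so each group has ~target_rate of scores above threshold.
    adjusted = scores.astype(float)
    for g in np.unique(groups):
        mask = groups == g
        q = np.quantile(scores[mask], 1.0 - target_rate)  # group's current cut point
        adjusted[mask] += threshold - q                   # slide it onto the threshold
    return adjusted
```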
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
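The transductive update itself can be sketched with fixed softmax confidences (the paper meta-learns these weights instead):

```python
import numpy as np

def confidence_weighted_prototypes(protos, queries, temp=1.0, n_steps=1):
    # protos: (C, D) class prototypes from the support set; queries: (Q, D) unlabeled.
    for _ in range(n_steps):
        d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # (Q, C) distances
        w = np.exp(-d / temp)
        w /= w.sum(axis=1, keepdims=True)  # per-query soft confidence over classes
        # Refine each prototype as a confidence-weighted mean of itself and the queries.
        protos = (protos + w.T @ queries) / (1.0 + w.sum(axis=0)[:, None])
    return protos
```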