Cascaded Classifier for Pareto-Optimal Accuracy-Cost Trade-Off Using Off-the-Shelf ANNs
- URL: http://arxiv.org/abs/2110.14256v1
- Date: Wed, 27 Oct 2021 08:16:11 GMT
- Title: Cascaded Classifier for Pareto-Optimal Accuracy-Cost Trade-Off Using Off-the-Shelf ANNs
- Authors: Cecilia Latotzke, Johnson Loh, and Tobias Gemmeke
- Abstract summary: We derive a methodology to maximize accuracy and efficiency of cascaded classifiers.
The multi-stage realization can be employed to optimize any state-of-the-art classifier.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine-learning classifiers provide high quality of service in
classification tasks. Research now targets cost reduction, measured in terms of
average processing time or energy per solution. Revisiting the concept of
cascaded classifiers, we present a first-of-its-kind analysis of optimal
pass-on criteria between the classifier stages. Based on this analysis, we
derive a methodology to maximize the accuracy and efficiency of cascaded
classifiers. On the one hand, our methodology allows a cost reduction of 1.32x
while preserving the reference classifier's accuracy. On the other hand, it
allows cost to be scaled over two orders of magnitude while gracefully
degrading accuracy. The final classifier stage sets the top accuracy; hence,
the multi-stage realization can be employed to optimize any state-of-the-art
classifier.
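The pass-on criterion decides, per input, whether an early stage's answer is
final or whether the input is forwarded to the next, more costly stage. Below
is a minimal NumPy sketch of such a cascade using a softmax-confidence
threshold as the pass-on criterion; the stages/thresholds interface is an
illustrative assumption, not the authors' implementation.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cascade_predict(x, stages, thresholds):
    """Route samples through classifier stages ordered cheap -> costly.
    A stage's answer is kept only if its top softmax probability clears
    that stage's pass-on threshold; undecided samples move on. The final
    stage always answers, so it sets the top accuracy."""
    preds = np.empty(len(x), dtype=int)
    pending = np.arange(len(x))                  # indices still undecided
    for stage, tau in zip(stages[:-1], thresholds):
        probs = softmax(stage(x[pending]))       # stage maps batch -> logits
        done = probs.max(axis=1) >= tau          # confident enough to stop
        preds[pending[done]] = probs[done].argmax(axis=1)
        pending = pending[~done]
        if pending.size == 0:
            return preds
    preds[pending] = softmax(stages[-1](x[pending])).argmax(axis=1)
    return preds
```

Sweeping the thresholds traces the accuracy-cost curve: low thresholds let the
cheap first stage answer almost everything, while high thresholds forward most
inputs to the final stage.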
Related papers
- OCCAM: Towards Cost-Efficient and Accuracy-Aware Image Classification Inference [11.267210747162961]
We propose a principled approach, OCCAM, to compute the best classifier assignment strategy over image classification queries.
On a variety of real-world datasets, OCCAM achieves 40% cost reduction with little to no accuracy drop.
arXiv Detail & Related papers (2024-06-06T21:05:39Z)
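As a rough illustration of cost-aware classifier assignment, a sketch under
assumed interfaces (estimate_acc is a hypothetical per-query accuracy
estimator, not OCCAM's actual construction):

```python
import numpy as np

def assign_classifier(query, models, costs, estimate_acc, target_acc):
    """Send the query to the cheapest model whose estimated accuracy on
    this query clears the target; fall back to the strongest model.
    All interfaces here are illustrative assumptions."""
    for i in np.argsort(costs):                  # try cheapest models first
        if estimate_acc(models[i], query) >= target_acc:
            return models[i]
    return models[int(np.argmax(costs))]         # costliest model as fallback
```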
- ProTeCt: Prompt Tuning for Taxonomic Open Set Classification [59.59442518849203]
Few-shot adaptation methods do not fare well in the taxonomic open set (TOS) setting.
We propose a prompt tuning technique that calibrates the hierarchical consistency of model predictions.
A new Prompt Tuning for Hierarchical Consistency (ProTeCt) technique is then proposed to calibrate classification across label set granularities.
arXiv Detail & Related papers (2023-06-04T02:55:25Z)
- Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker [57.49330031751386]
We find achievable information-theoretic lower bounds on loss in the presence of a test-time attacker for multi-class classifiers on any discrete dataset.
We provide a general framework for finding the optimal 0-1 loss that revolves around the construction of a conflict hypergraph from the data and adversarial constraints.
arXiv Detail & Related papers (2023-02-21T15:17:13Z)
- EXACT: How to Train Your Accuracy [6.144680854063938]
We propose a new optimization framework by introducing stochasticity to a model's output and optimizing expected accuracy.
Experiments on linear models and deep image classification show that the proposed optimization method is a powerful alternative to widely used classification losses.
arXiv Detail & Related papers (2022-05-19T15:13:00Z)
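A Monte-Carlo illustration of the EXACT-style objective (the paper derives a
differentiable estimator rather than sampling like this):

```python
import numpy as np

def expected_accuracy(logits, labels, sigma=1.0, n_samples=256, seed=0):
    """Estimate the probability that the true class wins after Gaussian
    noise is added to the model's scores. This sketches the objective
    only; it is not the paper's gradient estimator."""
    rng = np.random.default_rng(seed)
    noisy = logits[None] + rng.normal(0.0, sigma,
                                      size=(n_samples,) + logits.shape)
    wins = noisy.argmax(axis=-1) == labels[None]
    return wins.mean()
```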
- Label, Verify, Correct: A Simple Few Shot Object Detection Method [93.84801062680786]
We introduce a simple pseudo-labelling method to source high-quality pseudo-annotations from a training set.
We present two novel methods to improve the precision of the pseudo-labelling process.
Our method achieves state-of-the-art or second-best performance compared to existing approaches.
arXiv Detail & Related papers (2021-12-10T18:59:06Z)
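A minimal sketch of the pseudo-labelling step in its generic classification
form (the paper's detection pipeline, with its verification and correction
stages, is more involved):

```python
import numpy as np

def pseudo_label(predict_proba, unlabeled_x, conf_threshold=0.9):
    """Keep only predictions whose confidence clears a threshold and
    treat them as labels for retraining. predict_proba is an assumed
    callable mapping a batch to class probabilities."""
    probs = predict_proba(unlabeled_x)
    keep = probs.max(axis=1) >= conf_threshold
    return unlabeled_x[keep], probs[keep].argmax(axis=1)
```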
"Classify and Count" (CC) is often a biased estimator.
Previous works have failed to use properly optimised versions of CC.
We argue that, while still inferior to some cutting-edge methods, they deliver near-state-of-the-art accuracy.
arXiv Detail & Related papers (2020-11-04T21:47:39Z)
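Plain CC and one standard "properly optimised" correction, the adjusted
classify-and-count (ACC) estimator, in a short sketch (tpr/fpr are assumed to
come from validation data):

```python
import numpy as np

def classify_and_count(scores, threshold=0.5):
    """Plain CC: estimated prevalence = fraction predicted positive."""
    return float((scores >= threshold).mean())

def adjusted_cc(scores, tpr, fpr, threshold=0.5):
    """ACC correction: observed rate = tpr*p + fpr*(1-p), solved for p."""
    cc = classify_and_count(scores, threshold)
    return float(np.clip((cc - fpr) / (tpr - fpr), 0.0, 1.0))
```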
- Bayesian Few-Shot Classification with One-vs-Each Pólya-Gamma Augmented Gaussian Processes [7.6146285961466]
Few-shot classification (FSC) is an important step on the path toward human-like machine learning.
We propose a novel combination of Pólya-Gamma augmentation and the one-vs-each softmax approximation that allows us to efficiently marginalize over functions rather than model parameters.
We demonstrate improved accuracy and uncertainty quantification on both standard few-shot classification benchmarks and few-shot domain transfer tasks.
arXiv Detail & Related papers (2020-07-20T19:10:41Z)
- Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
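A sketch of the idea: train on Gaussian-noise copies of each input while
penalizing disagreement among their predictions (the KL-to-mean form below is
a simplification of the paper's regularizer):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(noisy_logits, labels, lam=1.0, eps=1e-12):
    """noisy_logits: (m, batch, classes) logits for m noisy copies of
    each input. Cross-entropy on every copy plus a KL pull toward the
    mean prediction over the noise."""
    m, b, _ = noisy_logits.shape
    p = softmax(noisy_logits)
    ce = -np.log(p[:, np.arange(b), labels] + eps).mean()
    p_mean = p.mean(axis=0, keepdims=True)       # consensus over noise
    kl = (p * (np.log(p + eps) - np.log(p_mean + eps))).sum(axis=-1).mean()
    return ce + lam * kl
```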
- Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
arXiv Detail & Related papers (2020-04-29T10:54:40Z)
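The scoring idea reduces to ranking candidates by the language model's
likelihood of the full text; a sketch with lm_logprob as an assumed callable
(text -> total log-probability), not the paper's exact scoring function:

```python
def rank_by_plausibility(premise, candidates, lm_logprob):
    """Pick the candidate whose full-text continuation the LM finds
    most plausible."""
    scores = [lm_logprob(f"{premise} {c}") for c in candidates]
    return max(range(len(candidates)), key=scores.__getitem__)
```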
- Better Classifier Calibration for Small Data Sets [0.0]
We show how generating more data for calibration is able to improve calibration algorithm performance.
The proposed approach adds computational cost, but since the main use case is small data sets, this extra cost remains insignificant.
arXiv Detail & Related papers (2020-02-24T12:27:21Z)
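A generic sketch of the idea: enlarge a small calibration set with perturbed
copies before fitting the calibration map (the jitter scheme and Platt
scaling below are assumptions for illustration, not the paper's generation
method):

```python
import numpy as np

def platt_fit(scores, labels, lr=0.1, steps=2000):
    """Fit Platt scaling p = sigmoid(a*s + b) by gradient descent on
    cross-entropy; labels are 0/1."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        g = p - labels                           # gradient wrt the logit
        a -= lr * float((g * scores).mean())
        b -= lr * float(g.mean())
    return a, b

def augmented_platt_fit(scores, labels, n_copies=10, jitter=0.05, seed=0):
    """Fit the calibration map on a jitter-augmented calibration set."""
    rng = np.random.default_rng(seed)
    s = np.concatenate([scores + rng.normal(0.0, jitter, scores.shape)
                        for _ in range(n_copies)])
    y = np.tile(labels, n_copies)
    return platt_fit(s, y)
```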
This list is automatically generated from the titles and abstracts of the papers on this site.