Global Multiclass Classification and Dataset Construction via
Heterogeneous Local Experts
- URL: http://arxiv.org/abs/2005.10848v3
- Date: Tue, 5 Jan 2021 23:34:36 GMT
- Title: Global Multiclass Classification and Dataset Construction via
Heterogeneous Local Experts
- Authors: Surin Ahn, Ayfer Ozgur and Mert Pilanci
- Abstract summary: We show how to minimize the number of labelers while ensuring the reliability of the resulting dataset.
Experiments with the MNIST and CIFAR-10 datasets demonstrate the favorable accuracy of our aggregation scheme.
- Score: 37.27708297562079
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the domains of dataset construction and crowdsourcing, a notable challenge
is to aggregate labels from a heterogeneous set of labelers, each of whom is
potentially an expert in some subset of tasks (and less reliable in others). To
reduce costs of hiring human labelers or training automated labeling systems,
it is of interest to minimize the number of labelers while ensuring the
reliability of the resulting dataset. We model this as the problem of
performing $K$-class classification using the predictions of smaller
classifiers, each trained on a subset of $[K]$, and derive bounds on the number
of classifiers needed to accurately infer the true class of an unlabeled sample
under both adversarial and stochastic assumptions. By exploiting a connection
to the classical set cover problem, we produce a near-optimal scheme for
designing such configurations of classifiers which recovers the well known
one-vs.-one classification approach as a special case. Experiments with the
MNIST and CIFAR-10 datasets demonstrate the favorable accuracy (compared to a
centralized classifier) of our aggregation scheme applied to classifiers
trained on subsets of the data. These results suggest a new way to
automatically label data or adapt an existing set of local classifiers to
larger-scale multiclass problems.
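The aggregation idea can be illustrated with a minimal sketch. The oracle-style local experts and the plain majority-vote rule below are illustrative assumptions, not the paper's exact construction; the classical one-vs.-one configuration is used because the abstract notes it is recovered as a special case of the proposed set-cover-based design.

```python
from itertools import combinations
from collections import Counter

def one_vs_one_subsets(K):
    """The classical one-vs.-one design: every 2-element subset of [K].
    Every pair of classes is covered by some local classifier, which is
    the covering property the paper's set-cover scheme generalizes."""
    return list(combinations(range(K), 2))

def aggregate(sample, subsets, local_predict):
    """Majority vote over local experts.

    `local_predict(subset, sample)` returns a predicted class from within
    `subset` (a hypothetical interface; the paper's aggregation rule may
    differ in its details)."""
    votes = Counter(local_predict(s, sample) for s in subsets)
    return votes.most_common(1)[0][0]

# Toy demo: each local expert is a noiseless oracle restricted to its
# subset; on a subset not containing the true class it defaults to the
# subset's first element (a stand-in for an arbitrary wrong answer).
def make_oracle(true_label):
    def predict(subset, sample):
        return true_label if true_label in subset else subset[0]
    return predict

K = 4
subsets = one_vs_one_subsets(K)          # 6 pairwise classifiers for K=4
pred = aggregate(None, subsets, make_oracle(true_label=2))
```

Here the true class wins because it receives a vote from every subset that contains it (K - 1 of them), while any other class can collect at most K - 2 spurious votes, mirroring the intuition behind the one-vs.-one rule.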
Related papers
- Generalized Category Discovery with Clustering Assignment Consistency [56.92546133591019]
Generalized category discovery (GCD) is a recently proposed open-world task.
We propose a co-training-based framework that encourages clustering consistency.
Our method achieves state-of-the-art performance on three generic benchmarks and three fine-grained visual recognition datasets.
arXiv Detail & Related papers (2023-10-30T00:32:47Z)
- Making Binary Classification from Multiple Unlabeled Datasets Almost Free of Supervision [128.6645627461981]
We propose a new problem setting: binary classification from multiple unlabeled datasets given only one pairwise numerical relationship between class priors (MU-OPPO).
In this setting, the class priors of the individual unlabeled datasets are not needed.
We show that our framework brings smaller estimation errors of class priors and better performance of binary classification.
arXiv Detail & Related papers (2023-06-12T11:33:46Z)
- NorMatch: Matching Normalizing Flows with Discriminative Classifiers for Semi-Supervised Learning [8.749830466953584]
Semi-Supervised Learning (SSL) aims to learn a model using a tiny labeled set and massive amounts of unlabeled data.
In this work we introduce a new framework for SSL named NorMatch.
We demonstrate, through numerical and visual results, that NorMatch achieves state-of-the-art performance on several datasets.
arXiv Detail & Related papers (2022-11-17T15:39:18Z) - Learning from Multiple Unlabeled Datasets with Partial Risk
Regularization [80.54710259664698]
In this paper, we aim to learn an accurate classifier without any class labels.
We first derive an unbiased estimator of the classification risk that can be estimated from the given unlabeled sets.
We then find that the classifier obtained as such tends to cause overfitting as its empirical risks go negative during training.
Experiments demonstrate that our method effectively mitigates overfitting and outperforms state-of-the-art methods for learning from multiple unlabeled sets.
arXiv Detail & Related papers (2022-07-04T16:22:44Z) - Semi-Supervised Cascaded Clustering for Classification of Noisy Label
Data [0.3441021278275805]
The performance of supervised classification techniques often deteriorates when the data has noisy labels.
Most of the approaches addressing the noisy label data rely on deep neural networks (DNN) that require huge datasets for classification tasks.
We propose a semi-supervised cascaded clustering algorithm to extract patterns and generate a cascaded tree of classes in such datasets.
arXiv Detail & Related papers (2022-05-04T17:42:22Z)
- Evolving Multi-Label Fuzzy Classifier [5.53329677986653]
Multi-label classification has attracted much attention in the machine learning community, as it addresses the problem of assigning a single sample to more than one class at the same time.
We propose an evolving multi-label fuzzy classifier (EFC-ML) which is able to self-adapt and self-evolve its structure with new incoming multi-label samples in an incremental, single-pass manner.
arXiv Detail & Related papers (2022-03-29T08:01:03Z)
- Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification [94.55805516167369]
We propose a new approach for binary classification from $m$ unlabeled sets (U-sets) for $m \ge 2$.
Our key idea is to consider an auxiliary classification task called surrogate set classification (SSC).
arXiv Detail & Related papers (2021-02-01T07:36:38Z)
- Joint Visual and Temporal Consistency for Unsupervised Domain Adaptive Person Re-Identification [64.37745443119942]
This paper jointly enforces visual and temporal consistency by combining a local one-hot classification with a global multi-class classification.
Experimental results on three large-scale ReID datasets demonstrate the superiority of the proposed method in both purely unsupervised and unsupervised domain adaptive ReID tasks.
arXiv Detail & Related papers (2020-07-21T14:31:27Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.