Random Hyperboxes
- URL: http://arxiv.org/abs/2006.00695v4
- Date: Mon, 4 Apr 2022 07:35:21 GMT
- Title: Random Hyperboxes
- Authors: Thanh Tung Khuat, Bogdan Gabrys
- Abstract summary: We show a generalization error bound of the proposed classifier based on the strength of the individual hyperbox-based classifiers.
The effectiveness of the proposed classifier is analyzed using a carefully selected illustrative example.
We identify open issues related to generalization error bounds on real datasets and point out potential research directions.
- Score: 9.061408029414455
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper proposes a simple yet powerful ensemble classifier, called Random
Hyperboxes, constructed from individual hyperbox-based classifiers trained on
random subsets of the sample and feature spaces of the training set. We also
show a generalization error bound of the proposed classifier based on the
strength of the individual hyperbox-based classifiers as well as the
correlation among them. The effectiveness of the proposed classifier is
analyzed using a carefully selected illustrative example and compared
empirically with other popular single and ensemble classifiers on 20 datasets
using statistical testing methods. The experimental results confirmed that our
proposed method outperforms other fuzzy min-max neural networks and popular
learning algorithms and is competitive with other ensemble methods. Finally,
we identify open issues related to generalization error bounds on real
datasets and point out potential research directions.
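As a rough reading aid, the construction described in the abstract can be sketched in a few lines. The code below is only a hedged illustration, not the authors' implementation: the base learner is a deliberately simplified one-hyperbox-per-class classifier standing in for a full fuzzy min-max network (no hyperbox expansion/contraction or fuzzy membership function), and the ensemble simply trains it on random sample and feature subsets and takes a majority vote. All class and parameter names are placeholders.

```python
import numpy as np


class SimpleHyperboxClassifier:
    """One axis-aligned hyperbox (feature-wise min/max) per class; a point's
    membership in a box is the negative distance by which it falls outside."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.boxes_ = {c: (X[y == c].min(axis=0), X[y == c].max(axis=0))
                       for c in self.classes_}
        return self

    def _membership(self, X, lo, hi):
        outside = np.maximum(lo - X, 0.0) + np.maximum(X - hi, 0.0)
        return -outside.sum(axis=1)          # 0 inside the box, negative outside

    def predict(self, X):
        scores = np.column_stack([self._membership(X, *self.boxes_[c])
                                  for c in self.classes_])
        return self.classes_[scores.argmax(axis=1)]


class RandomHyperboxesSketch:
    """Ensemble of hyperbox-based learners, each fitted on a bootstrap sample
    of the rows and a random subset of the columns; majority vote at prediction."""

    def __init__(self, n_estimators=30, max_samples=0.8, max_features=0.6, seed=0):
        self.n_estimators = n_estimators
        self.max_samples = max_samples
        self.max_features = max_features
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n, d = X.shape
        self.members_ = []
        for _ in range(self.n_estimators):
            rows = self.rng.choice(n, size=int(self.max_samples * n), replace=True)
            cols = self.rng.choice(d, size=max(1, int(self.max_features * d)), replace=False)
            clf = SimpleHyperboxClassifier().fit(X[np.ix_(rows, cols)], y[rows])
            self.members_.append((cols, clf))
        return self

    def predict(self, X):
        # Assumes non-negative integer class labels so np.bincount can tally votes.
        votes = np.column_stack([clf.predict(X[:, cols]) for cols, clf in self.members_])
        return np.array([np.bincount(row).argmax() for row in votes])
```

With integer-labeled data, `RandomHyperboxesSketch().fit(X_train, y_train).predict(X_test)` returns the ensemble vote; the hyperparameter defaults are illustrative choices, not values taken from the paper.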
Related papers
- Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to approaches that learn from data of a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
arXiv Detail & Related papers (2023-09-25T12:31:18Z)
- Anomaly Detection using Ensemble Classification and Evidence Theory [62.997667081978825]
We present a novel approach for anomaly detection using ensemble classification and evidence theory.
A pool selection strategy is presented to build a solid ensemble classifier.
We use uncertainty for the anomaly detection approach.
arXiv Detail & Related papers (2022-12-23T00:50:41Z)
- Statistical Comparisons of Classifiers by Generalized Stochastic Dominance [0.0]
There is still no consensus on how to compare classifiers over multiple data sets with respect to several criteria.
In this paper, we add a fresh view to the vivid debate by adopting recent developments in decision theory.
We show that our framework ranks classifiers by a generalized concept of dominance, which powerfully circumvents the cumbersome, and often even self-contradictory, reliance on aggregates (a minimal dominance-check sketch appears after this list).
arXiv Detail & Related papers (2022-09-05T09:28:15Z)
- Self-Evolutionary Clustering [1.662966122370634]
Most existing deep clustering methods are based on simple distance comparison and highly dependent on the target distribution generated by a handcrafted nonlinear mapping.
A novel modular Self-Evolutionary Clustering (Self-EvoC) framework is constructed, which boosts the clustering performance by classification in a self-supervised manner.
The framework can efficiently discriminate sample outliers and generate a better target distribution with the assistance of self-supervision.
arXiv Detail & Related papers (2022-02-21T19:38:18Z)
- Self-Certifying Classification by Linearized Deep Assignment [65.0100925582087]
We propose a novel class of deep predictors for classifying metric data on graphs within the PAC-Bayes risk certification paradigm.
Building on the recent PAC-Bayes literature and data-dependent priors, this approach enables learning posterior distributions on the hypothesis space.
arXiv Detail & Related papers (2022-01-26T19:59:14Z)
- Visualizing Classifier Adjacency Relations: A Case Study in Speaker Verification and Voice Anti-Spoofing [72.4445825335561]
We propose a simple method to derive a 2D representation from detection scores produced by an arbitrary set of binary classifiers.
Based upon rank correlations, our method facilitates a visual comparison of classifiers with arbitrary scores (a minimal rank-correlation embedding sketch appears after this list).
While the approach is fully versatile and can be applied to any detection task, we demonstrate the method using scores produced by automatic speaker verification and voice anti-spoofing systems.
arXiv Detail & Related papers (2021-06-11T13:03:33Z)
- Hyperdimensional Computing for Efficient Distributed Classification with Randomized Neural Networks [5.942847925681103]
We study distributed classification, which can be employed in situations where data cannot be stored at a central location nor shared.
We propose a more efficient solution for distributed classification by making use of a lossy compression approach applied when sharing the local classifiers with other agents.
arXiv Detail & Related papers (2021-06-02T01:33:56Z)
- Learning from Incomplete Features by Simultaneous Training of Neural Networks and Sparse Coding [24.3769047873156]
This paper addresses the problem of training a classifier on a dataset with incomplete features.
We assume that different subsets of features (random or structured) are available at each data instance.
A new supervised learning method is developed to train a general classifier, using only a subset of features per sample.
arXiv Detail & Related papers (2020-11-28T02:20:39Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
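Two of the related entries above describe mechanisms concrete enough to sketch. First, for the comparison of classifiers by generalized stochastic dominance: the snippet below is only a hedged illustration of the underlying idea of dominance-based (non-aggregated) comparison, using plain first-order stochastic dominance over per-dataset accuracies; the paper's generalized notion, which handles several quality criteria jointly, is strictly richer. The accuracy values are made up for illustration.

```python
import numpy as np


def stochastically_dominates(scores_a, scores_b):
    """True if A first-order stochastically dominates B (higher scores are better):
    the empirical CDF of A lies at or below that of B everywhere, strictly below somewhere."""
    thresholds = np.union1d(scores_a, scores_b)   # jump points of both empirical CDFs
    cdf_a = np.array([(scores_a <= t).mean() for t in thresholds])
    cdf_b = np.array([(scores_b <= t).mean() for t in thresholds])
    return bool(np.all(cdf_a <= cdf_b) and np.any(cdf_a < cdf_b))


# Hypothetical accuracies of two classifiers on the same benchmark datasets.
acc_a = np.array([0.91, 0.88, 0.95, 0.90, 0.87])
acc_b = np.array([0.89, 0.85, 0.93, 0.90, 0.84])
print(stochastically_dominates(acc_a, acc_b))  # True: A is never worse in distribution
print(stochastically_dominates(acc_b, acc_a))  # False
```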
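Second, for the classifier-adjacency visualization: a minimal sketch of the general recipe the summary describes, i.e. turning pairwise rank correlations between classifiers' detection scores into dissimilarities and embedding them in 2D. Multidimensional scaling is used here as one reasonable embedding choice, and the score matrix is a random placeholder; neither is claimed to match the paper's exact procedure.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.manifold import MDS

# Assumed setup: detection scores from 4 classifiers (rows) on the same 10 trials (columns).
rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 10))

# Pairwise Spearman rank correlations between the classifiers' score vectors.
corr, _ = spearmanr(scores, axis=1)            # (4, 4) correlation matrix
dissimilarity = 1.0 - corr                     # high correlation -> small distance

# Embed the dissimilarities in 2D for a visual comparison of the classifiers.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)
print(coords)                                  # one 2D point per classifier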