Ensemble of Binary Classifiers Combined Using Recurrent Correlation
Associative Memories
- URL: http://arxiv.org/abs/2009.08578v1
- Date: Fri, 18 Sep 2020 01:16:53 GMT
- Title: Ensemble of Binary Classifiers Combined Using Recurrent Correlation
Associative Memories
- Authors: Rodolfo Anibal Lobo and Marcos Eduardo Valle
- Abstract summary: The majority vote is an example of a methodology used to combine classifiers in an ensemble method.
We introduce ensemble methods based on recurrent correlation associative memories for binary classification problems.
- Score: 1.3706331473063877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An ensemble method should cleverly combine a group of base classifiers to
yield an improved classifier. The majority vote is an example of a methodology
used to combine classifiers in an ensemble method. In this paper, we propose to
combine classifiers using an associative memory model. Precisely, we introduce
ensemble methods based on recurrent correlation associative memories (RCAMs)
for binary classification problems. We show that an RCAM-based ensemble
classifier can be viewed as a majority vote classifier whose weights depend on
the similarity between the base classifiers and the resulting ensemble method.
More precisely, the RCAM-based ensemble combines the classifiers using a
recurrent consult and vote scheme. Furthermore, computational experiments
confirm the potential application of the RCAM-based ensemble method for binary
classification problems.
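The recurrent consult-and-vote scheme described in the abstract can be illustrated with a minimal sketch: bipolar (+1/-1) base-classifier predictions are combined by repeatedly weighting each classifier with an exponential of its similarity to the current ensemble state, in the spirit of an exponential RCAM. The function name, the normalized inner-product similarity, and the iteration cap below are illustrative assumptions, not the authors' exact formulation.

```python
import math

def rcam_ensemble_predict(base_preds, alpha=1.0, max_iter=10):
    """Illustrative RCAM-style consult-and-vote combination (sketch).

    base_preds: list of bipolar (+1/-1) prediction vectors, one per
    base classifier, all of the same length.
    """
    n = len(base_preds[0])
    # Start from the plain (unweighted) majority vote.
    x = [1 if sum(p[j] for p in base_preds) >= 0 else -1 for j in range(n)]
    for _ in range(max_iter):
        # Consult: weight each classifier by exp(alpha * similarity to state),
        # where similarity is the normalized inner product in [-1, 1].
        weights = [math.exp(alpha * sum(p[j] * x[j] for j in range(n)) / n)
                   for p in base_preds]
        # Vote: weighted majority, thresholded back to +1/-1.
        new_x = [1 if sum(w * p[j] for w, p in zip(weights, base_preds)) >= 0
                 else -1 for j in range(n)]
        if new_x == x:  # fixed point reached
            break
        x = new_x
    return x
```

With alpha = 0 every weight equals 1 and the scheme reduces to the ordinary majority vote, which matches the paper's observation that the RCAM-based ensemble generalizes it.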
Related papers
- Anomaly Detection using Ensemble Classification and Evidence Theory [62.997667081978825]
We present a novel approach to anomaly detection using ensemble classification and evidence theory.
A pool selection strategy is presented to build a solid ensemble classifier.
Uncertainty is used to drive the anomaly detection.
arXiv Detail & Related papers (2022-12-23T00:50:41Z)
- Probability-driven scoring functions in combining linear classifiers [0.913755431537592]
This research is aimed at building a new fusion method dedicated to the ensemble of linear classifiers.
The proposed fusion method is compared with the reference method using multiple benchmark datasets taken from the KEEL repository.
The experimental study shows that, under certain conditions, some improvement may be obtained.
arXiv Detail & Related papers (2021-09-16T08:58:32Z)
- Relearning ensemble selection based on new generated features [0.0]
The proposed technique was compared with state-of-the-art ensemble methods using three benchmark datasets and one synthetic dataset.
Four classification performance measures are used to evaluate the proposed method.
arXiv Detail & Related papers (2021-06-12T12:45:32Z)
- Visualizing Classifier Adjacency Relations: A Case Study in Speaker Verification and Voice Anti-Spoofing [72.4445825335561]
We propose a simple method to derive 2D representation from detection scores produced by an arbitrary set of binary classifiers.
Based upon rank correlations, our method facilitates a visual comparison of classifiers with arbitrary scores.
While the approach is fully versatile and can be applied to any detection task, we demonstrate the method using scores produced by automatic speaker verification and voice anti-spoofing systems.
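A minimal sketch of the rank-correlation step, assuming tie-free score vectors: compute Kendall's tau between each pair of classifiers' detection scores, then map it to a pairwise distance that a 2D embedding method such as MDS could consume. The helper names and the (1 - tau) / 2 scaling are hypothetical; the paper's exact construction may differ.

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall rank correlation between two score vectors.

    Sketch only: assumes no tied score pairs, so every index pair is
    either concordant or discordant.
    """
    pairs = list(combinations(range(len(a)), 2))
    concordant = sum(1 for i, j in pairs
                     if (a[i] - a[j]) * (b[i] - b[j]) > 0)
    return 2.0 * concordant / len(pairs) - 1.0

def rank_distance_matrix(score_vectors):
    """Pairwise distances d = (1 - tau) / 2 between classifiers: 0 for
    identical rankings, 1 for fully reversed ones. The resulting matrix
    is the kind of input a 2D embedding (e.g. MDS) would take."""
    k = len(score_vectors)
    return [[(1.0 - kendall_tau(score_vectors[i], score_vectors[j])) / 2.0
             for j in range(k)] for i in range(k)]
```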
arXiv Detail & Related papers (2021-06-11T13:03:33Z)
- CAC: A Clustering Based Framework for Classification [20.372627144885158]
We design a simple, efficient, and generic framework called Classification Aware Clustering (CAC).
Our experiments on synthetic and real benchmark datasets demonstrate the efficacy of CAC over previous methods for combined clustering and classification.
arXiv Detail & Related papers (2021-02-23T18:59:39Z)
- Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification [94.55805516167369]
We propose a new approach for binary classification from $m$ unlabeled sets (U-sets) for $m \ge 2$.
Our key idea is to consider an auxiliary classification task called surrogate set classification (SSC).
arXiv Detail & Related papers (2021-02-01T07:36:38Z)
- Clustering Ensemble Meets Low-rank Tensor Approximation [50.21581880045667]
This paper explores the problem of clustering ensemble, which aims to combine multiple base clusterings to produce better performance than that of any individual base clustering.
We propose a novel low-rank tensor approximation-based method to solve the problem from a global perspective.
Experimental results over 7 benchmark data sets show that the proposed model achieves a breakthrough in clustering performance, compared with 12 state-of-the-art methods.
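The base clusterings feeding such an ensemble are commonly summarized as co-association matrices (one per base clustering, or averaged over all of them), which is the kind of structure low-rank tensor methods operate on. A minimal sketch of that preprocessing step, with an assumed function name, might look like:

```python
def co_association(labelings):
    """Average co-association matrix of an ensemble of clusterings.

    labelings: list of cluster-label vectors, one per base clustering,
    each assigning a label to the same n samples. Entry (i, j) is the
    fraction of base clusterings that place samples i and j in the same
    cluster; stacking the per-clustering indicator matrices instead of
    averaging them yields a third-order tensor.
    """
    n = len(labelings[0])
    m = len(labelings)
    return [[sum(1 for lab in labelings if lab[i] == lab[j]) / m
             for j in range(n)] for i in range(n)]
```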
arXiv Detail & Related papers (2020-12-16T13:01:37Z)
- A new interval-based aggregation approach based on bagging and Interval Agreement Approach (IAA) in ensemble learning [0.0]
This paper focuses on the classifier output aggregation step and presents a new interval-based aggregation model using bagging resampling and the Interval Agreement Approach (IAA) in ensemble learning.
In addition to implementing this new aggregation approach, we design experiments to encourage researchers to use interval modeling in ensemble learning.
arXiv Detail & Related papers (2020-12-15T09:33:12Z)
- Open-Set Recognition with Gaussian Mixture Variational Autoencoders [91.3247063132127]
At inference time, open-set classification either assigns a sample to a known class from training or rejects it as an unknown class.
We train our model to cooperatively learn reconstruction and perform class-based clustering in the latent space.
Our model achieves more accurate and robust open-set classification results, with an average F1 improvement of 29.5%.
arXiv Detail & Related papers (2020-06-03T01:15:19Z)
- Clustering Binary Data by Application of Combinatorial Optimization Heuristics [52.77024349608834]
We study clustering methods for binary data, first defining aggregation criteria that measure the compactness of clusters.
Five new and original methods are introduced, using neighborhoods and population behavior optimization metaheuristics.
From a set of 16 data tables generated by a quasi-Monte Carlo experiment, a comparison is performed for one of the aggregations using L1 dissimilarity, with hierarchical clustering, and a version of k-means: partitioning around medoids or PAM.
arXiv Detail & Related papers (2020-01-06T23:33:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.