An ensemble learning framework based on group decision making
- URL: http://arxiv.org/abs/2007.01167v1
- Date: Wed, 1 Jul 2020 13:18:34 GMT
- Title: An ensemble learning framework based on group decision making
- Authors: Jingyi He, Xiaojun Zhou, Rundong Zhang, Chunhua Yang
- Abstract summary: A framework for the ensemble learning (EL) method based on group decision making (GDM) is proposed for the classification problem.
In this framework, base learners are treated as decision-makers, the different categories as alternatives, and the precision, recall, and accuracy, which reflect the performance of each classification method, are employed to determine the weights of the decision-makers.
- Score: 7.906702226082627
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The classification problem is a significant topic in machine learning which
aims to teach machines how to group data by particular criteria. In
this paper, a framework for the ensemble learning (EL) method based on group
decision making (GDM) is proposed to address this problem. In this
framework, base learners are treated as decision-makers, the different
categories as alternatives, and the classification results obtained by
the diverse base learners as performance ratings, while the
precision, recall, and accuracy, which reflect the performance of the
classification methods, are employed to identify the weights of the
decision-makers in GDM. Moreover, because the precision and recall
defined for binary classification problems cannot be used directly in
multi-class classification, the One-vs-Rest (OvR) strategy is adopted to obtain
the precision and recall of each base learner for each category. The
experimental results demonstrate that the proposed GDM-based EL method achieves
higher accuracy than six other popular classification methods in most
instances, which verifies the effectiveness of the proposed method.
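To make the weighting idea concrete, the sketch below (an illustration, not the authors' reference implementation) treats each base learner as a decision-maker, computes its One-vs-Rest precision and recall per class plus its overall accuracy on a held-out validation split, and uses these scores as class-specific weights in the final vote over the alternatives (classes). The choice of scikit-learn base learners, the validation split, and the simple averaging of precision, recall, and accuracy into a single weight are assumptions made for illustration.

```python
# Minimal sketch of a GDM-style weighted ensemble, assuming scikit-learn base learners.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, accuracy_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
X_fit, X_val, y_fit, y_val = train_test_split(X_train, y_train, test_size=0.3, random_state=0)

base_learners = [DecisionTreeClassifier(random_state=0),
                 KNeighborsClassifier(),
                 LogisticRegression(max_iter=1000)]
classes = np.unique(y)

# Each "decision-maker" gets one weight per "alternative" (class), derived from its
# One-vs-Rest precision/recall and overall accuracy on the validation split.
weights = []
for clf in base_learners:
    clf.fit(X_fit, y_fit)
    val_pred = clf.predict(X_val)
    prec = precision_score(y_val, val_pred, labels=classes, average=None, zero_division=0)
    rec = recall_score(y_val, val_pred, labels=classes, average=None, zero_division=0)
    acc = accuracy_score(y_val, val_pred)
    weights.append((prec + rec + acc) / 3.0)  # assumed combination of the three scores
weights = np.vstack(weights)                  # shape: (n_learners, n_classes)

# GDM-style aggregation: every learner votes for its predicted class, weighted by its
# credibility for that class; the alternative with the highest total score wins.
scores = np.zeros((len(X_test), len(classes)))
for w, clf in zip(weights, base_learners):
    pred = clf.predict(X_test)
    col = np.searchsorted(classes, pred)           # column index of each predicted class
    scores[np.arange(len(X_test)), col] += w[col]  # add this learner's class-specific weight
ensemble_pred = classes[scores.argmax(axis=1)]
print("ensemble accuracy:", accuracy_score(y_test, ensemble_pred))
```

The actual GDM aggregation in the paper may combine the performance ratings and expert weights differently; the sketch only captures the mapping of base learners to decision-makers, categories to alternatives, and OvR metrics to weights.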
Related papers
- A Universal Unbiased Method for Classification from Aggregate
Observations [115.20235020903992]
This paper presents a novel universal method for classification from aggregate observations (CFAO), which holds an unbiased estimator of the classification risk for arbitrary losses.
The proposed method not only guarantees risk consistency thanks to the unbiased risk estimator but is also compatible with arbitrary losses.
arXiv Detail & Related papers (2023-06-20T07:22:01Z)
- Anomaly Detection using Ensemble Classification and Evidence Theory [62.997667081978825]
We present a novel approach for anomaly detection using ensemble classification and evidence theory.
A pool selection strategy is presented to build a solid ensemble classifier.
Uncertainty is then used to detect the anomalies.
arXiv Detail & Related papers (2022-12-23T00:50:41Z)
- A One-shot Framework for Distributed Clustered Learning in Heterogeneous Environments [54.172993875654015]
The paper proposes a family of communication-efficient methods for distributed learning in heterogeneous environments.
A one-shot approach, based on local computations at the users and a clustering-based aggregation step at the server, is shown to provide strong learning guarantees.
For strongly convex problems it is shown that, as long as the number of data points per user is above a threshold, the proposed approach achieves order-optimal mean-squared error rates in terms of the sample size.
arXiv Detail & Related papers (2022-09-22T09:04:10Z)
- An Evolutionary Approach for Creating of Diverse Classifier Ensembles [11.540822622379176]
We propose a framework for classifier selection and fusion based on a four-step protocol called CIF-E.
We implement and evaluate 24 varied ensemble approaches following the proposed CIF-E protocol.
Experiments show that the proposed evolutionary approach can outperform state-of-the-art approaches from the literature on many well-known UCI datasets.
arXiv Detail & Related papers (2022-08-23T14:23:27Z)
- Building an Ensemble of Classifiers via Randomized Models of Ensemble Members [1.827510863075184]
In this paper, a novel randomized model of a base classifier is developed.
In the proposed method, the randomness of the model results from a random selection of the learning set from a family of learning sets of a fixed size.
The DES scheme with the proposed model of competence was experimentally evaluated on a collection of 67 benchmark datasets.
arXiv Detail & Related papers (2021-09-16T10:53:13Z)
- Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification [94.55805516167369]
We propose a new approach for binary classification from $m$ U-sets for $m \ge 2$.
Our key idea is to consider an auxiliary classification task called surrogate set classification (SSC).
arXiv Detail & Related papers (2021-02-01T07:36:38Z)
- Unbiased Subdata Selection for Fair Classification: A Unified Framework and Scalable Algorithms [0.8376091455761261]
We show that many classification models within this framework can be recast as mixed-integer convex programs.
We then show that the proposed problem, when the classification outcomes are fixed (the "unbiased subdata selection" problem), is strongly solvable.
This motivates us to develop an iterative refining strategy (IRS) to solve the classification instances.
arXiv Detail & Related papers (2020-12-22T21:09:38Z)
- SOAR: Simultaneous Or of And Rules for Classification of Positive & Negative Classes [0.0]
We present a novel and complete taxonomy of classifications that clearly captures and quantifies the inherent ambiguity of noisy binary classification in the real world.
We show that this approach leads to a more granular formulation of the likelihood model, and that a simulated-annealing-based optimization achieves classification performance competitive with comparable techniques.
arXiv Detail & Related papers (2020-08-25T20:00:27Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Diversity-Aware Weighted Majority Vote Classifier for Imbalanced Data [1.2944868613449219]
We propose a diversity-aware ensemble learning based algorithm, DAMVI, to deal with imbalanced binary classification tasks.
We show the efficiency of the proposed approach with respect to state-of-the-art models on predictive maintenance, credit card fraud detection, webpage classification, and medical applications.
arXiv Detail & Related papers (2020-04-16T11:27:50Z)
- Learning to Select Base Classes for Few-shot Classification [96.92372639495551]
We use the Similarity Ratio as an indicator for the generalization performance of a few-shot model.
We then formulate the base class selection problem as a submodular optimization problem over Similarity Ratio.
arXiv Detail & Related papers (2020-04-01T09:55:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.