Evaluating Nonlinear Decision Trees for Binary Classification Tasks with
Other Existing Methods
- URL: http://arxiv.org/abs/2008.10753v1
- Date: Tue, 25 Aug 2020 00:00:23 GMT
- Title: Evaluating Nonlinear Decision Trees for Binary Classification Tasks with
Other Existing Methods
- Authors: Yashesh Dhebar, Sparsh Gupta and Kalyanmoy Deb
- Abstract summary: Classification of datasets into two or more distinct classes is an important machine learning task.
Many methods achieve very high accuracy on binary classification tasks on test data, but cannot provide any easily interpretable explanation.
We highlight and evaluate a recently proposed nonlinear decision tree approach against a number of commonly used classification methods across several datasets.
- Score: 8.870380386952993
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Classification of datasets into two or more distinct classes is an important
machine learning task. Many methods are able to solve binary classification
tasks with very high accuracy on test data, but cannot provide any easily
interpretable explanation that would give users a deeper understanding of why
the data are split into two classes. In this paper, we highlight and evaluate a
recently proposed nonlinear decision tree approach against a number of commonly
used classification methods on datasets involving a few to a large number of
features. The study reveals key issues such as the effect of the classification
task on the method's parameter values, the complexity of the classifier versus
the achieved accuracy, and the interpretability of the resulting classifiers.
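As a point of reference for the kind of comparison described above, the snippet below is a minimal sketch that evaluates a few commonly used classifiers (scikit-learn implementations, assumed here) on a synthetic binary task. It is not the authors' nonlinear decision tree (NLDT) code.

```python
# Minimal sketch: test accuracy of several commonly used binary classifiers.
# This is NOT the paper's nonlinear decision tree (NLDT); it only illustrates
# the baseline comparison set-up the abstract refers to.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "decision_tree_depth3": DecisionTreeClassifier(max_depth=3, random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "svm_rbf": SVC(kernel="rbf"),
    "logistic_regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.3f}")
```

A shallow decision tree is included on purpose: tree depth is a simple proxy for the complexity-versus-accuracy trade-off that the abstract highlights.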
Related papers
- Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to approaches of learning using data from a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
arXiv Detail & Related papers (2023-09-25T12:31:18Z)
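As a rough illustration of the one-class idea summarized above (not the paper's convolutional multimodal model), the sketch below trains a small fully connected autoencoder on single-class data and flags samples with high reconstruction error; the 95th-percentile threshold is an assumption.

```python
# Hedged sketch of autoencoder-based one-class classification: train on
# "normal" data only, then flag inputs with unusually high reconstruction error.
import torch
import torch.nn as nn

torch.manual_seed(0)
normal = torch.randn(512, 32)                      # stand-in for single-class training data
model = nn.Sequential(nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 32))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):                               # reconstruction training on one class only
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((model(normal) - normal) ** 2).mean(dim=1)
    threshold = err.quantile(0.95)                 # assumed decision threshold
    test = torch.randn(10, 32) * 3                 # out-of-distribution examples
    outlier = ((model(test) - test) ** 2).mean(dim=1) > threshold
    print(outlier)
```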
- Review of Methods for Handling Class-Imbalanced in Classification Problems [0.0]
In some cases, one class contains the majority of examples while the other, which is frequently the more important class, is nevertheless represented by a smaller proportion of examples.
The article examines the most widely used methods for addressing learning with a class imbalance, including data-level, algorithm-level, hybrid, cost-sensitive, and deep learning approaches.
arXiv Detail & Related papers (2022-11-10T10:07:10Z)
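For the data-level and algorithm-level families mentioned in the review above, a minimal sketch (using scikit-learn and NumPy, not the reviewed implementations) might look like this:

```python
# Hedged sketch of two common class-imbalance remedies: algorithm-level class
# weighting and data-level random oversampling of the minority class.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

# Algorithm-level: reweight the loss so the minority class counts more.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)

# Data-level: naive random oversampling of the minority class until balanced.
minority = np.where(y == 1)[0]
extra = np.random.default_rng(0).choice(minority, size=len(y) - 2 * len(minority), replace=True)
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])
print("class counts:", np.bincount(y), "->", np.bincount(y_bal))
```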
- Determination of class-specific variables in nonparametric multiple-class classification [0.0]
We propose a probability-based nonparametric multiple-class classification method and integrate it with the ability to identify high-impact variables for individual classes.
We report the properties of the proposed method and use both synthesized and real data sets to illustrate its behavior under different classification situations.
arXiv Detail & Related papers (2022-05-07T10:08:58Z)
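The sketch below illustrates only the general notion of class-specific, high-impact variables, using per-class coefficients of a one-vs-rest linear model; it is not the paper's probability-based nonparametric method.

```python
# Hedged illustration: per-class coefficients of a one-vs-rest model suggest
# which features matter most for each individual class.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = load_iris(return_X_y=True)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)

for c, est in enumerate(clf.estimators_):
    top = np.argsort(-np.abs(est.coef_[0]))[:2]    # two highest-impact features per class
    print(f"class {c}: most influential feature indices {top.tolist()}")
```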
- The Overlooked Classifier in Human-Object Interaction Recognition [82.20671129356037]
We encode the semantic correlation among classes into the classification head by initializing the weights with language embeddings of HOIs.
We propose a new loss named LSE-Sign to enhance multi-label learning on a long-tailed dataset.
Our simple yet effective method enables detection-free HOI classification, outperforming state-of-the-art methods that require object detection and human pose by a clear margin.
arXiv Detail & Related papers (2022-03-10T23:35:00Z)
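A hedged sketch of the semantic-initialization idea above: copy (assumed) language embeddings of the class names into a linear classification head. The LSE-Sign loss and the HOI detection pipeline are not reproduced here.

```python
# Hedged sketch: initialize a classification head from class-name embeddings.
import torch
import torch.nn as nn

num_classes, feat_dim = 10, 512
# Placeholder: in practice these would be language embeddings of the class names.
class_embeddings = torch.randn(num_classes, feat_dim)
class_embeddings = nn.functional.normalize(class_embeddings, dim=1)

head = nn.Linear(feat_dim, num_classes, bias=False)
with torch.no_grad():
    head.weight.copy_(class_embeddings)            # semantic initialization

features = torch.randn(4, feat_dim)                # image features from some backbone
logits = head(features)
print(logits.shape)                                # (4, num_classes)
```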
- Reducing Data Complexity using Autoencoders with Class-informed Loss Functions [14.541733758283355]
This paper proposes an autoencoder-based approach to complexity reduction, using class labels in order to inform the loss function.
A thorough experimentation across a collection of 27 datasets shows that class-informed autoencoders perform better than 4 other popular unsupervised feature extraction techniques.
arXiv Detail & Related papers (2021-11-11T10:57:19Z)
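The following is a minimal sketch of a class-informed autoencoder in the spirit of the paper above: a reconstruction loss plus an auxiliary label-aware term on the latent code. The exact loss used by the authors may differ; the 0.5 weighting factor is an assumption.

```python
# Hedged sketch: autoencoder whose loss is informed by the class labels.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 20)
y = torch.randint(0, 2, (256,))

enc = nn.Linear(20, 4)                             # encoder to a low-dimensional code
dec = nn.Linear(4, 20)                             # decoder back to input space
clf = nn.Linear(4, 2)                              # auxiliary head that injects label info
params = list(enc.parameters()) + list(dec.parameters()) + list(clf.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

for _ in range(100):
    z = enc(X)
    loss = nn.functional.mse_loss(dec(z), X) + 0.5 * nn.functional.cross_entropy(clf(z), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("trained; latent codes shaped", enc(X).shape)
```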
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
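One plausible reading of "randomly eliminating certain class information in each training iteration" is to drop a random subset of classes from the segmentation loss at each step; the sketch below shows only that generic mechanism, not the paper's exact scheme.

```python
# Hedged illustration: randomly suppress some classes in this iteration's loss.
import torch
import torch.nn as nn

num_classes = 5
logits = torch.randn(2, num_classes, 8, 8)         # fake segmentation logits (N, C, H, W)
target = torch.randint(0, num_classes, (2, 8, 8))  # fake per-pixel labels

drop = torch.rand(num_classes) < 0.3               # classes eliminated this iteration
masked_target = target.clone()
masked_target[drop[target]] = -100                 # mark their pixels as ignored
loss = nn.functional.cross_entropy(logits, masked_target, ignore_index=-100)
print(float(loss))
```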
- CvS: Classification via Segmentation For Small Datasets [52.821178654631254]
This paper presents CvS, a cost-effective classifier for small datasets that derives the classification labels from predicting the segmentation maps.
We evaluate the effectiveness of our framework on diverse problems, showing that CvS achieves much higher classification accuracy than previous methods when given only a handful of examples.
arXiv Detail & Related papers (2021-10-29T18:41:15Z)
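A minimal sketch of deriving an image-level label from a predicted segmentation map, which is the core CvS idea above; the segmentation network itself is omitted, and treating class 0 as background is an assumption.

```python
# Hedged sketch: classification label derived from a predicted segmentation map.
import torch

num_classes = 4                                    # class 0 treated as background (assumption)
seg_logits = torch.randn(1, num_classes, 32, 32)   # output of some segmentation model
pred_map = seg_logits.argmax(dim=1)                # per-pixel class predictions

counts = torch.bincount(pred_map.flatten(), minlength=num_classes)
counts[0] = 0                                      # ignore background when voting
image_label = int(counts.argmax())
print("derived image-level label:", image_label)
```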
- Multiple Classifiers Based Maximum Classifier Discrepancy for Unsupervised Domain Adaptation [25.114533037440896]
We propose to extend the structure of two classifiers to multiple classifiers to further boost its performance.
We demonstrate that, on average, adopting the structure of three classifiers yields the best performance as a trade-off between accuracy and efficiency.
arXiv Detail & Related papers (2021-08-02T03:00:13Z)
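The sketch below computes an average pairwise discrepancy across three classifier heads on unlabeled target features, the quantity that multi-classifier MCD-style methods alternately maximize and minimize; the full adversarial training loop is omitted.

```python
# Hedged sketch: pairwise discrepancy among three classifier heads on target data.
import itertools
import torch
import torch.nn as nn

feat = torch.randn(16, 64)                         # features of target-domain samples
heads = [nn.Linear(64, 10) for _ in range(3)]      # three classifier heads
probs = [torch.softmax(h(feat), dim=1) for h in heads]

# Average pairwise L1 discrepancy; in MCD-style training the classifiers
# maximize this quantity while the feature extractor minimizes it.
discrepancy = sum((p - q).abs().mean() for p, q in itertools.combinations(probs, 2)) / 3
print(float(discrepancy))
```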
- Class Introspection: A Novel Technique for Detecting Unlabeled Subclasses by Leveraging Classifier Explainability Methods [0.0]
Detecting latent structure is a crucial step in performing analysis of a dataset.
By leveraging instance explanation methods, an existing classifier can be extended to detect latent classes.
This paper also contains a pipeline for analyzing classifiers automatically, and a web application for interactively exploring the results from this technique.
arXiv Detail & Related papers (2021-07-04T14:58:29Z)
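As a rough sketch of the class-introspection idea above, per-instance explanations of one predicted class are clustered to surface candidate latent subclasses; linear "coefficient times input" contributions stand in here for a proper instance explanation method such as SHAP or LIME.

```python
# Hedged sketch: cluster per-instance explanations to look for latent subclasses.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=1000, n_features=10, n_informative=6, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

members = X[clf.predict(X) == 1]                   # instances assigned to class 1
explanations = members * clf.coef_[0]              # crude per-feature contribution vectors
sub = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(explanations)
print("candidate subclass sizes:", np.bincount(sub))
```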
- Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification [94.55805516167369]
We propose a new approach for binary classification from $m$ U-sets for $m \ge 2$.
Our key idea is to consider an auxiliary classification task called surrogate set classification (SSC).
arXiv Detail & Related papers (2021-02-01T07:36:38Z)
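A minimal sketch of the surrogate set classification (SSC) idea above: pool the $m$ unlabeled sets and train a classifier to predict which set each sample came from. The mixing proportions are made up for illustration, and the reduction from the surrogate task back to the final binary classifier is omitted.

```python
# Hedged sketch: the surrogate task of predicting which unlabeled set a sample
# came from, using set indices as surrogate labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Each U-set mixes the two true classes with a different (unknown) proportion p.
sets = [np.vstack([rng.normal(+1, 1, (int(300 * p), 2)),
                   rng.normal(-1, 1, (int(300 * (1 - p)), 2))])
        for p in (0.2, 0.5, 0.8)]                  # m = 3 unlabeled sets, m >= 2
X = np.vstack(sets)
s = np.concatenate([np.full(len(Si), i) for i, Si in enumerate(sets)])  # surrogate labels

ssc = LogisticRegression(max_iter=1000).fit(X, s)  # multiclass surrogate set classifier
print("surrogate-set training accuracy:", round(ssc.score(X, s), 3))
```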
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first modern, precise analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.