Gradient-based Quadratic Multiform Separation
- URL: http://arxiv.org/abs/2110.13006v2
- Date: Tue, 26 Oct 2021 11:48:01 GMT
- Title: Gradient-based Quadratic Multiform Separation
- Authors: Wen-Teng Chang
- Abstract summary: We focus on Quadratic Multiform Separation (QMS), a classification method recently proposed by Michael Fan et al.
Inspired by QMS, we propose utilizing a gradient-based optimization method, Adam, to obtain a classifier that minimizes the QMS-specific loss function.
Our empirical result shows that QMS performs as well as most classification methods in terms of accuracy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Classification, a supervised learning task, is an important topic in
machine learning. It aims at categorizing a set of data into classes. Several
classification methods are in common use today, such as k-nearest neighbors,
random forest, and support vector machine. Each has its own pros and cons, and
none is universally superior across all kinds of problems. In
this thesis, we focus on Quadratic Multiform Separation (QMS), a classification
method recently proposed by Michael Fan et al. (2019). Its fresh concept, rich
mathematical structure, and innovative definition of loss function set it apart
from the existing classification methods. Inspired by QMS, we propose utilizing
a gradient-based optimization method, Adam, to obtain a classifier that
minimizes the QMS-specific loss function. In addition, we provide suggestions
regarding model tuning through explorations of the relationships between
hyperparameters and accuracy. Our empirical results show that QMS performs as
well as most classification methods in terms of accuracy. Its performance is
nearly comparable to that of gradient boosting algorithms, which have won
numerous machine learning competitions.
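To make the training procedure concrete, below is a minimal PyTorch sketch of the idea the abstract describes: each class gets a quadratic "member function", a sample is assigned to the class whose member value is smallest, and Adam minimizes a loss that favors the correct class. The parameterization (QMSClassifier, rank) and the hinge-style surrogate (qms_style_loss, margin) are hypothetical stand-ins for illustration, not the QMS member functions or the QMS-specific loss defined in the thesis.

```python
import torch
import torch.nn as nn

class QMSClassifier(nn.Module):
    """One quadratic 'member function' q_j(x) = ||A_j x + c_j||^2 per class;
    a sample is assigned to the class whose member value is smallest.
    (Assumed parameterization, for illustration only.)"""
    def __init__(self, in_dim: int, n_classes: int, rank: int = 16):
        super().__init__()
        # Each A_j is a (rank x in_dim) matrix, so q_j is a PSD quadratic form.
        self.A = nn.Parameter(0.1 * torch.randn(n_classes, rank, in_dim))
        self.c = nn.Parameter(torch.zeros(n_classes, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) -> member values q: (batch, n_classes)
        z = torch.einsum('krd,bd->bkr', self.A, x) + self.c
        return (z ** 2).sum(dim=-1)

def qms_style_loss(q: torch.Tensor, y: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    # Hinge-style surrogate (an assumption, standing in for the thesis's
    # QMS-specific loss): push the correct class's member value below the
    # smallest rival value by at least `margin`.
    q_true = q.gather(1, y.unsqueeze(1))                  # (batch, 1)
    rivals = q.scatter(1, y.unsqueeze(1), float('inf'))   # mask out correct class
    q_rival = rivals.min(dim=1, keepdim=True).values      # (batch, 1)
    return torch.relu(q_true - q_rival + margin).mean()

# Synthetic data for a self-contained run.
torch.manual_seed(0)
X, y = torch.randn(256, 20), torch.randint(0, 3, (256,))

model = QMSClassifier(in_dim=20, n_classes=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)   # Adam, as in the abstract
for step in range(200):
    loss = qms_style_loss(model(X), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

preds = model(X).argmin(dim=1)   # smallest member value wins
acc = (preds == y).float().mean().item()
print(f"final loss {loss.item():.4f}, train accuracy {acc:.3f}")
```

Since the decision rule is an argmin over the member values, the surrogate only needs to drive the correct class's quadratic value below every rival's, which is what the margin term enforces.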
Related papers
- MISS: Multiclass Interpretable Scoring Systems [13.902264070785986]
We present a machine-learning approach for constructing Multiclass Interpretable Scoring Systems (MISS).
MISS is a fully data-driven methodology for single, sparse, and user-friendly scoring systems for multiclass classification problems.
Results indicate that our approach is competitive with other machine learning models in terms of classification performance metrics and provides well-calibrated class probabilities.
arXiv Detail & Related papers (2024-01-10T10:57:12Z) - Multi-class Support Vector Machine with Maximizing Minimum Margin [67.51047882637688]
Support Vector Machine (SVM) is a prominent machine learning technique widely applied in pattern recognition tasks.
We propose a novel method for multi-class SVM that incorporates pairwise class loss considerations and maximizes the minimum margin.
Empirical evaluations demonstrate the effectiveness and superiority of our proposed method over existing multi-classification methods.
arXiv Detail & Related papers (2023-12-11T18:09:55Z) - Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
CIL tends to catastrophically forget the characteristics of earlier classes, and its performance drastically degrades.
We provide a rigorous and unified evaluation of 17 methods in benchmark image classification tasks to find out the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z) - Online Hyperparameter Optimization for Class-Incremental Learning [99.70569355681174]
Class-incremental learning (CIL) aims to train a classification model while the number of classes increases phase-by-phase.
An inherent challenge of CIL is the stability-plasticity tradeoff: CIL models should stay stable to retain old knowledge and stay plastic to absorb new knowledge.
We propose an online learning method that can adaptively optimize the tradeoff without knowing the setting a priori.
arXiv Detail & Related papers (2023-01-11T17:58:51Z) - Hierarchical classification at multiple operating points [1.520694326234112]
We present an efficient algorithm to produce operating characteristic curves for any method that assigns a score to every class in the hierarchy.
We propose two novel loss functions and show that a soft variant of the structured hinge loss is able to significantly outperform the flat baseline.
arXiv Detail & Related papers (2022-10-19T23:36:16Z) - Rank4Class: A Ranking Formulation for Multiclass Classification [26.47229268790206]
Multiclass classification (MCC) is a fundamental machine learning problem.
We show that it is easy to boost MCC performance with a novel formulation through the lens of ranking.
arXiv Detail & Related papers (2021-12-17T19:22:37Z) - Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updating them based on new class data, they suffer from catastrophic forgetting: the model cannot clearly distinguish old class data from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z) - Learning with Multiclass AUC: Theory and Algorithms [141.63211412386283]
Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we start an early trial to consider the problem of learning multiclass scoring functions via optimizing multiclass AUC metrics.
arXiv Detail & Related papers (2021-07-28T05:18:10Z) - Theoretical Insights Into Multiclass Classification: A High-dimensional
Asymptotic View [82.80085730891126]
We provide the first modern, precise analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z) - A novel embedded min-max approach for feature selection in nonlinear
support vector machine classification [0.0]
We propose an embedded feature selection method based on a min-max optimization problem.
By leveraging duality theory, we equivalently reformulate the min-max problem and solve it directly.
The efficiency and usefulness of our approach are tested on several benchmark data sets.
arXiv Detail & Related papers (2020-04-21T09:40:38Z)