Metrics for Multi-Class Classification: an Overview
- URL: http://arxiv.org/abs/2008.05756v1
- Date: Thu, 13 Aug 2020 08:41:44 GMT
- Title: Metrics for Multi-Class Classification: an Overview
- Authors: Margherita Grandini, Enrico Bagli, Giorgio Visani
- Abstract summary: Classification tasks involving more than two classes are known as "multi-class classification".
Performance indicators are very useful when the aim is to evaluate and compare different classification models or machine learning techniques.
- Score: 0.9176056742068814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Classification tasks in machine learning involving more than two classes are
known by the name of "multi-class classification". Performance indicators are
very useful when the aim is to evaluate and compare different classification
models or machine learning techniques. Many metrics come in handy to test the
ability of a multi-class classifier. Those metrics turn out to be useful at
different stages of the development process, e.g. comparing the performance of
two different models or analysing the behaviour of the same model when tuning
different parameters. In this white paper we review a list of the most
promising multi-class metrics, highlight their advantages and disadvantages,
and show their possible usages during the development of a classification
model.
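Two of the averaging schemes commonly covered in such metric overviews can be illustrated with a minimal, self-contained sketch (the labels below are illustrative toy data, not from the paper): macro-averaging computes F1 per class and takes the unweighted mean, so rare classes count as much as frequent ones, while micro-averaging pools true/false positives and false negatives across all classes, which for single-label multi-class problems coincides with plain accuracy.

```python
# Minimal sketch (not from the paper): macro- vs. micro-averaged F1
# for a single-label multi-class problem, using only the standard library.

def counts(y_true, y_pred, cls):
    """True positives, false positives, false negatives for one class."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    return tp, fp, fn

def f1(tp, fp, fn):
    """F1 = 2*TP / (2*TP + FP + FN), i.e. the harmonic mean of P and R."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 -- treats every class equally."""
    classes = sorted(set(y_true))
    return sum(f1(*counts(y_true, y_pred, c)) for c in classes) / len(classes)

def micro_f1(y_true, y_pred):
    """F1 over pooled counts -- equals accuracy in single-label settings."""
    classes = sorted(set(y_true) | set(y_pred))
    tp = fp = fn = 0
    for c in classes:
        t, f, n = counts(y_true, y_pred, c)
        tp, fp, fn = tp + t, fp + f, fn + n
    return f1(tp, fp, fn)

# Imbalanced toy example: class 1 is rare and poorly predicted.
y_true = [0, 0, 0, 0, 1, 1, 2, 2, 2, 2]
y_pred = [0, 0, 1, 0, 1, 2, 2, 2, 0, 2]
print(macro_f1(y_true, y_pred))  # 0.666... -- pulled down by weak class 1
print(micro_f1(y_true, y_pred))  # 0.7     -- matches overall accuracy
```

The gap between the two scores (0.667 vs. 0.7) is exactly the kind of behaviour such overviews discuss: macro-averaging exposes poor performance on the minority class, while micro-averaging lets the frequent classes dominate.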
Related papers
- Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
CIL models tend to catastrophically forget the characteristics of former classes, and their performance drastically degrades.
We provide a rigorous and unified evaluation of 17 methods in benchmark image classification tasks to find out the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z)
- Exploiting Category Names for Few-Shot Classification with Vision-Language Models [78.51975804319149]
Vision-language foundation models pretrained on large-scale data provide a powerful tool for many visual understanding tasks.
This paper shows that we can significantly improve the performance of few-shot classification by using the category names to initialize the classification head.
arXiv Detail & Related papers (2022-11-29T21:08:46Z)
- Not All Instances Contribute Equally: Instance-adaptive Class Representation Learning for Few-Shot Visual Recognition [94.04041301504567]
Few-shot visual recognition refers to recognizing novel visual concepts from a few labeled instances.
We propose a novel metric-based meta-learning framework termed instance-adaptive class representation learning network (ICRL-Net) for few-shot visual recognition.
arXiv Detail & Related papers (2022-09-07T10:00:18Z)
- A Similarity-based Framework for Classification Task [21.182406977328267]
Similarity-based methods give rise to a new class of methods for multi-label learning and also achieve promising performance.
We unite similarity-based learning and generalized linear models to achieve the best of both worlds.
arXiv Detail & Related papers (2022-03-05T06:39:50Z)
- Learning-From-Disagreement: A Model Comparison and Visual Analytics Framework [21.055845469999532]
We propose a learning-from-disagreement framework to visually compare two classification models.
Specifically, we train a discriminator to learn from the disagreed instances.
We interpret the trained discriminator with the SHAP values of different meta-features.
arXiv Detail & Related papers (2022-01-19T20:15:35Z)
- APANet: Adaptive Prototypes Alignment Network for Few-Shot Semantic Segmentation [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a given query image with only a few labeled support images.
Most advanced solutions exploit a metric learning framework that performs segmentation through matching each query feature to a learned class-specific prototype.
We present an adaptive prototype representation by introducing class-specific and class-agnostic prototypes.
arXiv Detail & Related papers (2021-11-24T04:38:37Z)
- Interpretation of multi-label classification models using shapley values [0.5482532589225552]
This work extends the explanation of multi-label classification tasks by using the SHAP methodology.
The experiment demonstrates a comprehensive comparison of different algorithms on well-known multi-label datasets.
arXiv Detail & Related papers (2021-04-21T12:51:12Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- Meta Learning for Few-Shot One-class Classification [0.0]
We formulate the learning of meaningful features for one-class classification as a meta-learning problem.
To learn these representations, we require only multiclass data from similar tasks.
We validate our approach by adapting few-shot classification datasets to the few-shot one-class classification scenario.
arXiv Detail & Related papers (2020-09-11T11:35:28Z)
- Adversarial Multi-Binary Neural Network for Multi-class Classification [19.298875915675502]
We use a multi-task framework to address multi-class classification.
We employ adversarial training to distinguish the class-specific features and the class-agnostic features.
arXiv Detail & Related papers (2020-03-25T02:19:17Z)
- Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks [55.66438591090072]
We develop a better understanding of the underlying mechanics of meta-learning and the difference between models trained using meta-learning and models trained classically.
We develop a regularizer which boosts the performance of standard training routines for few-shot classification.
arXiv Detail & Related papers (2020-02-17T03:18:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.