Learning-From-Disagreement: A Model Comparison and Visual Analytics Framework
- URL: http://arxiv.org/abs/2201.07849v1
- Date: Wed, 19 Jan 2022 20:15:35 GMT
- Title: Learning-From-Disagreement: A Model Comparison and Visual Analytics Framework
- Authors: Junpeng Wang, Liang Wang, Yan Zheng, Chin-Chia Michael Yeh, Shubham Jain, Wei Zhang
- Abstract summary: We propose a learning-from-disagreement framework to visually compare two classification models.
Specifically, we train a discriminator to learn from the disagreed instances.
We interpret the trained discriminator with the SHAP values of different meta-features.
- Score: 21.055845469999532
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the fast-growing number of classification models being produced every
day, numerous model interpretation and comparison solutions have also been
introduced. For example, LIME and SHAP can interpret what input features
contribute more to a classifier's output predictions. Different numerical
metrics (e.g., accuracy) can be used to easily compare two classifiers.
However, few works can interpret the contribution of a data feature to a
classifier in comparison with its contribution to another classifier. This
comparative interpretation can help to disclose the fundamental difference
between two classifiers, select classifiers under different feature conditions,
and better ensemble the two classifiers. To accomplish this, we propose a
learning-from-disagreement (LFD) framework to visually compare two
classification models. Specifically, LFD identifies data instances with
disagreed predictions from two compared classifiers and trains a discriminator
to learn from the disagreed instances. As the two classifiers' training
features may not be available, we train the discriminator on a set of
meta-features, proposed based on hypotheses about the classifiers, to probe
their behaviors. Interpreting the trained discriminator with the SHAP values of
different meta-features, we provide actionable insights into the compared
classifiers. Also, we introduce multiple metrics to profile the importance of
meta-features from different perspectives. With these metrics, one can easily
identify meta-features with the most complementary behaviors in two
classifiers, and use them to better ensemble the classifiers. We focus on
binary classification models in the financial services and advertising industries
to demonstrate the efficacy of our proposed framework and visualizations.
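To make the pipeline concrete, here is a minimal, hypothetical Python sketch of the LFD idea, assuming scikit-learn-style classifiers and the shap package; the `compute_meta_features` placeholder and the labeling of disagreed instances are illustrative assumptions, not the paper's exact meta-features or training setup.

```python
# Minimal, hypothetical sketch of learning-from-disagreement (LFD); not the
# paper's exact implementation. Assumes scikit-learn-style classifiers and the
# `shap` package; compute_meta_features() is a stand-in for the paper's
# hypothesis-driven meta-features.
import shap
from sklearn.ensemble import GradientBoostingClassifier


def compute_meta_features(X):
    """Placeholder for meta-features that probe the compared classifiers' behaviors."""
    return X  # replace with the actual meta-feature construction


def lfd_discriminator(clf_a, clf_b, X):
    """Train a discriminator on instances where the two classifiers disagree."""
    pred_a = clf_a.predict(X)
    pred_b = clf_b.predict(X)
    disagreed = pred_a != pred_b                      # disagreed predictions
    X_meta = compute_meta_features(X[disagreed])
    # One plausible labeling: which classifier predicted the positive class on
    # each disagreed instance (for binary classifiers the two always differ here).
    y_disc = (pred_a[disagreed] == 1).astype(int)
    disc = GradientBoostingClassifier().fit(X_meta, y_disc)
    # Interpret the trained discriminator with SHAP values of the meta-features.
    shap_values = shap.TreeExplainer(disc).shap_values(X_meta)
    return disc, shap_values
```

The per-meta-feature SHAP values returned here are what the framework aggregates into its importance metrics and visualizations; in practice they could also guide which meta-features to use when ensembling the two classifiers, as discussed above.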
Related papers
- Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to approaches of learning using data from a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
arXiv Detail & Related papers (2023-09-25T12:31:18Z)
- Learning Context-aware Classifier for Semantic Segmentation [88.88198210948426]
In this paper, contextual hints are exploited by learning a context-aware classifier.
Our method is model-agnostic and can be easily applied to generic segmentation models.
With only negligible additional parameters and +2% inference time, a decent performance gain is achieved on both small and large models.
arXiv Detail & Related papers (2023-03-21T07:00:35Z)
- APANet: Adaptive Prototypes Alignment Network for Few-Shot Semantic Segmentation [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a given query image with only a few labeled support images.
Most advanced solutions exploit a metric learning framework that performs segmentation by matching each query feature to a learned class-specific prototype.
We present an adaptive prototype representation by introducing class-specific and class-agnostic prototypes.
arXiv Detail & Related papers (2021-11-24T04:38:37Z)
- Multiple Classifiers Based Maximum Classifier Discrepancy for Unsupervised Domain Adaptation [25.114533037440896]
We propose to extend the two-classifier structure to multiple classifiers to further boost performance.
We demonstrate that, on average, a three-classifier structure yields the best performance as a trade-off between accuracy and efficiency.
arXiv Detail & Related papers (2021-08-02T03:00:13Z)
- Visualizing Classifier Adjacency Relations: A Case Study in Speaker Verification and Voice Anti-Spoofing [72.4445825335561]
We propose a simple method to derive a 2D representation from detection scores produced by an arbitrary set of binary classifiers.
Based upon rank correlations, our method facilitates a visual comparison of classifiers with arbitrary scores.
While the approach is fully versatile and can be applied to any detection task, we demonstrate the method using scores produced by automatic speaker verification and voice anti-spoofing systems.
arXiv Detail & Related papers (2021-06-11T13:03:33Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on the learned representations.
In experiments, we demonstrate state-of-the-art performance on visual-domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- Meta Learning for Few-Shot One-class Classification [0.0]
We formulate the learning of meaningful features for one-class classification as a meta-learning problem.
To learn these representations, we require only multiclass data from similar tasks.
We validate our approach by adapting few-shot classification datasets to the few-shot one-class classification scenario.
arXiv Detail & Related papers (2020-09-11T11:35:28Z)
- Metrics for Multi-Class Classification: an Overview [0.9176056742068814]
Classification tasks involving more than two classes are known as "multi-class classification".
Performance indicators are very useful when the aim is to evaluate and compare different classification models or machine learning techniques.
arXiv Detail & Related papers (2020-08-13T08:41:44Z)
- Diversity-Aware Weighted Majority Vote Classifier for Imbalanced Data [1.2944868613449219]
We propose a diversity-aware ensemble learning based algorithm, DAMVI, to deal with imbalanced binary classification tasks.
We show the efficiency of the proposed approach with respect to state-of-the-art models on predictive maintenance, credit card fraud detection, webpage classification, and medical applications.
arXiv Detail & Related papers (2020-04-16T11:27:50Z)
- A Systematic Evaluation: Fine-Grained CNN vs. Traditional CNN Classifiers [54.996358399108566]
We investigate the performance of landmark general CNN classifiers, which have presented top-notch results on large-scale classification datasets.
We compare them against state-of-the-art fine-grained classifiers.
We present an extensive evaluation on six datasets to determine whether the fine-grained classifiers are able to elevate the baseline in their experiments.
arXiv Detail & Related papers (2020-03-24T23:49:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.