Linear Classifier Combination via Multiple Potential Functions
- URL: http://arxiv.org/abs/2010.00844v1
- Date: Fri, 2 Oct 2020 08:11:51 GMT
- Title: Linear Classifier Combination via Multiple Potential Functions
- Authors: Pawel Trajdos, Robert Burduk
- Abstract summary: We propose a novel concept of calculating a scoring function based on the distance of the object from the decision boundary and its distance to the class centroid.
An important property is that the proposed score function has the same nature for all linear base classifiers.
- Score: 0.6091702876917279
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A vital aspect of the classification-based model construction process is the
calibration of the scoring function. One weakness of the calibration process is
that it does not take into account information about the relative positions of
the recognized objects in the feature space. To alleviate this limitation, in
this paper we propose a novel concept of calculating a scoring function based
on the distance of the object from the decision boundary and its distance to
the class centroid. An important property is that the proposed score function
has the same nature for all linear base classifiers, which means that the
outputs of these classifiers are equally represented and have the same meaning.
The proposed approach is compared with other ensemble algorithms, and
experiments on multiple Keel datasets demonstrate the effectiveness of our
method. The experimental results are discussed using multiple classification
performance measures and statistical analysis.
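To make the idea concrete, the sketch below shows one way such a score could be computed for binary linear classifiers. The Gaussian centroid potential, the sigmoid weighting of the boundary distance, and all function names are illustrative assumptions, not the paper's exact formulation, which defines the score via multiple potential functions in the full text.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression, RidgeClassifier

def boundary_distance(clf, X):
    """Signed Euclidean distance of each object to the hyperplane w.x + b = 0."""
    w = clf.coef_.ravel()
    b = np.ravel(clf.intercept_)[0]
    return (X @ w + b) / np.linalg.norm(w)

def class_potentials(X, centroids, gamma=0.5):
    """Gaussian potential of each object with respect to each class centroid."""
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)  # shape: (n_samples, n_classes)

def soft_scores(clf, X, centroids, gamma=0.5):
    """Normalised two-class scores with the same form for any linear model."""
    d = boundary_distance(clf, X)
    side = np.column_stack([1.0 / (1.0 + np.exp(d)),    # support for class 0
                            1.0 / (1.0 + np.exp(-d))])  # support for class 1
    s = side * class_potentials(X, centroids, gamma)
    return s / s.sum(axis=1, keepdims=True)

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

# Because every base classifier is mapped onto the same score scale,
# a simple average is a meaningful linear combination of the ensemble.
base = [LogisticRegression().fit(X, y), RidgeClassifier().fit(X, y)]
scores = np.mean([soft_scores(m, X, centroids) for m in base], axis=0)
print("ensemble accuracy:", (scores.argmax(axis=1) == y).mean())
```

The potential term down-weights objects that lie far from a class's typical region, which is exactly the positional information that plain score calibration ignores.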
Related papers
- Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to approaches that learn using data from a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
arXiv Detail & Related papers (2023-09-25T12:31:18Z)
- Supervised Feature Compression based on Counterfactual Analysis [3.2458225810390284]
This work aims to leverage Counterfactual Explanations to detect the important decision boundaries of a pre-trained black-box model.
Using the discretized dataset, an optimal Decision Tree can be trained that resembles the black-box model, but that is interpretable and compact.
arXiv Detail & Related papers (2022-11-17T21:16:14Z)
- Compactness Score: A Fast Filter Method for Unsupervised Feature Selection [66.84571085643928]
We propose a fast unsupervised feature selection method, named Compactness Score (CSUFS), to select desired features.
Experiments indicate that the proposed algorithm is more accurate and efficient than existing algorithms.
arXiv Detail & Related papers (2022-01-31T13:01:37Z)
- Exploring Category-correlated Feature for Few-shot Image Classification [27.13708881431794]
We present a simple yet effective feature rectification method that exploits the category correlation between novel and base classes as prior knowledge.
The proposed approach consistently obtains considerable performance gains on three widely used benchmarks.
arXiv Detail & Related papers (2021-12-14T08:25:24Z)
- Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z)
- How Nonconformity Functions and Difficulty of Datasets Impact the Efficiency of Conformal Classifiers [0.1611401281366893]
In conformal classification, a system can output multiple class labels instead of one.
For a neural-network-based conformal classifier, the inverse-probability nonconformity function minimizes the average number of predicted labels (a sketch of this function appears after this list).
We propose a method that successfully combines the properties of these two nonconformity functions.
arXiv Detail & Related papers (2021-08-12T11:50:12Z)
- Eigen Analysis of Self-Attention and its Reconstruction from Partial Computation [58.80806716024701]
We study the global structure of attention scores computed using dot-product-based self-attention.
We find that most of the variation among attention scores lies in a low-dimensional eigenspace.
We propose to compute scores only for a partial subset of token pairs and use them to estimate scores for the remaining pairs (see the reconstruction sketch after this list).
arXiv Detail & Related papers (2021-06-16T14:38:42Z)
- The role of feature space in atomistic learning [62.997667081978825]
Physically-inspired descriptors play a key role in the application of machine-learning techniques to atomistic simulations.
We introduce a framework to compare different sets of descriptors, and different ways of transforming them by means of metrics and kernels.
We compare representations built in terms of n-body correlations of the atom density, quantitatively assessing the information loss associated with the use of low-order features.
arXiv Detail & Related papers (2020-09-06T14:12:09Z)
- Machine Learning with the Sugeno Integral: The Case of Binary Classification [5.806154304561782]
We elaborate on the use of the Sugeno integral in the context of machine learning.
We propose a method for binary classification, in which the Sugeno integral is used as an aggregation function.
Due to the specific nature of the Sugeno integral, which aggregates using only minimum and maximum operations, this approach is especially suitable for learning from ordinal data (see the aggregation sketch after this list).
arXiv Detail & Related papers (2020-07-06T20:22:01Z)
- A Few-Shot Sequential Approach for Object Counting [63.82757025821265]
We introduce a class attention mechanism that sequentially attends to objects in the image and extracts their relevant features.
The proposed technique is trained on point-level annotations and uses a novel loss function that disentangles class-dependent and class-agnostic aspects of the model.
We present our results on a variety of object-counting/detection datasets, including FSOD and MS COCO.
arXiv Detail & Related papers (2020-07-03T18:23:39Z)
- Deep Inverse Feature Learning: A Representation Learning of Error [6.5358895450258325]
This paper introduces a novel perspective on error in machine learning and proposes inverse feature learning (IFL) as a representation learning approach.
The IFL method uses a deep clustering approach to obtain a qualitative representation of the error as features.
The experimental results show that the proposed method leads to promising results in classification and especially in clustering.
arXiv Detail & Related papers (2020-03-09T17:45:44Z)
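As a companion to the conformal-classification entry above, here is a minimal sketch of inductive conformal prediction with the inverse-probability nonconformity function alpha = 1 - p(y|x). The underlying model, data, and significance level are assumptions for illustration, not details taken from that paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)
X_fit, X_cal, y_fit, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)

# One nonconformity score per calibration example, for its true label.
cal_alpha = 1.0 - model.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]

def prediction_set(x, epsilon=0.1):
    """All labels whose conformal p-value exceeds the significance level."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    labels = []
    for c, p in enumerate(probs):
        alpha = 1.0 - p
        p_value = (np.sum(cal_alpha >= alpha) + 1) / (len(cal_alpha) + 1)
        if p_value > epsilon:
            labels.append(c)
    return labels

print(prediction_set(X[0]))  # prediction sets may contain several labels
```

At significance level epsilon, the prediction sets cover the true label with probability about 1 - epsilon; the choice of nonconformity function controls how large those sets are on average.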
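For the eigen-analysis entry above, the following toy sketch shows the reconstruction-from-partial-computation idea in its simplest form: a Nystrom-style completion of the attention score matrix from a subset of computed rows and columns. The Nystrom construction and all dimensions are assumptions, not that paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 128, 8, 16                        # tokens, head dim, landmark tokens
Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))

S_full = Q @ K.T / np.sqrt(d)               # full n x n scores (reference only)

idx = rng.choice(n, size=m, replace=False)  # the token pairs we actually compute
C = Q @ K[idx].T / np.sqrt(d)               # all queries vs. m sampled keys
R = Q[idx] @ K.T / np.sqrt(d)               # m sampled queries vs. all keys
W = C[idx]                                  # shared m x m block

# Estimate the scores that were never computed from the partial computation.
S_hat = C @ np.linalg.pinv(W) @ R

err = np.linalg.norm(S_hat - S_full) / np.linalg.norm(S_full)
print(f"relative Frobenius error: {err:.2e}")
```

Here the score matrix has rank at most d, so m >= d landmarks recover it almost exactly; real attention matrices are only approximately low-rank, which is the structure that paper quantifies.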
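Finally, for the Sugeno-integral entry above, a short sketch of the integral itself as an aggregation function for binary classification. The cardinality-based fuzzy measure and the 0.5 decision threshold are illustrative assumptions, not details from that paper.

```python
import numpy as np

def sugeno_integral(values, measure):
    """Sugeno integral of criterion values in [0, 1] w.r.t. a fuzzy measure.

    `measure` maps a set of criterion indices to [0, 1] and must be monotone,
    with measure(set()) == 0 and measure(all indices) == 1.
    """
    order = np.argsort(values)              # ascending criterion values
    best = 0.0
    for rank, i in enumerate(order):
        upper = set(order[rank:].tolist())  # criteria scoring >= values[i]
        best = max(best, min(values[i], measure(upper)))
    return best

# Hypothetical example: aggregate three normalised feature scores of one
# object with a cardinality-based measure, then threshold at 0.5.
n = 3
measure = lambda A: (len(A) / n) ** 0.5
scores = np.array([0.9, 0.4, 0.7])
agg = sugeno_integral(scores, measure)
print(f"Sugeno score {agg:.2f} -> class {int(agg >= 0.5)}")
```

Because the computation uses only minimum and maximum, the result is invariant under monotone rescaling of the inputs, which is why the approach suits ordinal data.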
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.