Generating Multi-Center Classifier via Conditional Gaussian Distribution
- URL: http://arxiv.org/abs/2401.15942v1
- Date: Mon, 29 Jan 2024 08:06:33 GMT
- Title: Generating Multi-Center Classifier via Conditional Gaussian Distribution
- Authors: Zhemin Zhang, Xun Gong
- Abstract summary: In real-world data, one class can contain several local clusters, e.g., birds of different poses.
We create a conditional Gaussian distribution for each class and then sample multiple sub-centers.
This approach allows the model to capture intra-class local structures more efficiently.
- Score: 7.77615886942767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The linear classifier is widely used in various image classification tasks.
It works by optimizing the distance between a sample and its corresponding
class center. However, in real-world data, one class can contain several local
clusters, e.g., birds of different poses. To address this complexity, we
propose a novel multi-center classifier. Different from the vanilla linear
classifier, our proposal is established on the assumption that the deep
features of the training set follow a Gaussian Mixture distribution.
Specifically, we create a conditional Gaussian distribution for each class and
then sample multiple sub-centers from that distribution to extend the linear
classifier. This approach allows the model to capture intra-class local
structures more efficiently. In addition, at test time we set the mean of the
conditional Gaussian distribution as the class center of the linear classifier
and follow the vanilla linear classifier outputs, thus requiring no additional
parameters or computational overhead. Extensive experiments on image
classification show that the proposed multi-center classifier is a powerful
alternative to widely used linear classifiers. Code available at
https://github.com/ZheminZhang1/MultiCenter-Classifier.
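The authors' released code is at the link above; as a rough illustration of the mechanism the abstract describes, the sketch below samples K sub-centers per class from a learnable conditional Gaussian during training and falls back to the Gaussian mean (a plain linear classifier) at test time. The value of K, the initialization scales, and the max-aggregation over sub-center logits are assumptions not fixed by the abstract.

```python
# Minimal PyTorch sketch of the multi-center classifier described above.
# Assumptions not fixed by the abstract: K sub-centers per class, a learnable
# log-variance per class, and max-aggregation over sub-center logits.
import torch
import torch.nn as nn

class MultiCenterClassifier(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, k: int = 4):
        super().__init__()
        self.k = k
        self.mu = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.02)
        self.log_var = nn.Parameter(torch.zeros(num_classes, feat_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Sample K sub-centers per class via the reparameterization trick.
            std = (0.5 * self.log_var).exp()                    # (C, D)
            eps = torch.randn(self.k, *self.mu.shape, device=x.device)
            centers = self.mu + eps * std                       # (K, C, D)
            logits = torch.einsum("bd,kcd->bkc", x, centers)    # (B, K, C)
            return logits.max(dim=1).values                     # aggregate over sub-centers
        # At test time the class center is simply the Gaussian mean,
        # so inference matches a vanilla linear classifier.
        return x @ self.mu.t()
```

Because inference uses only the per-class means, the test-time cost matches a vanilla linear classifier, consistent with the abstract's claim of no additional parameters or overhead.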
Related papers
- Benign Overfitting and the Geometry of the Ridge Regression Solution in Binary Classification [75.01389991485098]
We show that ridge regression has qualitatively different behavior depending on the scale of the cluster mean vector.
In regimes where the scale is very large, the conditions that allow for benign overfitting turn out to be the same as those for the regression task.
arXiv Detail & Related papers (2025-03-11T01:45:42Z)
- Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to approaches that learn using data from a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
arXiv Detail & Related papers (2023-09-25T12:31:18Z)
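As a hedged sketch of the general reconstruction-based recipe behind methods like this one: train an autoencoder on the single available class and flag inputs whose reconstruction error is large. The architecture and scoring rule below are illustrative assumptions, not the paper's exact design.

```python
# Illustrative reconstruction-based one-class classifier: train a small
# convolutional autoencoder on the target class, then score inputs by
# reconstruction error (higher = more anomalous).
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def one_class_scores(model: ConvAE, x: torch.Tensor) -> torch.Tensor:
    # Mean squared reconstruction error per sample.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=(1, 2, 3))
```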
- An Upper Bound for the Distribution Overlap Index and Its Applications [18.481370450591317]
This paper proposes an easy-to-compute upper bound for the overlap index between two probability distributions.
The proposed bound shows its value in one-class classification and domain shift analysis.
Our work shows significant promise toward broadening the applications of overlap-based metrics.
arXiv Detail & Related papers (2022-12-16T20:02:03Z)
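For context, the overlap index between densities p and q is the integral of their pointwise minimum. The snippet below only illustrates the quantity numerically for two Gaussians; the paper's actual upper bound is not reproduced here.

```python
# Numerical illustration of the distribution overlap index
# OVL(p, q) = integral of min(p(x), q(x)) dx, the quantity the paper bounds.
import numpy as np
from scipy.stats import norm

x = np.linspace(-10, 10, 20001)
p = norm.pdf(x, loc=0.0, scale=1.0)
q = norm.pdf(x, loc=2.0, scale=1.5)
ovl = np.trapz(np.minimum(p, q), x)
print(f"overlap index ~= {ovl:.4f}")  # 1.0 = identical, 0.0 = disjoint supports
```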
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
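A rough sketch of the core idea as summarized above: normalize features with a mixture of (mean, variance) components rather than a single pair. The soft-assignment and estimation scheme here are simplified assumptions, not the paper's algorithm.

```python
# Rough sketch of mixture-based normalization: keep K (mean, var) components,
# softly assign each feature vector to components, and normalize by the
# assignment-weighted statistics.
import torch
import torch.nn.functional as F

def mixture_normalize(x, means, log_vars, eps: float = 1e-5):
    # x: (B, D); means, log_vars: (K, D)
    var = log_vars.exp()
    # Responsibilities from diagonal-Gaussian log-likelihoods (up to constants).
    log_p = -0.5 * (((x[:, None] - means) ** 2) / (var + eps)
                    + log_vars).sum(-1)              # (B, K)
    r = F.softmax(log_p, dim=1)                      # (B, K)
    mu = r @ means                                   # per-sample mixed mean
    sigma2 = r @ var                                 # per-sample mixed variance
    return (x - mu) / (sigma2 + eps).sqrt()
```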
- Prediction Calibration for Generalized Few-shot Semantic Segmentation [101.69940565204816]
Generalized Few-shot Semantic Segmentation (GFSS) aims to segment each image pixel into either base classes with abundant training examples or novel classes with only a handful (e.g., 1-5) of training images per class.
We build a cross-attention module that guides the classifier's final prediction using the fused multi-level features.
Our PCN outperforms the state-of-the-art alternatives by large margins.
arXiv Detail & Related papers (2022-10-15T13:30:12Z)
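A sketch of the kind of cross-attention block the summary describes, where prediction queries attend to fused multi-level features; the dimensions and the fusion step itself are illustrative assumptions.

```python
# Sketch of a cross-attention calibrator: class-prediction queries attend to
# fused multi-level features to guide the final prediction.
import torch
import torch.nn as nn

class CrossAttentionCalibrator(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, queries, fused_feats):
        # queries: (B, Nq, D) initial prediction embeddings
        # fused_feats: (B, Nk, D) flattened, fused multi-level features
        out, _ = self.attn(queries, fused_feats, fused_feats)
        return self.norm(queries + out)
```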
- The Fixed Sub-Center: A Better Way to Capture Data Complexity [1.583842747998493]
We propose to use Fixed Sub-Center (F-SC) to create more discrepant sub-centers.
The experimental results show that F-SC significantly improves the accuracy of both image classification and fine-grained recognition tasks.
arXiv Detail & Related papers (2022-03-24T08:21:28Z)
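Reading the summary as sub-centers that are sampled once and then frozen, one plausible sketch keeps a learnable class center plus fixed random offsets; the offset scale and max-aggregation are assumptions.

```python
# Sketch of a fixed sub-center classifier: each class keeps K sub-centers built
# from a learnable center plus offsets that are sampled once and frozen
# (registered as a buffer), encouraging more discrepant sub-centers.
import torch
import torch.nn as nn

class FixedSubCenterClassifier(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, k: int = 4):
        super().__init__()
        self.center = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.02)
        # Fixed (non-trainable) offsets define the sub-centers.
        self.register_buffer("offsets", torch.randn(k, num_classes, feat_dim) * 0.1)

    def forward(self, x):
        sub_centers = self.center + self.offsets          # (K, C, D)
        logits = torch.einsum("bd,kcd->bkc", x, sub_centers)
        return logits.max(dim=1).values                   # best sub-center per class
```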
- Gated recurrent units and temporal convolutional network for multilabel classification [122.84638446560663]
This work proposes a new ensemble method for managing multilabel classification.
The core of the proposed approach combines a set of gated recurrent units and temporal convolutional neural networks trained with variants of the Adam optimization algorithm.
arXiv Detail & Related papers (2021-10-09T00:00:16Z)
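An illustrative two-branch multilabel model in the spirit of the summary, with a GRU branch and a temporal convolutional branch; the branch sizes and the averaging rule are assumptions, not the paper's ensemble scheme.

```python
# Illustrative GRU + temporal-convolution model for multilabel classification:
# both branches emit per-label scores that are averaged and passed through a
# sigmoid (train with binary cross-entropy).
import torch
import torch.nn as nn

class GRUTCNEnsemble(nn.Module):
    def __init__(self, in_dim: int, hidden: int, num_labels: int):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.tcn = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head_gru = nn.Linear(hidden, num_labels)
        self.head_tcn = nn.Linear(hidden, num_labels)

    def forward(self, x):                     # x: (B, T, D)
        _, h = self.gru(x)                    # h: (1, B, H)
        g = self.head_gru(h[-1])
        t = self.head_tcn(self.tcn(x.transpose(1, 2)).squeeze(-1))
        return torch.sigmoid((g + t) / 2)     # per-label probabilities
```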
- Approximation and generalization properties of the random projection classification method [0.4604003661048266]
We study a family of low-complexity classifiers consisting of thresholding a random one-dimensional feature.
For certain classification problems (e.g., those with a large Rashomon ratio), there is a potentially large gain in generalization properties by selecting parameters at random.
arXiv Detail & Related papers (2021-08-11T23:14:46Z)
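The classifier family above is simple enough to sketch directly: draw random directions, threshold the one-dimensional projection, and keep the best (direction, threshold, sign) triple found on training data. The trial count and threshold grid below are arbitrary choices.

```python
# Sketch of the random projection classifier: threshold a random 1-D feature,
# keeping the best candidate found over many random trials. Labels y in {-1, +1}.
import numpy as np

def fit_random_projection(X, y, n_trials: int = 500, seed=None):
    rng = np.random.default_rng(seed)
    best = (None, 0.0, 1, -np.inf)            # (w, threshold, sign, accuracy)
    for _ in range(n_trials):
        w = rng.standard_normal(X.shape[1])
        z = X @ w
        for t in np.quantile(z, np.linspace(0.05, 0.95, 19)):
            for s in (1, -1):
                acc = np.mean((s * np.sign(z - t)) == y)
                if acc > best[3]:
                    best = (w, t, s, acc)
    return best

def predict(model, X):
    w, t, s, _ = model
    return s * np.sign(X @ w - t)
```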
- Optimal Linear Combination of Classifiers [0.0]
The question of whether to use one classifier or a combination of classifiers is a central topic in Machine Learning.
We propose here a method for finding an optimal linear combination of classifiers derived from a bias-variance framework for the classification task.
arXiv Detail & Related papers (2021-03-01T16:21:40Z)
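The paper derives its combination weights from a bias-variance analysis; the least-squares stacking rule below is only a generic stand-in to show what a linear combination of classifier scores looks like in practice.

```python
# Simple illustration of linearly combining classifier scores: fit combination
# weights by least squares on held-out scores, then predict by the sign of the
# weighted sum. This stacking rule is a stand-in, not the paper's derivation.
import numpy as np

def fit_combination_weights(scores, y):
    # scores: (N, M) per-sample scores from M classifiers; y: (N,) in {-1, +1}
    w, *_ = np.linalg.lstsq(scores, y.astype(float), rcond=None)
    return w

def combine(scores, w):
    return np.sign(scores @ w)
```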
- Minimum Variance Embedded Auto-associative Kernel Extreme Learning Machine for One-class Classification [1.4146420810689422]
VAAKELM is a novel extension of an auto-associative kernel extreme learning machine.
It embeds minimum variance information within its architecture and reduces the intra-class variance.
It follows a reconstruction-based approach to one-class classification and minimizes the reconstruction error.
arXiv Detail & Related papers (2020-11-24T17:00:30Z)
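A generic sketch of the reconstruction-based approach the summary describes, using kernel ridge regression as a stand-in for the kernel extreme learning machine; the minimum-variance embedding term is not reproduced here.

```python
# Generic auto-associative one-class scorer: learn to reconstruct the target
# class (kernel ridge regression as a stand-in for the kernel ELM), then score
# new samples by reconstruction error (higher = more anomalous).
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def fit_reconstructor(X_train, gamma: float = 0.1, alpha: float = 1.0):
    model = KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha)
    model.fit(X_train, X_train)               # auto-association: input -> input
    return model

def anomaly_scores(model, X):
    return np.linalg.norm(model.predict(X) - X, axis=1)
```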
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
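The second stage of this recipe is easy to sketch: given frozen self-supervised features of the one-class training data (stage one, not shown), fit a shallow one-class model on them. OC-SVM is one reasonable choice here; the hyperparameters are assumptions.

```python
# Stage two of the two-stage one-class framework: fit a shallow one-class
# classifier on frozen self-supervised features from stage one.
import numpy as np
from sklearn.svm import OneClassSVM

def fit_stage_two(train_feats: np.ndarray) -> OneClassSVM:
    return OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(train_feats)

def score(ocsvm: OneClassSVM, feats: np.ndarray) -> np.ndarray:
    # Larger values = more normal (signed distance to the learned boundary).
    return ocsvm.decision_function(feats)
```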
- Federated Learning with Only Positive Labels [71.63836379169315]
We propose a generic framework for training with only positive labels, namely Federated Averaging with Spreadout (FedAwS).
We show, both theoretically and empirically, that FedAwS can almost match the performance of conventional learning where users have access to negative labels.
arXiv Detail & Related papers (2020-04-21T23:35:02Z)
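The key ingredient suggested by the name is a spreadout regularizer that keeps class embeddings separated even though each client only sees positive examples; the hinge form and margin value in the sketch below are assumptions.

```python
# Sketch of a spreadout-style regularizer: penalize pairs of class embeddings
# that are closer than a margin, so embeddings do not collapse under
# positive-only training.
import torch

def spreadout_penalty(class_emb: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    # class_emb: (C, D), one embedding per class
    d = torch.cdist(class_emb, class_emb)                 # pairwise distances
    off_diag = ~torch.eye(len(class_emb), dtype=torch.bool, device=class_emb.device)
    return torch.clamp(margin - d[off_diag], min=0.0).pow(2).mean()
```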
This list is automatically generated from the titles and abstracts of the papers on this site.