The Fixed Sub-Center: A Better Way to Capture Data Complexity
- URL: http://arxiv.org/abs/2203.12928v1
- Date: Thu, 24 Mar 2022 08:21:28 GMT
- Title: The Fixed Sub-Center: A Better Way to Capture Data Complexity
- Authors: Zhemin Zhang, Xun Gong
- Abstract summary: We propose to use Fixed Sub-Center (F-SC) to create more discrepant sub-centers.
The experimental results show that F-SC significantly improves the accuracy of both image classification and fine-grained recognition tasks.
- Score: 1.583842747998493
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Treating a class with a single center may fail to capture the complexity of the data distribution. Using multiple sub-centers is an alternative way to address this problem. However, existing multi-subclass methods suffer from three typical issues: highly correlated sub-classes, classifier parameters that grow linearly with the number of classes, and a lack of intra-class compactness. To this end, we propose the Fixed Sub-Center (F-SC), which lets the model create more discrepant sub-centers while saving memory and considerably cutting computational costs. Specifically, F-SC first samples a class center Ui for each class from a uniform distribution, then generates a normal distribution for each class whose mean is Ui. Finally, the sub-centers are sampled from the normal distribution corresponding to each class and are kept fixed during training, avoiding the overhead of gradient computation. Moreover, F-SC penalizes the Euclidean distance between samples and their corresponding sub-centers, which helps maintain intra-class compactness. The experimental results show that F-SC significantly improves the accuracy of both image classification and fine-grained recognition tasks.
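The sub-center construction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the uniform range, the Gaussian spread, and the nearest-sub-center choice in the penalty are assumptions for demonstration purposes.

```python
import numpy as np

def make_fixed_subcenters(num_classes, subcenters_per_class, feat_dim,
                          center_range=1.0, spread=0.1, seed=0):
    """Sketch of F-SC sub-center initialization per the abstract:
    1. Sample one class center U_i per class from a uniform distribution.
    2. Build a normal distribution per class with mean U_i.
    3. Sample sub-centers from it; they stay fixed (no gradient updates).
    `center_range` and `spread` are illustrative hyperparameters.
    """
    rng = np.random.default_rng(seed)
    # Step 1: class centers U_i ~ Uniform(-center_range, center_range).
    U = rng.uniform(-center_range, center_range, size=(num_classes, feat_dim))
    # Steps 2-3: sub-centers ~ N(U_i, spread^2 * I), then frozen.
    sub = U[:, None, :] + spread * rng.standard_normal(
        (num_classes, subcenters_per_class, feat_dim))
    return sub  # shape: (num_classes, subcenters_per_class, feat_dim)

def subcenter_penalty(features, labels, subcenters):
    """Euclidean-distance penalty between each sample and its class's
    nearest sub-center (one plausible reading of the compactness term)."""
    # (N, S) distances from each sample to its class's S sub-centers.
    dists = np.linalg.norm(
        features[:, None, :] - subcenters[labels], axis=-1)
    return dists.min(axis=1).mean()
```

Because the sub-centers are fixed after sampling, they contribute no trainable parameters, which is where the stated memory and compute savings come from.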
Related papers
- Self-Supervised Graph Embedding Clustering [70.36328717683297]
The one-step K-means dimensionality-reduction clustering method has made some progress in addressing the curse of dimensionality in clustering tasks.
We propose a unified framework that integrates manifold learning with K-means, resulting in the self-supervised graph embedding framework.
arXiv Detail & Related papers (2024-09-24T08:59:51Z) - Rethinking Few-shot 3D Point Cloud Semantic Segmentation [62.80639841429669]
This paper revisits few-shot 3D point cloud semantic segmentation (FS-PCS)
We focus on two significant issues in the state-of-the-art: foreground leakage and sparse point distribution.
To address these issues, we introduce a standardized FS-PCS setting, upon which a new benchmark is built.
arXiv Detail & Related papers (2024-03-01T15:14:47Z) - Generating Multi-Center Classifier via Conditional Gaussian Distribution [7.77615886942767]
In real-world data, one class can contain several local clusters, e.g., birds of different poses.
We create a conditional Gaussian distribution for each class and then sample multiple sub-centers.
This approach allows the model to capture intra-class local structures more efficiently.
arXiv Detail & Related papers (2024-01-29T08:06:33Z) - Feature Selection using Sparse Adaptive Bottleneck Centroid-Encoder [1.2487990897680423]
We introduce a novel nonlinear model, Sparse Adaptive Bottleneck Centroid-Encoder (SABCE), for determining the features that discriminate between two or more classes.
The algorithm is applied to various real-world data sets, including high-dimensional biological, image, speech, and accelerometer sensor data.
arXiv Detail & Related papers (2023-06-07T21:37:21Z) - Latent Distribution Adjusting for Face Anti-Spoofing [29.204168516602568]
We propose a unified framework called Latent Distribution Adjusting (LDA) to improve the robustness of the face anti-spoofing (FAS) model.
To enhance the intra-class compactness and inter-class discrepancy, we propose a margin-based loss for providing distribution constraints for prototype learning.
Our framework can 1) make the final representation space both intra-class compact and inter-class separable, 2) outperform the state-of-the-art methods on multiple standard FAS benchmarks.
arXiv Detail & Related papers (2023-05-16T08:43:14Z) - Semantic Segmentation via Pixel-to-Center Similarity Calculation [40.62804702162577]
We first rethink semantic segmentation from a perspective of similarity between pixels and class centers.
Under this novel view, we propose a Class Center Similarity layer (CCS layer) to address the above-mentioned challenges.
Our model performs favourably against the state-of-the-art CNN-based methods.
arXiv Detail & Related papers (2023-01-12T08:36:59Z) - Overlapping oriented imbalanced ensemble learning method based on
projective clustering and stagewise hybrid sampling [22.32930261633615]
This paper proposes an ensemble learning algorithm based on dual clustering and stage-wise hybrid sampling (DCSHS)
The major advantage of our algorithm is that it can exploit the intersectionality of the CCS to realize the soft elimination of overlapping majority samples.
arXiv Detail & Related papers (2022-11-30T01:49:06Z) - Prediction Calibration for Generalized Few-shot Semantic Segmentation [101.69940565204816]
Generalized Few-shot Semantic Segmentation (GFSS) aims to segment each image pixel into either base classes with abundant training examples or novel classes with only a handful of (e.g., 1-5) training images per class.
We build a cross-attention module that guides the classifier's final prediction using the fused multi-level features.
Our PCN outperforms the state-of-the-art alternatives by large margins.
arXiv Detail & Related papers (2022-10-15T13:30:12Z) - Learnable Distribution Calibration for Few-Shot Class-Incremental
Learning [122.2241120474278]
Few-shot class-incremental learning (FSCIL) faces challenges of memorizing old class distributions and estimating new class distributions given few training samples.
We propose a learnable distribution calibration (LDC) approach, with the aim to systematically solve these two challenges using a unified framework.
arXiv Detail & Related papers (2022-10-01T09:40:26Z) - Unbiased Subclass Regularization for Semi-Supervised Semantic
Segmentation [47.533612505477535]
Semi-supervised semantic segmentation learns from small amounts of labelled images and large amounts of unlabelled images.
This paper presents an unbiased subclass regularization network (USRN) that alleviates the class imbalance issue.
arXiv Detail & Related papers (2022-03-18T15:53:18Z) - Generalized Zero-Shot Learning Via Over-Complete Distribution [79.5140590952889]
We propose to generate an Over-Complete Distribution (OCD) using Conditional Variational Autoencoder (CVAE) of both seen and unseen classes.
The effectiveness of the framework is evaluated using both Zero-Shot Learning and Generalized Zero-Shot Learning protocols.
arXiv Detail & Related papers (2020-04-01T19:05:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences.