Class-Level Logit Perturbation
- URL: http://arxiv.org/abs/2209.05668v1
- Date: Tue, 13 Sep 2022 00:49:32 GMT
- Title: Class-Level Logit Perturbation
- Authors: Mengyang Li (1), Fengguang Su (1), Ou Wu (1), Ji Zhang (2) ((1)
National Center for Applied Mathematics, Tianjin University, (2) University
of Southern Queensland)
- Abstract summary: Feature perturbation and label perturbation have been proven to be useful in various deep learning approaches.
New methodologies are proposed to explicitly learn to perturb logits for both single-label and multi-label classification tasks.
As it only perturbs logits, it can be used as a plug-in to fuse with any existing classification algorithm.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Features, logits, and labels are the three primary data when a sample passes
through a deep neural network. Feature perturbation and label perturbation
have received increasing attention in recent years and have been proven
useful in various deep learning approaches. For example, (adversarial) feature
perturbation can improve the robustness or even the generalization capability of
learned models. However, few studies have explicitly explored the
perturbation of logit vectors. This work discusses several existing methods
related to class-level logit perturbation. A unified viewpoint between
positive/negative data augmentation and loss variations incurred by logit
perturbation is established. A theoretical analysis is provided to illuminate
why class-level logit perturbation is useful. Accordingly, new methodologies
are proposed to explicitly learn to perturb logits for both single-label and
multi-label classification tasks. Extensive experiments on benchmark image
classification data sets and their long-tail versions indicate the competitive
performance of our learning method. As it only perturbs logits, it can be
used as a plug-in to fuse with any existing classification algorithms. All the
codes are available at https://github.com/limengyang1992/lpl.
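The abstract only names the idea; a minimal sketch of what a class-level logit perturbation can look like is shown below, assuming a per-class offset added to the logits before the softmax cross-entropy loss. The function names and the fixed example offset are illustrative assumptions, not the authors' learned formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def perturbed_cross_entropy(logits, targets, delta):
    """Cross-entropy on logits shifted by a class-level perturbation
    vector `delta` (one entry per class, shared by all samples).
    With delta = 0 this reduces to the ordinary loss."""
    probs = softmax(logits + delta)  # delta broadcasts over the batch
    return -np.log(probs[np.arange(len(targets)), targets]).mean()

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 5))      # e.g. the output of any classifier head
targets = rng.integers(0, 5, size=8)

plain = perturbed_cross_entropy(logits, targets, np.zeros(5))
# Raising one class's logit lowers the loss on samples of that class and
# raises it on the others; learning delta trades these off per class,
# which is why the scheme can plug into any classifier unchanged.
shifted = perturbed_cross_entropy(logits, targets, np.array([1.0, 0, 0, 0, 0]))
```

Because the perturbation acts only on the logit vector, the surrounding network and training loop need no modification, which is what makes the method usable as a plug-in.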
Related papers
- Open-World Semi-Supervised Learning for Node Classification [53.07866559269709]
Open-world semi-supervised learning (Open-world SSL) for node classification is a practical but under-explored problem in the graph community.
We propose an IMbalance-Aware method named OpenIMA for Open-world semi-supervised node classification.
arXiv Detail & Related papers (2024-03-18T05:12:54Z)
- Adjusting Logit in Gaussian Form for Long-Tailed Visual Recognition [37.62659619941791]
We study the problem of long-tailed visual recognition from the perspective of feature level.
Two novel logit adjustment methods are proposed to improve model performance at a modest computational overhead.
Experiments conducted on benchmark datasets demonstrate the superior performance of the proposed method over the state-of-the-art ones.
arXiv Detail & Related papers (2023-05-18T02:06:06Z)
- Informative regularization for a multi-layer perceptron RR Lyrae classifier under data shift [3.303002683812084]
We propose a scalable and easily adaptable approach based on an informative regularization and an ad-hoc training procedure to mitigate the shift problem.
Our method provides a new path to incorporate knowledge from characteristic features into artificial neural networks to manage the underlying data shift problem.
arXiv Detail & Related papers (2023-03-12T02:49:19Z)
- Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We reasonably estimate intra-class variations for every class and generate adaptive synthetic samples to support hard samples mining.
Our method significantly outperforms state-of-the-art methods, improving retrieval performance by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z)
- Exploring Category-correlated Feature for Few-shot Image Classification [27.13708881431794]
We present a simple yet effective feature rectification method by exploring the category correlation between novel and base classes as the prior knowledge.
The proposed approach consistently obtains considerable performance gains on three widely used benchmarks.
arXiv Detail & Related papers (2021-12-14T08:25:24Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
- Robust Long-Tailed Learning under Label Noise [50.00837134041317]
This work investigates the label noise problem under long-tailed label distribution.
We propose a robust framework that realizes noise detection for long-tailed learning.
Our framework can naturally leverage semi-supervised learning algorithms to further improve the generalisation.
arXiv Detail & Related papers (2021-08-26T03:45:00Z)
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first precise high-dimensional analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
- Learning with Out-of-Distribution Data for Audio Classification [60.48251022280506]
We show that detecting and relabelling certain OOD instances, rather than discarding them, can have a positive effect on learning.
The proposed method is shown to improve the performance of convolutional neural networks by a significant margin.
arXiv Detail & Related papers (2020-02-11T21:08:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.