Rebalanced Zero-shot Learning
- URL: http://arxiv.org/abs/2210.07031v2
- Date: Thu, 13 Jul 2023 14:52:12 GMT
- Title: Rebalanced Zero-shot Learning
- Authors: Zihan Ye, Guanyu Yang, Xiaobo Jin, Youfa Liu, Kaizhu Huang
- Abstract summary: Zero-shot learning (ZSL) aims to identify unseen classes with zero samples during training.
We introduce an imbalanced learning framework into ZSL.
We then propose a re-weighted loss termed Re-balanced Mean-Squared Error (ReMSE).
- Score: 18.52913434532522
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zero-shot learning (ZSL) aims to identify unseen classes with zero samples
during training. Broadly speaking, existing ZSL methods usually adopt
class-level semantic labels and compare them with instance-level semantic
predictions to infer unseen classes. However, we find that such models mostly
produce imbalanced semantic predictions, i.e., they may predict some semantics
accurately but others poorly. To address this drawback, we introduce an
imbalanced learning framework into ZSL. However, we find
that imbalanced ZSL has two unique challenges: (1) its imbalanced predictions
are highly correlated with the values of the semantic labels rather than with
the number of samples, as typically considered in traditional imbalanced
learning; (2) different semantics follow quite different error distributions
across classes. To mitigate these issues, we first formalize ZSL as an
imbalanced regression problem, which offers empirical evidence of how semantic
labels lead
to imbalanced semantic predictions. We then propose a re-weighted loss termed
Re-balanced Mean-Squared Error (ReMSE), which tracks the mean and variance of
error distributions, thus ensuring rebalanced learning across classes. As a
major contribution, we conduct a series of analyses showing that ReMSE is
theoretically well founded. Extensive experiments demonstrate that the
proposed method effectively alleviates the imbalance in semantic prediction and
outperforms many state-of-the-art ZSL methods. Our code is available at
https://github.com/FouriYe/ReZSL-TIP23.
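As a rough, hedged sketch of the idea (not the authors' exact loss: the weighting formula below is an assumption, and the official implementation lives in the repository above), a re-weighted MSE whose per-semantic weights track running error statistics might look like:
```python
import torch

class RebalancedMSE:
    """Minimal sketch in the spirit of ReMSE. The running statistics and
    the weighting formula are illustrative assumptions; the paper also
    tracks statistics per class, which is omitted here for brevity."""

    def __init__(self, num_semantics, momentum=0.9, eps=1e-8):
        self.momentum = momentum
        self.eps = eps
        # running mean and variance of the absolute error per semantic dimension
        self.err_mean = torch.zeros(num_semantics)
        self.err_var = torch.ones(num_semantics)

    def __call__(self, pred, target):
        # pred, target: (batch, num_semantics) semantic predictions and labels
        err = (pred - target).abs().detach()
        m, v = err.mean(dim=0), err.var(dim=0, unbiased=False)
        self.err_mean = self.momentum * self.err_mean + (1 - self.momentum) * m
        self.err_var = self.momentum * self.err_var + (1 - self.momentum) * v
        # up-weight semantics whose errors are large on average and widely spread
        w = (self.err_mean + self.err_var.sqrt()) / (self.err_mean.mean() + self.eps)
        return (w * (pred - target) ** 2).mean()
```
The key design point, per the abstract, is that the weights are driven by the tracked error distribution (its mean and spread) rather than by sample counts as in traditional imbalanced learning.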
Related papers
- Leave No Stone Unturned: Mine Extra Knowledge for Imbalanced Facial Expression Recognition [39.08466869516571]
Facial expression data is characterized by a significant imbalance, with most collected data showing happy or neutral expressions and fewer instances of fear or disgust.
This imbalance poses challenges to facial expression recognition (FER) models, hindering their ability to fully understand various human emotional states.
Existing FER methods typically report overall accuracy on highly imbalanced test sets but exhibit low performance in terms of the mean accuracy across all expression classes.
arXiv Detail & Related papers (2023-10-30T15:26:26Z)
- An Embarrassingly Simple Baseline for Imbalanced Semi-Supervised Learning [103.65758569417702]
Semi-supervised learning (SSL) has shown great promise in leveraging unlabeled data to improve model performance.
We consider a more realistic and challenging setting called imbalanced SSL, where imbalanced class distributions occur in both labeled and unlabeled data.
We study a simple yet overlooked baseline -- SimiS -- which tackles data imbalance by simply supplementing labeled data with pseudo-labels.
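A minimal sketch of this pseudo-label supplementation idea (the threshold, selection rule, and function name are assumptions, not the paper's exact recipe):
```python
import torch
import torch.nn.functional as F

def supplement_with_pseudo_labels(model, x_unlabeled, threshold=0.95):
    # Hypothetical helper illustrating SimiS-style supplementation:
    # confident predictions on unlabeled data become extra labeled pairs.
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=-1)
        conf, pseudo = probs.max(dim=-1)
    keep = conf >= threshold  # keep only high-confidence pseudo-labels
    return x_unlabeled[keep], pseudo[keep]
```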
arXiv Detail & Related papers (2022-11-20T21:18:41Z)
- MaxMatch: Semi-Supervised Learning with Worst-Case Consistency [149.03760479533855]
We propose a worst-case consistency regularization technique for semi-supervised learning (SSL).
We present a generalization bound for SSL consisting of the empirical loss terms observed on labeled and unlabeled training data separately.
Motivated by this bound, we derive an SSL objective that minimizes the largest inconsistency between an original unlabeled sample and its multiple augmented variants.
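A minimal sketch of a worst-case consistency objective in this spirit (the KL divergence choice and all names are assumptions):
```python
import torch
import torch.nn.functional as F

def worst_case_consistency(model, x_unlabeled, augment, k=4):
    # Hypothetical sketch: penalize the LARGEST disagreement between the
    # prediction on the original sample and those on k augmented variants.
    with torch.no_grad():
        p = F.softmax(model(x_unlabeled), dim=-1)  # fixed target distribution
    per_aug = []
    for _ in range(k):
        log_q = F.log_softmax(model(augment(x_unlabeled)), dim=-1)
        # per-sample KL(p || q); F.kl_div expects log-probabilities as input
        per_aug.append(F.kl_div(log_q, p, reduction="none").sum(dim=-1))
    # minimize the maximum inconsistency over the k augmentations
    return torch.stack(per_aug).max(dim=0).values.mean()
```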
arXiv Detail & Related papers (2022-09-26T12:04:49Z)
- Federated Zero-Shot Learning for Visual Recognition [55.65879596326147]
We propose a novel Federated Zero-Shot Learning (FedZSL) framework.
FedZSL learns a central model from the decentralized data residing on edge devices.
The effectiveness and robustness of FedZSL are demonstrated by extensive experiments conducted on three zero-shot benchmark datasets.
arXiv Detail & Related papers (2022-09-05T14:49:34Z)
- Self-supervised Learning is More Robust to Dataset Imbalance [65.84339596595383]
We investigate self-supervised learning under dataset imbalance.
Off-the-shelf self-supervised representations are already more robust to class imbalance than supervised representations.
We devise a re-weighted regularization technique that consistently improves the SSL representation quality on imbalanced datasets.
arXiv Detail & Related papers (2021-10-11T06:29:56Z)
- Distribution-Aware Semantics-Oriented Pseudo-label for Imbalanced Semi-Supervised Learning [80.05441565830726]
This paper addresses imbalanced semi-supervised learning, where heavily biased pseudo-labels can harm model performance.
Motivated by this observation, we propose a general pseudo-labeling framework to address the bias.
We term the novel pseudo-labeling framework for imbalanced SSL as Distribution-Aware Semantics-Oriented (DASO) Pseudo-label.
arXiv Detail & Related papers (2021-06-10T11:58:25Z)
- Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning [126.31716228319902]
We develop Distribution Aligning Refinery of Pseudo-label (DARP) algorithm.
We show that DARP is provably and efficiently compatible with state-of-the-art SSL schemes.
arXiv Detail & Related papers (2020-07-17T09:16:05Z)
- A Probabilistic Model for Discriminative and Neuro-Symbolic Semi-Supervised Learning [6.789370732159177]
We present a probabilistic model for discriminative SSL that mirrors its classical generative counterpart.
We show that several well-known SSL methods can be interpreted as approximating this prior and can be improved upon.
We extend the discriminative model to neuro-symbolic SSL, where label features satisfy logical rules, by showing such rules relate directly to the above prior.
arXiv Detail & Related papers (2020-06-10T15:30:54Z)
- Class-Imbalanced Semi-Supervised Learning [33.94685366079589]
Semi-Supervised Learning (SSL) has achieved great success in overcoming the difficulties of labeling and making full use of unlabeled data.
We introduce a task of class-imbalanced semi-supervised learning (CISSL), which refers to semi-supervised learning with class-imbalanced data.
Our method shows better performance than the conventional methods in the CISSL environment.
arXiv Detail & Related papers (2020-02-17T07:48:47Z)