Counterfactual Zero-Shot and Open-Set Visual Recognition
- URL: http://arxiv.org/abs/2103.00887v1
- Date: Mon, 1 Mar 2021 10:20:04 GMT
- Title: Counterfactual Zero-Shot and Open-Set Visual Recognition
- Authors: Zhongqi Yue, Tan Wang, Hanwang Zhang, Qianru Sun, Xian-Sheng Hua
- Abstract summary: We present a novel counterfactual framework for both Zero-Shot Learning (ZSL) and Open-Set Recognition (OSR).
Our idea stems from the observation that the generated samples for unseen-classes are often out of the true distribution.
We demonstrate that our framework effectively mitigates the seen/unseen imbalance and hence significantly improves the overall performance.
- Score: 95.43275761833804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel counterfactual framework for both Zero-Shot Learning (ZSL)
and Open-Set Recognition (OSR), whose common challenge is generalizing to the
unseen-classes by only training on the seen-classes. Our idea stems from the
observation that the generated samples for unseen-classes are often out of the
true distribution, which causes severe recognition rate imbalance between the
seen-class (high) and unseen-class (low). We show that the key reason is that
the generation is not Counterfactual Faithful, and thus we propose a faithful
one, whose generation is from the sample-specific counterfactual question: What
would the sample look like, if we set its class attribute to a certain class,
while keeping its sample attribute unchanged? Thanks to the faithfulness, we
can apply the Consistency Rule to perform unseen/seen binary classification, by
asking: Would its counterfactual still look like itself? If ``yes'', the sample
is from a certain class, and ``no'' otherwise. Through extensive experiments on
ZSL and OSR, we demonstrate that our framework effectively mitigates the
seen/unseen imbalance and hence significantly improves the overall performance.
Note that this framework is orthogonal to existing methods, thus, it can serve
as a new baseline to evaluate how ZSL/OSR models generalize. Codes are
available at https://github.com/yue-zhongqi/gcm-cf.
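
To make the Consistency Rule concrete, here is a minimal Python sketch of the seen/unseen binary decision it implies. The generator interface, distance metric, and threshold are illustrative assumptions of this sketch, not the exact formulation in the paper or the gcm-cf code.

import numpy as np

def looks_seen(x, sample_attr, seen_class_attrs, generate_counterfactual, threshold):
    """Consistency Rule: for each seen class y, generate the counterfactual
    "x with its class attribute set to y, sample attribute kept unchanged",
    and ask whether it still looks like x. `generate_counterfactual` is the
    trained generator (hypothetical interface: (x, sample_attr, class_attr) -> x')."""
    dists = [np.linalg.norm(x - generate_counterfactual(x, sample_attr, a))
             for a in seen_class_attrs]
    # "Yes" for at least one seen class -> classify as seen; otherwise unseen.
    return min(dists) < threshold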
Related papers
- Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration [67.69532794049445]
We find that existing methods tend to misclassify samples of new classes into base classes, which leads to poor performance on new classes.
We propose a simple yet effective Training-frEE calibratioN (TEEN) strategy to enhance the discriminability of new classes.
arXiv Detail & Related papers (2023-12-08T18:24:08Z)
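
As a hedged illustration of what a training-free calibration can look like, the sketch below pulls each new-class prototype toward the base-class prototypes it resembles, in the spirit of TEEN; the softmax weighting and the alpha/tau knobs are assumptions of this sketch, not the paper's exact recipe.

import numpy as np

def calibrate_prototype(new_proto, base_protos, alpha=0.5, tau=16.0):
    # Cosine similarity between the new prototype and every base prototype.
    sims = base_protos @ new_proto / (
        np.linalg.norm(base_protos, axis=1) * np.linalg.norm(new_proto) + 1e-12)
    w = np.exp(tau * sims)
    w /= w.sum()                      # softmax weights over base classes
    # Fuse the raw prototype with a similarity-weighted mixture of base prototypes.
    return alpha * new_proto + (1.0 - alpha) * (w @ base_protos)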
- Zero-Shot Logit Adjustment [89.68803484284408]
Generalized Zero-Shot Learning (GZSL) is a semantic-descriptor-based learning technique.
In this paper, we observe that existing generation-based techniques focus on enhancing the generator's effect while neglecting the improvement of the classifier, and we propose a new technique to address this.
Our experiments demonstrate that the proposed technique achieves state-of-the-art performance when combined with the basic generator, and that it can improve various generative zero-shot learning frameworks.
arXiv Detail & Related papers (2022-04-25T17:54:55Z)
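
As a rough illustration of the logit-adjustment idea, one can subtract a log-prior from the classifier's logits so that over-represented seen classes do not dominate under-represented unseen ones; the prior and the strength knob below are assumptions of this sketch, not the paper's exact adjustment.

import numpy as np

def adjusted_argmax(logits, class_prior, strength=1.0):
    """logits: (num_classes,); class_prior: (num_classes,), sums to 1.
    Subtracting the log-prior down-weights classes the classifier is
    biased toward (seen) relative to rare ones (unseen)."""
    return int(np.argmax(logits - strength * np.log(class_prior + 1e-12)))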
- Exemplar-free Class Incremental Learning via Discriminative and Comparable One-class Classifiers [12.121885324463388]
We propose a new framework, named Discriminative and Comparable One-class classifiers for Incremental Learning (DisCOIL).
DisCOIL follows the basic principle of POC, but adopts variational auto-encoders (VAE) instead of other well-established one-class classifiers (e.g. deep SVDD).
With this advantage, DisCOIL trains a new-class VAE in contrast with the old-class VAEs, which forces the new-class VAE to reconstruct better for new-class samples but worse for the old-class pseudo samples.
arXiv Detail & Related papers (2022-01-05T07:16:34Z)
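
A minimal sketch of the per-class VAE idea described above: each class keeps its own VAE, and a test sample is assigned to the class whose VAE reconstructs it with the lowest error. The reconstruct method is a hypothetical interface for a trained VAE, not DisCOIL's actual code.

import numpy as np

def classify_by_reconstruction(x, class_vaes):
    # class_vaes: dict mapping class id -> trained VAE exposing
    # reconstruct(x), a hypothetical encode-decode round trip.
    errors = {c: np.linalg.norm(x - vae.reconstruct(x))
              for c, vae in class_vaes.items()}
    return min(errors, key=errors.get)  # lowest reconstruction error wins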
- Classifier Crafting: Turn Your ConvNet into a Zero-Shot Learner! [5.3556221126231085]
We tackle Zero-shot learning (ZSL) by casting a convolutional neural network into a zero-shot learner.
We learn a data-driven, ZSL-tailored feature representation on seen classes only to match a set of fixed classification rules.
We can perform ZSL inference by augmenting the pool of classification rules at test time while keeping the very same representation we learnt.
arXiv Detail & Related papers (2021-03-20T06:26:29Z)
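
One way to read the fixed classification rules above is as frozen class-embedding vectors scored against a learned feature, with unseen-class vectors simply appended to the pool at test time while the representation stays put; the cosine scoring below is an illustrative assumption.

import numpy as np

def predict(feature, rules):
    """feature: (d,) learned representation; rules: (num_classes, d)
    fixed class-embedding rows. Returns the best-matching class index."""
    f = feature / (np.linalg.norm(feature) + 1e-12)
    r = rules / (np.linalg.norm(rules, axis=1, keepdims=True) + 1e-12)
    return int(np.argmax(r @ f))

# ZSL inference: augment the rule pool, keep the same representation.
# predict(feature, np.vstack([seen_rules, unseen_rules]))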
- CLASTER: Clustering with Reinforcement Learning for Zero-Shot Action Recognition [52.66360172784038]
We propose a clustering-based model, which considers all training samples at once, instead of optimizing for each instance individually.
We call the proposed method CLASTER and observe that it consistently improves over the state-of-the-art on all standard datasets.
arXiv Detail & Related papers (2021-01-18T12:46:24Z)
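
The summary above is light on mechanism, but the clustering-based framing can be sketched roughly: centroids are fit over all training samples at once, and a sample is then described relative to those centroids rather than optimized individually. The plain k-means below is only loosely inspired by CLASTER, which additionally refines the clustering with reinforcement learning.

import numpy as np

def fit_centroids(features, k, iters=20, seed=0):
    # Plain k-means (Lloyd's algorithm) over ALL training samples at once.
    features = np.asarray(features, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(
            ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = features[assign == j].mean(axis=0)
    return centroids

def cluster_representation(x, centroids):
    # Describe a sample by its (negative) distances to every centroid.
    return -np.linalg.norm(centroids - x, axis=1)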
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
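
The two-stage pipeline above separates cleanly in code: stage one learns a representation (self-supervised, not shown), and stage two fits a simple one-class scorer on the frozen features. The Mahalanobis-style scorer below is one common choice for the second stage and an assumption of this sketch.

import numpy as np

def fit_one_class_scorer(train_feats):
    # Stage two: fit a Gaussian (Mahalanobis) scorer on frozen features
    # produced by the stage-one self-supervised encoder.
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
    prec = np.linalg.inv(cov)
    def score(f):  # higher = more anomalous
        d = f - mu
        return float(d @ prec @ d)
    return score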
- A Boundary Based Out-of-Distribution Classifier for Generalized Zero-Shot Learning [83.1490247844899]
Generalized Zero-Shot Learning (GZSL) is a challenging topic that has promising prospects in many realistic scenarios.
We propose a boundary-based Out-of-Distribution (OOD) classifier, which separates the unseen and seen domains using only seen samples for training.
We extensively validate our approach on five popular benchmark datasets including AWA1, AWA2, CUB, FLO and SUN.
arXiv Detail & Related papers (2020-08-09T11:27:19Z)
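
A rough sketch of a boundary-based seen/unseen split trained on seen samples only: fit a center and radius per seen class in the embedding space, and flag a test point as unseen if it falls outside every class boundary. The per-class quantile radius is an illustrative stand-in for the paper's learned manifold boundary.

import numpy as np

def fit_boundaries(feats_by_class, q=0.95):
    # One (center, radius) boundary per seen class, from seen samples only.
    boundaries = {}
    for c, f in feats_by_class.items():
        center = f.mean(axis=0)
        radius = np.quantile(np.linalg.norm(f - center, axis=1), q)
        boundaries[c] = (center, radius)
    return boundaries

def is_unseen(x, boundaries):
    # Unseen if x falls outside the boundary of every seen class.
    return all(np.linalg.norm(x - center) > radius
               for center, radius in boundaries.values())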