Bias-Awareness for Zero-Shot Learning the Seen and Unseen
- URL: http://arxiv.org/abs/2008.11185v1
- Date: Tue, 25 Aug 2020 17:38:40 GMT
- Title: Bias-Awareness for Zero-Shot Learning the Seen and Unseen
- Authors: William Thong and Cees G.M. Snoek
- Abstract summary: Generalized zero-shot learning recognizes inputs from both seen and unseen classes.
We propose a bias-aware learner to map inputs to a semantic embedding space for generalized zero-shot learning.
- Score: 47.09887661463657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generalized zero-shot learning recognizes inputs from both seen and unseen
classes. Yet, existing methods tend to be biased towards the classes seen
during training. In this paper, we strive to mitigate this bias. We propose a
bias-aware learner to map inputs to a semantic embedding space for generalized
zero-shot learning. During training, the model learns to regress to real-valued
class prototypes in the embedding space with temperature scaling, while a
margin-based bidirectional entropy term regularizes seen and unseen
probabilities. Relying on a real-valued semantic embedding space provides a
versatile approach, as the model can operate on different types of semantic
information for both seen and unseen classes. Experiments are carried out on
four benchmarks for generalized zero-shot learning and demonstrate the benefits
of the proposed bias-aware classifier, both as a stand-alone method and in combination with generated features.
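As a reading aid, here is a minimal PyTorch sketch of how such a bias-aware objective could be instantiated: temperature-scaled similarities to real-valued class prototypes trained with cross-entropy, plus a margin-based entropy term over the seen and unseen parts of the prediction. The names and the exact form of the regularizer (`bias_aware_loss`, `temperature`, `margin`, `reg_weight`) are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def bias_aware_loss(embeddings, labels, prototypes, seen_mask,
                    temperature=0.1, margin=0.2, reg_weight=1.0):
    """Illustrative bias-aware objective (not the paper's exact loss).

    embeddings : (B, D) inputs mapped into the semantic embedding space
    labels     : (B,)   ground-truth class indices (seen classes during training)
    prototypes : (C, D) real-valued class prototypes (attributes, word vectors, ...)
    seen_mask  : (C,)   boolean tensor, True for seen classes
    """
    # Temperature-scaled similarities to all class prototypes.
    logits = F.normalize(embeddings, dim=-1) @ F.normalize(prototypes, dim=-1).T
    logits = logits / temperature

    # Regression towards the correct prototype, here via cross-entropy over all classes.
    ce = F.cross_entropy(logits, labels)

    # Split the softmax distribution into its seen and unseen parts,
    # renormalize each part, and compute its entropy.
    probs = logits.softmax(dim=-1)
    p_seen = probs[:, seen_mask]
    p_unseen = probs[:, ~seen_mask]
    p_seen = p_seen / p_seen.sum(dim=-1, keepdim=True)
    p_unseen = p_unseen / p_unseen.sum(dim=-1, keepdim=True)
    h_seen = -(p_seen * p_seen.clamp_min(1e-8).log()).sum(dim=-1)
    h_unseen = -(p_unseen * p_unseen.clamp_min(1e-8).log()).sum(dim=-1)

    # Margin-based bidirectional entropy term: penalize the model whenever the
    # seen and unseen entropies drift apart by more than the margin, in either
    # direction, which discourages over-confidence on the seen classes.
    entropy_reg = F.relu((h_seen - h_unseen).abs() - margin).mean()

    return ce + reg_weight * entropy_reg
```

At test time the same temperature-scaled similarities can be computed against both seen and unseen prototypes, which is what makes a single embedding-space classifier applicable to the generalized zero-shot setting.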
Related papers
- An Attention-based Framework for Fair Contrastive Learning [2.1605931466490795]
We propose a new method for fair contrastive learning that employs an attention mechanism to model bias-causing interactions.
Our attention mechanism avoids bias-causing samples that confound the model and focuses on bias-reducing samples that help learn semantically meaningful representations.
arXiv Detail & Related papers (2024-11-22T07:11:35Z)
- Memory Consistency Guided Divide-and-Conquer Learning for Generalized Category Discovery [56.172872410834664]
Generalized category discovery (GCD) aims to address a more realistic and challenging setting of semi-supervised learning.
We propose a Memory Consistency guided Divide-and-conquer Learning framework (MCDL).
Our method outperforms state-of-the-art models by a large margin on both seen and unseen classes in generic image recognition.
arXiv Detail & Related papers (2024-01-24T09:39:45Z)
- Beyond Prototypes: Semantic Anchor Regularization for Better Representation Learning [82.29761875805369]
One of the ultimate goals of representation learning is to achieve compactness within a class and good separability between classes.
We propose a novel perspective that uses pre-defined class anchors, serving as feature centroids, to unidirectionally guide feature learning.
The proposed Semantic Anchor Regularization (SAR) can be used in a plug-and-play manner in existing models; a minimal sketch of the fixed-anchor idea appears after this list.
arXiv Detail & Related papers (2023-12-19T05:52:38Z)
- Class Distribution Shifts in Zero-Shot Learning: Learning Robust Representations [3.8980564330208662]
We propose a model that assumes that the attribute responsible for the shift is unknown in advance.
We show that our approach improves generalization on diverse class distributions in both simulations and real-world datasets.
arXiv Detail & Related papers (2023-11-30T14:14:31Z)
- Towards the Generalization of Contrastive Self-Supervised Learning [11.889992921445849]
We present a theoretical explanation of how contrastive self-supervised pre-trained models generalize to downstream tasks.
We further explore SimCLR and Barlow Twins, which are two canonical contrastive self-supervised methods.
arXiv Detail & Related papers (2021-11-01T07:39:38Z)
- CLASTER: Clustering with Reinforcement Learning for Zero-Shot Action Recognition [52.66360172784038]
We propose a clustering-based model, which considers all training samples at once, instead of optimizing for each instance individually.
We call the proposed method CLASTER and observe that it consistently improves over the state-of-the-art on all standard datasets.
arXiv Detail & Related papers (2021-01-18T12:46:24Z)
- Entropy-Based Uncertainty Calibration for Generalized Zero-Shot Learning [49.04790688256481]
The goal of generalized zero-shot learning (GZSL) is to recognise both seen and unseen classes.
Most GZSL methods typically learn to synthesise visual representations from semantic information on the unseen classes.
We propose a novel framework that leverages dual variational autoencoders with a triplet loss to learn discriminative latent features.
arXiv Detail & Related papers (2021-01-09T05:21:27Z)
- Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection [51.041763676948705]
Iterative Nullspace Projection (INLP) is a novel method for removing information from neural representations.
We show that our method is able to mitigate bias in word embeddings, as well as to increase fairness in a setting of multi-class classification.
arXiv Detail & Related papers (2020-04-16T14:02:50Z)
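The Iterative Nullspace Projection entry above is concrete enough to sketch. Below is a simplified Python/NumPy version, assuming a scikit-learn logistic-regression probe for the protected attribute; the published method includes further details (probe choice, stopping criteria) that are omitted here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def inlp(X, z, n_iterations=10):
    """Simplified INLP: repeatedly fit a linear probe for the protected
    attribute z and project the representations onto the nullspace of the
    probe's weights, so that direction is no longer linearly recoverable.

    X : (N, D) neural representations (e.g. word embeddings)
    z : (N,)   protected attribute labels (e.g. a binary attribute)
    Returns the debiased representations and the accumulated projection.
    """
    D = X.shape[1]
    P_total = np.eye(D)
    X_proj = X.copy()
    for _ in range(n_iterations):
        probe = LogisticRegression(max_iter=1000).fit(X_proj, z)
        W = probe.coef_                  # (k, D) directions used by the probe
        B = np.linalg.qr(W.T)[0]         # (D, k) orthonormal basis of those directions
        P_null = np.eye(D) - B @ B.T     # projector onto their nullspace
        X_proj = X_proj @ P_null         # remove the probe's directions
        P_total = P_null @ P_total
    return X_proj, P_total
```

Each iteration removes the direction(s) the current probe relies on; after enough iterations the attribute becomes hard to recover with a linear classifier while most other information in the representations is preserved, which is the bias-mitigation effect the summary above refers to.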
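Similarly, the Semantic Anchor Regularization entry describes guiding features towards pre-defined class anchors. The sketch below, referenced in that entry, assumes fixed (non-trainable) anchors and a simple pull-toward-anchor term; the names and weighting are illustrative and not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def anchor_regularized_loss(features, labels, anchors, classifier_logits,
                            anchor_weight=0.5):
    """Illustrative anchor regularization: features are pulled towards fixed,
    pre-defined class anchors, so the anchors guide the features rather than
    the features dragging the class centroids around (the 'unidirectional' idea).

    features          : (B, D) backbone features
    labels            : (B,)   class indices
    anchors           : (C, D) pre-defined class anchors, kept frozen
    classifier_logits : (B, C) logits from the task classifier
    """
    # Standard task loss.
    ce = F.cross_entropy(classifier_logits, labels)

    # Pull each feature towards its class anchor; detach() keeps the anchors
    # fixed so the guidance is one-way.
    target = anchors.detach()[labels]                        # (B, D)
    pull = 1.0 - F.cosine_similarity(features, target, dim=-1)

    return ce + anchor_weight * pull.mean()
```

Because the anchors never move, every class keeps a stable, well-separated target in feature space, which matches the compactness and separability goal stated in the entry.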
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.