Rethinking Class Relations: Absolute-relative Supervised and
Unsupervised Few-shot Learning
- URL: http://arxiv.org/abs/2001.03919v4
- Date: Wed, 9 Jun 2021 04:36:13 GMT
- Title: Rethinking Class Relations: Absolute-relative Supervised and
Unsupervised Few-shot Learning
- Authors: Hongguang Zhang, Piotr Koniusz, Songlei Jian, Hongdong Li, Philip H.
S. Torr
- Abstract summary: We study the fundamental problem of simplistic class modeling in current few-shot learning methods.
We propose a novel Absolute-relative Learning paradigm to fully take advantage of label information to refine the image representations.
- Score: 157.62595449130973
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The majority of existing few-shot learning methods describe image relations
with binary labels. However, such binary relations are insufficient to teach
the network complicated real-world relations, due to the lack of decision
smoothness. Furthermore, current few-shot learning models capture only the
similarity via relation labels, but they are not exposed to class concepts
associated with objects, which is likely detrimental to the classification
performance due to underutilization of the available class labels. By analogy,
children learn the concept of a tiger from a few actual examples as well as
from comparisons of tigers to other animals. Thus, we hypothesize
that in fact both similarity and class concept learning must be occurring
simultaneously. With these observations at hand, we study the fundamental
problem of simplistic class modeling in current few-shot learning methods. We
rethink the relations between class concepts, and propose a novel
Absolute-relative Learning paradigm to fully take advantage of label
information to refine the image representations and correct the relation
understanding in both supervised and unsupervised scenarios. Our proposed
paradigm improves the performance of several state-of-the-art models on
publicly available datasets.
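For concreteness, a minimal sketch of combining a relative (similarity) objective with an absolute (class concept) objective is given below. The module names, the backbone, and the loss weighting are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of joint "absolute" (class) and "relative" (similarity)
# supervision in the spirit of the abstract. Module names, the backbone,
# and the loss weighting are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AbsoluteRelativeNet(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone                               # shared feature extractor
        self.class_head = nn.Linear(feat_dim, num_classes)     # absolute branch
        self.relation_head = nn.Sequential(                    # relative branch on pairs
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 1))

    def forward(self, x_a, x_b):
        f_a, f_b = self.backbone(x_a), self.backbone(x_b)
        rel = self.relation_head(torch.cat([f_a, f_b], dim=-1)).squeeze(-1)
        return self.class_head(f_a), self.class_head(f_b), rel

def absolute_relative_loss(model, x_a, x_b, y_a, y_b, alpha=0.5):
    logits_a, logits_b, rel = model(x_a, x_b)
    # Absolute term: standard cross-entropy on the class labels of both images.
    abs_loss = F.cross_entropy(logits_a, y_a) + F.cross_entropy(logits_b, y_b)
    # Relative term: binary relation label = 1 if the two images share a class.
    same = (y_a == y_b).float()
    rel_loss = F.binary_cross_entropy_with_logits(rel, same)
    return rel_loss + alpha * abs_loss
```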
Related papers
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
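The entry above mentions contrastive methods as one family of discriminative SSL. A minimal InfoNCE-style contrastive loss is sketched below; the temperature value and the two-view batch layout are assumptions for illustration.

```python
# Minimal InfoNCE-style contrastive loss, illustrating the discriminative SSL
# family discussed above. The temperature and two-view layout are assumptions.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                       # (N, N) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)     # positives on the diagonal
    return F.cross_entropy(logits, targets)
```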
- VGSE: Visually-Grounded Semantic Embeddings for Zero-Shot Learning [113.50220968583353]
We propose to discover semantic embeddings containing discriminative visual properties for zero-shot learning.
Our model visually divides a set of images from seen classes into clusters of local image regions according to their visual similarity.
We demonstrate that our visually-grounded semantic embeddings further improve performance over word embeddings across various ZSL models by a large margin.
arXiv Detail & Related papers (2022-03-20T03:49:02Z)
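The VGSE entry above describes clustering local image regions from seen classes by visual similarity. A simplified sketch of such patch clustering follows; the patch splitting and the use of scikit-learn k-means are assumptions, not that paper's implementation.

```python
# Simplified sketch: cluster local patch features of seen-class images by
# visual similarity, as a proxy for building visually grounded embeddings.
import numpy as np
from sklearn.cluster import KMeans

def cluster_patches(images, num_clusters=32, patch=16):
    """images: (N, H, W, C) float array; returns a cluster id per patch."""
    patches = []
    for img in images:
        H, W = img.shape[:2]
        for i in range(0, H - patch + 1, patch):
            for j in range(0, W - patch + 1, patch):
                patches.append(img[i:i + patch, j:j + patch].reshape(-1))
    patches = np.stack(patches)
    return KMeans(n_clusters=num_clusters, n_init=10).fit_predict(patches)
```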
- Semantic-Based Few-Shot Learning by Interactive Psychometric Testing [14.939767383180786]
Few-shot classification tasks aim to classify images in query sets based on only a few labeled examples in support sets.
In this work, we advance few-shot learning towards this more challenging scenario, namely semantic-based few-shot learning.
We propose a method that addresses this paradigm by capturing the inner semantic relationships using interactive psychometric learning.
arXiv Detail & Related papers (2021-12-16T21:03:09Z)
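The entry above defines the few-shot protocol of support and query sets. A minimal nearest-prototype classifier in the style of Prototypical Networks is sketched below to illustrate that protocol; it is not the psychometric method proposed in that paper.

```python
# Minimal nearest-prototype few-shot classifier illustrating the support/query
# protocol described above (Prototypical Networks style, for illustration only).
import torch

def prototype_classify(support_feats, support_labels, query_feats):
    """support_feats: (S, D), support_labels: (S,), query_feats: (Q, D)."""
    classes = support_labels.unique()
    protos = torch.stack([support_feats[support_labels == c].mean(0) for c in classes])
    dists = torch.cdist(query_feats, protos)        # (Q, num_classes)
    return classes[dists.argmin(dim=1)]             # predicted class per query
```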
- Exploiting Class Similarity for Machine Learning with Confidence Labels and Projective Loss Functions [0.0]
Class labels are related to each other, with certain class labels being more similar to each other than others.
Current labeling techniques fail to explicitly capture such similarity information.
We use our approach to train neural networks with noisy labels, as we believe noisy labels are partly a result of confusability arising from class similarity.
arXiv Detail & Related papers (2021-03-25T04:49:44Z)
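The entry above argues for labels that encode class similarity. A minimal sketch of training with soft (confidence) targets derived from a class-similarity matrix follows; the similarity matrix and mixing weight are assumptions, and this is not that paper's projective loss.

```python
# Sketch of training with "confidence" (soft) labels that encode class
# similarity instead of one-hot targets. Matrix and weight are assumptions.
import torch
import torch.nn.functional as F

def soft_label_loss(logits, targets, class_similarity, smooth=0.2):
    """logits: (N, C); targets: (N,) int; class_similarity: (C, C), rows sum to 1."""
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    soft = (1 - smooth) * one_hot + smooth * class_similarity[targets]
    return -(soft * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```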
- Logic-guided Semantic Representation Learning for Zero-Shot Relation Classification [31.887770824130957]
We propose a novel logic-guided semantic representation learning model for zero-shot relation classification.
Our approach builds connections between seen and unseen relations via implicit and explicit semantic representations with knowledge graph embeddings and logic rules.
arXiv Detail & Related papers (2020-10-30T04:30:09Z)
- Pairwise Supervision Can Provably Elicit a Decision Boundary [84.58020117487898]
Similarity learning is the problem of eliciting useful representations by predicting the relationship between a pair of patterns.
We show that similarity learning is capable of solving binary classification by directly eliciting a decision boundary.
arXiv Detail & Related papers (2020-06-11T05:35:16Z)
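The entry above describes learning from pairwise same-or-different supervision. A minimal sketch follows, in which a pair scorer trained on such labels classifies a new point by comparison with labeled anchors; all names and the anchor-based decision rule are illustrative assumptions.

```python
# Sketch of pairwise supervision: a scorer sees only "same class / different
# class" pair labels; an unseen point is then classified via labeled anchors.
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x_a, x_b):                    # logit that x_a, x_b share a class
        return self.net(torch.cat([x_a, x_b], dim=-1)).squeeze(-1)

def classify_by_anchors(scorer, x, anchors, anchor_labels):
    """x: (D,); anchors: (K, D); anchor_labels: (K,) in {0, 1}."""
    scores = scorer(x.expand_as(anchors), anchors)  # similarity to each anchor
    best = scores.argmax()
    return anchor_labels[best]                      # inherit the closest anchor's label
```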
- Learning to Compare Relation: Semantic Alignment for Few-Shot Learning [48.463122399494175]
We present a novel semantic alignment model to compare relations, which is robust to content misalignment.
We conduct extensive experiments on several few-shot learning datasets.
arXiv Detail & Related papers (2020-02-29T08:37:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.