Improving Few-shot Learning with Weakly-supervised Object Localization
- URL: http://arxiv.org/abs/2105.11715v1
- Date: Tue, 25 May 2021 07:39:32 GMT
- Title: Improving Few-shot Learning with Weakly-supervised Object Localization
- Authors: Inyong Koo, Minki Jeong, Changick Kim
- Abstract summary: We propose a novel framework that generates class representations by extracting features from class-relevant regions of the images.
Our method outperforms the baseline few-shot model in miniImageNet and tieredImageNet benchmarks.
- Score: 24.3569501375842
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot learning often involves metric learning-based classifiers, which
predict the image label by comparing the distance between the extracted feature
vector and class representations. However, applying global pooling in the
backend of the feature extractor may not produce an embedding that correctly
focuses on the class object. In this work, we propose a novel framework that
generates class representations by extracting features from class-relevant
regions of the images. Given only a few exemplary images with image-level
labels, our framework first localizes the class objects by spatially
decomposing the similarity between the images and their class prototypes. Then,
enhanced class representations are achieved from the localization results. We
also propose a loss function to enhance distinctions of the refined features.
Our method outperforms the baseline few-shot model in miniImageNet and
tieredImageNet benchmarks.
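The abstract describes two ideas: metric-based classification against class prototypes, and localizing class objects by spatially decomposing image-prototype similarity. A minimal sketch of both, assuming mean-pooled class prototypes and cosine similarity per spatial location (function names and shapes are illustrative, not the paper's actual implementation):

```python
import numpy as np

def class_prototypes(support_feats, labels, n_classes):
    """Mean feature vector per class from the few support examples."""
    return np.stack([support_feats[labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_feat, prototypes):
    """Metric-based prediction: pick the class whose prototype is
    nearest to the query feature in embedding space."""
    dists = np.linalg.norm(prototypes - query_feat, axis=1)
    return int(np.argmin(dists))

def similarity_map(feature_map, prototype):
    """Spatially decompose similarity: cosine similarity between each
    location of an (H, W, D) feature map and a (D,) class prototype.
    High-similarity regions indicate class-relevant image areas."""
    norms = np.linalg.norm(feature_map, axis=-1) * np.linalg.norm(prototype)
    return (feature_map @ prototype) / (norms + 1e-8)
```

The similarity map could then be thresholded into a mask and used to re-pool features from class-relevant regions only, which is the role global pooling fails to play when background dominates the image.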
Related papers
- What's in a Name? Beyond Class Indices for Image Recognition [28.02490526407716]
We propose a vision-language model that assigns class names to images, given only a large (essentially unconstrained) vocabulary of categories as prior information.
We leverage non-parametric methods to establish meaningful relationships between images, allowing the model to automatically narrow down the pool of candidate names.
Our method leads to a roughly 50% improvement over the baseline on ImageNet in the unsupervised setting.
arXiv Detail & Related papers (2023-04-05T11:01:23Z) - Attribute Prototype Network for Any-Shot Learning [113.50220968583353]
We argue that an image representation with integrated attribute localization ability would be beneficial for any-shot, i.e. zero-shot and few-shot, image classification tasks.
We propose a novel representation learning framework that jointly learns global and local features using only class-level attributes.
arXiv Detail & Related papers (2022-04-04T02:25:40Z) - Matching Feature Sets for Few-Shot Image Classification [22.84472344406448]
We argue that a set-based representation intrinsically builds a richer representation of images from the base classes.
Our approach, dubbed SetFeat, embeds shallow self-attention mechanisms inside existing encoder architectures.
arXiv Detail & Related papers (2022-04-02T22:42:54Z) - Local and Global GANs with Semantic-Aware Upsampling for Image Generation [201.39323496042527]
We consider generating images using local context.
We propose a class-specific generative network using semantic maps as guidance.
Lastly, we propose a novel semantic-aware upsampling method.
arXiv Detail & Related papers (2022-02-28T19:24:25Z) - Prototypical Region Proposal Networks for Few-Shot Localization and Classification [1.5100087942838936]
We develop a framework that unifies segmentation and classification into an end-to-end classification model -- PRoPnet.
We empirically demonstrate that our methods improve accuracy on image datasets with natural scenes containing multiple object classes.
arXiv Detail & Related papers (2021-04-08T04:03:30Z) - Instance Localization for Self-supervised Detection Pretraining [68.24102560821623]
We propose a new self-supervised pretext task, called instance localization.
We show that integration of bounding boxes into pretraining promotes better task alignment and architecture alignment for transfer learning.
Experimental results demonstrate that our approach yields state-of-the-art transfer learning results for object detection.
arXiv Detail & Related papers (2021-02-16T17:58:57Z) - Saliency-driven Class Impressions for Feature Visualization of Deep Neural Networks [55.11806035788036]
It is advantageous to visualize the features considered to be essential for classification.
Existing visualization methods generate high-confidence images consisting of both background and foreground features.
In this work, we propose a saliency-driven approach to visualize discriminative features that are considered most important for a given task.
arXiv Detail & Related papers (2020-07-31T06:11:06Z) - Part-aware Prototype Network for Few-shot Semantic Segmentation [50.581647306020095]
We propose a novel few-shot semantic segmentation framework based on the prototype representation.
Our key idea is to decompose the holistic class representation into a set of part-aware prototypes.
We develop a novel graph neural network model to generate and enhance the proposed part-aware prototypes.
arXiv Detail & Related papers (2020-07-13T11:03:09Z) - One-Shot Image Classification by Learning to Restore Prototypes [11.448423413463916]
One-shot image classification aims to train image classifiers over the dataset with only one image per category.
For one-shot learning, existing metric learning approaches suffer from poor performance because the single training image may not be representative of the class.
We propose a simple yet effective regression model, denoted by RestoreNet, which learns a class transformation on the image feature to move the image closer to the class center in the feature space.
arXiv Detail & Related papers (2020-05-04T02:11:30Z) - Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z) - Weakly-supervised Object Localization for Few-shot Learning and Fine-grained Few-shot Learning [0.5156484100374058]
Few-shot learning aims to learn novel visual categories from very few samples.
We propose a Self-Attention Based Complementary Module (SAC Module) to fulfill the weakly-supervised object localization.
We also produce the activated masks for selecting discriminative deep descriptors for few-shot classification.
arXiv Detail & Related papers (2020-03-02T14:07:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.