Online Bag-of-Visual-Words Generation for Unsupervised Representation Learning
- URL: http://arxiv.org/abs/2012.11552v1
- Date: Mon, 21 Dec 2020 18:31:21 GMT
- Title: Online Bag-of-Visual-Words Generation for Unsupervised Representation Learning
- Authors: Spyros Gidaris, Andrei Bursuc, Gilles Puy, Nikos Komodakis, Matthieu Cord, Patrick Pérez
- Abstract summary: We propose a teacher-student scheme to learn representations by training a convnet to reconstruct a bag-of-visual-words (BoW) representation of an image.
Our strategy performs an online training of both the teacher network (whose role is to generate the BoW targets) and the student network (whose role is to learn representations) along with an online update of the visual-words vocabulary.
- Score: 59.29452780994169
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning image representations without human supervision is an important and
active research field. Several recent approaches have successfully leveraged
the idea of making such a representation invariant under different types of
perturbations, especially via contrastive-based instance discrimination
training. Although effective visual representations should indeed exhibit such
invariances, there are other important characteristics, such as encoding
contextual reasoning skills, for which alternative reconstruction-based
approaches might be better suited.
With this in mind, we propose a teacher-student scheme to learn
representations by training a convnet to reconstruct a bag-of-visual-words
(BoW) representation of an image, given as input a perturbed version of that
same image. Our strategy performs an online training of both the teacher
network (whose role is to generate the BoW targets) and the student network
(whose role is to learn representations), along with an online update of the
visual-words vocabulary (used for the BoW targets). This idea effectively
enables fully online BoW-guided unsupervised learning. Extensive experiments
demonstrate the effectiveness of our BoW-based strategy, which surpasses previous
state-of-the-art methods (including contrastive-based ones) in several
applications. For instance, in downstream tasks such as Pascal object detection,
Pascal classification and Places205 classification, our method improves over
all prior unsupervised approaches, thus establishing new state-of-the-art
results that are also significantly better than even those of supervised
pre-training. We provide the implementation code at
https://github.com/valeoai/obow.
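To make the scheme concrete, below is a minimal, self-contained sketch of the online BoW teacher-student idea described in the abstract. The tiny backbone, vocabulary size, soft-assignment temperature, momentum coefficients, and vocabulary-update rule are illustrative assumptions rather than the authors' settings; the official implementation is available at https://github.com/valeoai/obow.

```python
# Minimal sketch of online BoW-guided unsupervised learning (all settings are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

K, C = 64, 32  # assumed visual-word vocabulary size and feature channels


class TinyConvNet(nn.Module):
    """Stand-in backbone with a spatial feature map and a BoW-prediction head."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, C, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(C, C, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.bow_head = nn.Linear(C, K)  # predicts a BoW distribution from pooled features

    def feature_map(self, x):
        return self.backbone(x)  # (B, C, H, W)

    def predict_bow(self, x):
        return self.bow_head(self.feature_map(x).mean(dim=(2, 3)))  # BoW logits


def bow_target(fmap, vocab, tau=0.1):
    """Soft-assign each local teacher feature to the K visual words and pool into a histogram."""
    feats = F.normalize(fmap.flatten(2).transpose(1, 2), dim=-1)               # (B, HW, C)
    assign = F.softmax(feats @ F.normalize(vocab, dim=-1).t() / tau, dim=-1)   # (B, HW, K)
    bow = assign.sum(dim=1)                                                    # pool over locations
    return bow / bow.sum(dim=-1, keepdim=True)                                 # normalized BoW target


student, teacher = TinyConvNet(), TinyConvNet()
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)

vocab = torch.randn(K, C)                                  # visual-word vocabulary, updated online
optimizer = torch.optim.SGD(student.parameters(), lr=0.1)
m_net, m_vocab = 0.99, 0.99                                # assumed momentum coefficients

for step in range(10):                                     # toy loop over random "images"
    images = torch.rand(8, 3, 32, 32)
    perturbed = torch.flip(images, dims=[3])               # stand-in for heavier perturbations

    with torch.no_grad():
        t_fmap = teacher.feature_map(images)               # teacher sees the original view
        target = bow_target(t_fmap, vocab)
        # Online vocabulary update: move each visual word toward the mean of the
        # local features currently assigned to it (illustrative rule only).
        flat = t_fmap.flatten(2).transpose(1, 2).reshape(-1, C)
        word_id = (F.normalize(flat, dim=-1) @ F.normalize(vocab, dim=-1).t()).argmax(dim=-1)
        for k in torch.unique(word_id):
            vocab[k] = m_vocab * vocab[k] + (1 - m_vocab) * flat[word_id == k].mean(dim=0)

    pred = student.predict_bow(perturbed)                  # student sees the perturbed view
    loss = F.kl_div(F.log_softmax(pred, dim=-1), target, reduction="batchmean")
    optimizer.zero_grad(); loss.backward(); optimizer.step()

    with torch.no_grad():                                  # online (momentum) update of the teacher
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(m_net).add_(p_s, alpha=1 - m_net)
```

The point the sketch illustrates is that both the teacher weights and the visual-word vocabulary are refreshed during training, so the BoW targets evolve together with the student rather than being precomputed offline.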
Related papers
- Intra-task Mutual Attention based Vision Transformer for Few-Shot Learning [12.5354658533836]
Humans possess a remarkable ability to accurately classify new, unseen images after being exposed to only a few examples.
For artificial neural network models, determining the most relevant features for distinguishing between two images with limited samples presents a challenge.
We propose an intra-task mutual attention method for few-shot learning that involves splitting the support and query samples into patches.
arXiv Detail & Related papers (2024-05-06T02:02:57Z)
- Task-Oriented Multi-Modal Mutual Learning for Vision-Language Models [52.3032592038514]
We propose a class-aware text prompt to enrich generated prompts with label-related image information.
We achieve an average improvement of 4.03% on new classes and 3.19% on harmonic-mean over eleven classification benchmarks.
arXiv Detail & Related papers (2023-03-30T06:02:40Z)
- Vision Learners Meet Web Image-Text Pairs [32.36188289972377]
In this work, we consider self-supervised pre-training on noisy, web-sourced image-text paired data.
We compare a range of methods, including single-modal ones that use masked training objectives and multi-modal ones that use image-text contrastive training.
We present a new visual representation pre-training method, MUlti-modal Generator (MUG), that learns from scalable web-sourced image-text data.
arXiv Detail & Related papers (2023-01-17T18:53:24Z)
- Few-Shot Object Detection by Knowledge Distillation Using Bag-of-Visual-Words Representations [58.48995335728938]
We design a novel knowledge distillation framework to guide the learning of the object detector.
We first present a novel Position-Aware Bag-of-Visual-Words model for learning a representative bag of visual words.
We then perform knowledge distillation based on the fact that an image should have consistent BoVW representations in two different feature spaces (a minimal, assumed sketch of such a consistency loss is given after this list).
arXiv Detail & Related papers (2022-07-25T10:40:40Z)
- A Simple Long-Tailed Recognition Baseline via Vision-Language Model [92.2866546058082]
The visual world naturally exhibits a long-tailed distribution of open classes, which poses great challenges to modern visual systems.
Recent advances in contrastive visual-language pretraining shed light on a new pathway for visual recognition.
We propose BALLAD to leverage contrastive vision-language models for long-tailed recognition.
arXiv Detail & Related papers (2021-11-29T17:49:24Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
- Learning Representations by Predicting Bags of Visual Words [55.332200948110895]
Self-supervised representation learning aims to learn convnet-based image representations from unlabeled data.
Inspired by the success of NLP methods in this area, in this work we propose a self-supervised approach based on spatially dense image descriptions.
arXiv Detail & Related papers (2020-02-27T16:45:25Z)
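Following up on the forward reference in the knowledge-distillation entry above, here is a minimal sketch of what a BoVW-consistency objective between two feature spaces could look like. The soft-assignment quantization, temperature, vocabulary sizes, and KL objective below are assumptions made for illustration; they are not taken from the cited paper.

```python
# Illustrative BoVW-consistency loss between two feature spaces (all settings assumed).
import torch
import torch.nn.functional as F


def bovw_hist(feature_map, vocab, tau=0.1):
    """Soft-assign local features to a visual-word vocabulary and pool into a histogram."""
    feats = F.normalize(feature_map.flatten(2).transpose(1, 2), dim=-1)       # (B, HW, C)
    assign = F.softmax(feats @ F.normalize(vocab, dim=-1).t() / tau, dim=-1)  # (B, HW, K)
    hist = assign.sum(dim=1)
    return hist / hist.sum(dim=-1, keepdim=True)


def bovw_consistency_loss(student_fmap, teacher_fmap, student_vocab, teacher_vocab):
    """KL divergence pushing the two feature spaces toward consistent BoVW histograms."""
    p_student = bovw_hist(student_fmap, student_vocab)
    p_teacher = bovw_hist(teacher_fmap, teacher_vocab).detach()  # teacher histogram is the target
    return F.kl_div(p_student.log(), p_teacher, reduction="batchmean")


# Toy usage with random tensors standing in for the two backbones' feature maps.
s_fmap, t_fmap = torch.rand(4, 128, 7, 7), torch.rand(4, 256, 7, 7)
s_vocab, t_vocab = torch.randn(64, 128), torch.randn(64, 256)
print(bovw_consistency_loss(s_fmap, t_fmap, s_vocab, t_vocab).item())
```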
This list is automatically generated from the titles and abstracts of the papers on this site.