Multi-Faceted Distillation of Base-Novel Commonality for Few-shot Object
Detection
- URL: http://arxiv.org/abs/2207.11184v1
- Date: Fri, 22 Jul 2022 16:46:51 GMT
- Title: Multi-Faceted Distillation of Base-Novel Commonality for Few-shot Object
Detection
- Authors: Shuang Wu, Wenjie Pei, Dianwen Mei, Fanglin Chen, Jiandong Tian,
Guangming Lu
- Abstract summary: We learn three types of class-agnostic commonalities between base and novel classes explicitly.
Our method can be readily integrated into most existing fine-tuning-based methods and consistently improves performance by a large margin.
- Score: 58.48995335728938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing methods for few-shot object detection follow the fine-tuning
paradigm, which implicitly assumes that class-agnostic, generalizable knowledge can be
learned from base classes with abundant samples and transferred to novel classes with
limited samples via such a two-stage training strategy. However, this assumption does
not necessarily hold, since the object detector can hardly distinguish class-agnostic
knowledge from class-specific knowledge automatically without explicit modeling. In this
work we propose to learn three types of class-agnostic commonalities between base and
novel classes explicitly: recognition-related semantic commonalities,
localization-related semantic commonalities and distribution commonalities. We design a
unified distillation framework based on a memory bank, which is able to perform
distillation of all three types of commonalities jointly and efficiently. Extensive
experiments demonstrate that our method can be readily integrated into most existing
fine-tuning-based methods and consistently improves performance by a large margin.
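The abstract does not spell out implementation details, but a minimal sketch of the memory-bank idea, with a momentum-updated prototype per class used as a non-parametric distillation target for RoI features, might look like the following. All names (`MemoryBank`, `commonality_distill_loss`) and the specific loss form are assumptions for illustration, not the paper's exact method:

```python
# Hypothetical sketch: a momentum-updated memory bank of class prototypes used
# as a non-parametric distillation target for RoI features. Illustrative only.
import torch
import torch.nn.functional as F

class MemoryBank:
    def __init__(self, num_classes: int, dim: int, momentum: float = 0.99):
        self.protos = torch.zeros(num_classes, dim)  # one prototype per class
        self.momentum = momentum

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor):
        # Exponential moving average of class-wise mean features.
        for c in labels.unique():
            mean_feat = feats[labels == c].mean(dim=0)
            self.protos[c] = self.momentum * self.protos[c] + (1 - self.momentum) * mean_feat

def commonality_distill_loss(roi_feats, labels, bank: MemoryBank, temperature: float = 0.1):
    # Treat the bank prototypes (base and novel alike) as a shared, class-agnostic
    # reference space and pull each RoI feature toward its class prototype.
    feats = F.normalize(roi_feats, dim=1)
    protos = F.normalize(bank.protos.to(roi_feats.device), dim=1)
    logits = feats @ protos.t() / temperature
    return F.cross_entropy(logits, labels)
```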
Related papers
- Multi-Label Knowledge Distillation [86.03990467785312]
We propose a novel multi-label knowledge distillation method.
On one hand, it exploits the informative semantic knowledge from the logits by dividing the multi-label learning problem into a set of binary classification problems.
On the other hand, it enhances the distinctiveness of the learned feature representations by leveraging the structural information of label-wise embeddings.
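A minimal sketch of the binary-decomposition idea, where each label's logit is treated as an independent present/absent problem and the teacher's softened probability is matched per label. The function name, the temperature, and the binary-KL form are illustrative assumptions, not the paper's exact loss:

```python
# Hypothetical sketch: distill multi-label logits as a set of independent
# present/absent (binary) problems. Illustrative, not the paper's exact loss.
import torch

def binary_decomposed_kd(student_logits, teacher_logits, temperature: float = 2.0):
    # Match the teacher's softened per-label probability with a binary KL term.
    eps = 1e-7
    t = torch.sigmoid(teacher_logits / temperature)
    s = torch.sigmoid(student_logits / temperature)
    kl = t * ((t + eps).log() - (s + eps).log()) \
       + (1 - t) * ((1 - t + eps).log() - (1 - s + eps).log())
    return kl.mean()
```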
arXiv Detail & Related papers (2023-08-12T03:19:08Z)
- Knowledge Distillation from Single to Multi Labels: an Empirical Study [14.12487391004319]
We introduce a novel distillation method based on Class Activation Maps (CAMs).
Our findings indicate that the logit-based method is not well-suited for multi-label classification.
We propose that a suitable dark knowledge should incorporate class-wise information and be highly correlated with the final classification results.
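A minimal sketch of CAM-based, class-wise distillation under the usual CAM definition (feature maps weighted by the classifier weights); the names and the MSE matching are illustrative assumptions, not the paper's exact method:

```python
# Hypothetical sketch: match class activation maps (CAMs) between teacher and
# student so the transferred "dark knowledge" is class-wise and spatial.
import torch
import torch.nn.functional as F

def class_activation_maps(feat_maps, fc_weight):
    # feat_maps: (N, C, H, W); fc_weight: (num_labels, C).
    # CAM for label k is the feature maps weighted by that label's classifier weights.
    return torch.einsum("nchw,kc->nkhw", feat_maps, fc_weight)

def cam_distill_loss(student_feats, student_fc_w, teacher_feats, teacher_fc_w):
    cam_s = class_activation_maps(student_feats, student_fc_w)
    cam_t = class_activation_maps(teacher_feats, teacher_fc_w).detach()
    # Normalize each map so the loss focuses on spatial structure, not scale.
    cam_s = F.normalize(cam_s.flatten(2), dim=-1)
    cam_t = F.normalize(cam_t.flatten(2), dim=-1)
    return F.mse_loss(cam_s, cam_t)
```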
arXiv Detail & Related papers (2023-03-15T04:39:01Z)
- Learning Classifiers of Prototypes and Reciprocal Points for Universal Domain Adaptation [79.62038105814658]
Universal Domain Adaptation aims to transfer knowledge between datasets by handling two shifts: domain shift and category shift.
The main challenge is correctly distinguishing unknown target samples while adapting the distribution of known-class knowledge from source to target.
Most existing methods approach this problem by first training a model adapted to the known target classes and then relying on a single threshold to distinguish unknown target samples.
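A minimal sketch of the single-threshold rejection scheme the summary describes (nearest known-class prototype plus a cosine-similarity cutoff); this is a generic illustration, not the paper's prototype/reciprocal-point classifier:

```python
# Hypothetical sketch: nearest known-class prototype with a single similarity
# threshold for rejecting unknown target samples. Illustrative only.
import torch.nn.functional as F

def predict_with_unknown(feats, known_protos, threshold: float = 0.5, unknown_id: int = -1):
    # Assign each target sample to its closest known-class prototype and mark it
    # as unknown when the best cosine similarity falls below the threshold.
    feats = F.normalize(feats, dim=1)
    protos = F.normalize(known_protos, dim=1)
    best_sim, pred = (feats @ protos.t()).max(dim=1)
    pred[best_sim < threshold] = unknown_id
    return pred
```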
arXiv Detail & Related papers (2022-12-16T09:01:57Z)
- Multi-Modal Few-Shot Object Detection with Meta-Learning-Based Cross-Modal Prompting [77.69172089359606]
We study multi-modal few-shot object detection (FSOD) in this paper, using both few-shot visual examples and class semantic information for detection.
Our approach is motivated by the high-level conceptual similarity of (metric-based) meta-learning and prompt-based learning.
We comprehensively evaluate the proposed multi-modal FSOD models on multiple few-shot object detection benchmarks, achieving promising results.
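A minimal sketch of the general multi-modal idea, fusing a few-shot visual prototype with a class semantic embedding and classifying queries by cosine similarity. The fusion weight `alpha` and all names are illustrative assumptions, not the paper's cross-modal prompting mechanism:

```python
# Hypothetical sketch: fuse few-shot visual prototypes with class semantic
# embeddings, then classify queries by cosine similarity. Illustrative only.
import torch
import torch.nn.functional as F

def fused_prototypes(support_feats, support_labels, text_embeds, alpha: float = 0.5):
    # support_feats: (S, D) visual features; text_embeds: (K, D) class semantics.
    protos = []
    for c in range(text_embeds.size(0)):
        mask = support_labels == c
        vis = support_feats[mask].mean(dim=0) if mask.any() else torch.zeros_like(text_embeds[c])
        protos.append(alpha * vis + (1 - alpha) * text_embeds[c])
    return torch.stack(protos)

def classify(query_feats, protos, temperature: float = 0.1):
    q = F.normalize(query_feats, dim=1)
    p = F.normalize(protos, dim=1)
    return (q @ p.t()) / temperature  # logits over the K classes
```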
arXiv Detail & Related papers (2022-04-16T16:45:06Z)
- A Similarity-based Framework for Classification Task [21.182406977328267]
Similarity-based learning gives rise to a new class of methods for multi-label learning and also achieves promising performance.
We unite similarity-based learning and generalized linear models to achieve the best of both worlds.
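A minimal sketch of uniting similarity-based features with a generalized linear model, here RBF similarities to a set of anchor points fed into logistic regression. The anchors, the kernel choice, and the names are illustrative assumptions, not the paper's exact framework:

```python
# Hypothetical sketch: represent each sample by its similarities to a set of
# anchor points and fit a generalized linear model on top. Illustrative only.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

def fit_similarity_glm(X_train, y_train, anchors, gamma: float = 0.1):
    S_train = rbf_kernel(X_train, anchors, gamma=gamma)  # similarity features
    return LogisticRegression(max_iter=1000).fit(S_train, y_train)

def predict_similarity_glm(clf, X_test, anchors, gamma: float = 0.1):
    return clf.predict(rbf_kernel(X_test, anchors, gamma=gamma))
```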
arXiv Detail & Related papers (2022-03-05T06:39:50Z)
- Dual Path Structural Contrastive Embeddings for Learning Novel Objects [6.979491536753043]
Recent research shows that learning a good feature space can be an effective way to achieve favorable performance on few-shot tasks.
We propose a simple but effective paradigm that decouples the tasks of learning feature representations and classifiers.
Our method can still achieve promising results for both standard and generalized few-shot problems in either an inductive or transductive inference setting.
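A minimal sketch of the decoupling idea: freeze a separately trained encoder and fit a lightweight nearest-prototype classifier on its features. The prototype classifier is an illustrative choice, not necessarily the paper's:

```python
# Hypothetical sketch: decouple representation learning from the classifier by
# freezing a pretrained encoder and fitting a nearest-prototype classifier.
import torch
import torch.nn.functional as F

@torch.no_grad()
def build_prototype_classifier(encoder, support_x, support_y, num_classes: int):
    # Assumes every class appears at least once in the support set.
    encoder.eval()
    feats = F.normalize(encoder(support_x), dim=1)
    protos = torch.stack([feats[support_y == c].mean(dim=0) for c in range(num_classes)])
    return F.normalize(protos, dim=1)

@torch.no_grad()
def predict(encoder, protos, query_x):
    feats = F.normalize(encoder(query_x), dim=1)
    return (feats @ protos.t()).argmax(dim=1)
```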
arXiv Detail & Related papers (2021-12-23T04:43:31Z)
- Open-Set Representation Learning through Combinatorial Embedding [62.05670732352456]
We are interested in identifying novel concepts in a dataset through representation learning based on the examples in both labeled and unlabeled classes.
We propose a learning approach, which naturally clusters examples in unseen classes using the compositional knowledge given by multiple supervised meta-classifiers on heterogeneous label spaces.
The proposed algorithm discovers novel concepts via a joint optimization that enhances the discriminativeness of unseen classes while learning representations of known classes that generalize to novel ones.
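A minimal sketch of a combinatorial embedding: concatenate the probability outputs of several classifiers trained on heterogeneous label spaces and cluster unlabeled examples in that joint space. K-means and all names are illustrative assumptions, not the paper's exact algorithm:

```python
# Hypothetical sketch: concatenate the probability outputs of classifiers
# trained on heterogeneous label spaces and cluster unlabeled examples there.
import numpy as np
from sklearn.cluster import KMeans

def combinatorial_embedding(classifiers, X):
    # Each classifier scores its own label space; concatenation gives a
    # compositional code per example.
    return np.concatenate([clf.predict_proba(X) for clf in classifiers], axis=1)

def discover_novel_clusters(classifiers, X_unlabeled, num_novel: int):
    Z = combinatorial_embedding(classifiers, X_unlabeled)
    return KMeans(n_clusters=num_novel, n_init=10).fit_predict(Z)
```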
arXiv Detail & Related papers (2021-06-29T11:51:57Z)
- Cooperative Bi-path Metric for Few-shot Learning [50.98891758059389]
We make two contributions to investigate the few-shot classification problem.
We report a simple and effective baseline trained on base classes in the way of traditional supervised learning.
We propose a cooperative bi-path metric for classification, which leverages the correlations between base classes and novel classes to further improve the accuracy.
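A minimal sketch of a bi-path metric that combines direct query-to-novel-class similarity with an indirect path through base-class similarity profiles. The mixing weight `lam` and the softmax profiles are illustrative assumptions, not the paper's exact formulation:

```python
# Hypothetical sketch: combine a direct query-to-novel-class similarity with an
# indirect path through base-class similarity profiles. Illustrative only.
import torch.nn.functional as F

def bi_path_logits(query_feats, novel_protos, base_protos, lam: float = 0.5, temperature: float = 0.1):
    q = F.normalize(query_feats, dim=1)
    n = F.normalize(novel_protos, dim=1)
    b = F.normalize(base_protos, dim=1)
    direct = q @ n.t()                                  # query -> novel classes
    q_base = F.softmax(q @ b.t() / temperature, dim=1)  # query described via base classes
    n_base = F.softmax(n @ b.t() / temperature, dim=1)  # novel prototypes described via base classes
    indirect = q_base @ n_base.t()                      # similarity of base-class profiles
    return lam * direct + (1 - lam) * indirect
```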
arXiv Detail & Related papers (2020-08-10T11:28:52Z)