Proposal Distribution Calibration for Few-Shot Object Detection
- URL: http://arxiv.org/abs/2212.07618v1
- Date: Thu, 15 Dec 2022 05:09:11 GMT
- Title: Proposal Distribution Calibration for Few-Shot Object Detection
- Authors: Bohao Li, Chang Liu, Mengnan Shi, Xiaozhong Chen, Xiangyang Ji,
Qixiang Ye
- Abstract summary: In few-shot object detection (FSOD), the two-step training paradigm is widely adopted to mitigate the severe sample imbalance.
Unfortunately, the extreme data scarcity aggravates the proposal distribution bias, hindering the RoI head from evolving toward novel classes.
We introduce a simple yet effective proposal distribution calibration (PDC) approach to neatly enhance the localization and classification abilities of the RoI head.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adapting object detectors learned with sufficient supervision to novel
classes under low-data regimes is appealing yet challenging. In few-shot object
detection (FSOD), the two-step training paradigm is widely adopted to mitigate
the severe sample imbalance, i.e., holistic pre-training on base classes, then
partial fine-tuning in a balanced setting with all classes. Since unlabeled
instances are suppressed as backgrounds in the base training phase, the learned
RPN is prone to produce biased proposals for novel instances, resulting in
dramatic performance degradation. Unfortunately, the extreme data scarcity
aggravates the proposal distribution bias, hindering the RoI head from evolving
toward novel classes. In this paper, we introduce a simple yet effective
proposal distribution calibration (PDC) approach to neatly enhance the
localization and classification abilities of the RoI head by recycling its
localization ability endowed in base training and enriching high-quality
positive samples for semantic fine-tuning. Specifically, we sample proposals
based on the base proposal statistics to calibrate the distribution bias and
impose additional localization and classification losses upon the sampled
proposals to quickly expand the base detector to novel classes. Experiments on
the commonly used Pascal VOC and MS COCO datasets demonstrate clear
state-of-the-art performance, justifying the efficacy of our PDC for FSOD. Code is
available at github.com/Bohao-Lee/PDC.
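The abstract's central idea, sampling extra proposals for novel classes from the statistics of base-class proposals, can be illustrated with a minimal sketch. The snippet below is a hypothetical interpretation, not the authors' implementation: it estimates the mean and covariance of base proposal offsets relative to their ground-truth boxes (assuming a Gaussian offset distribution), then draws calibrated proposals around a novel-class ground-truth box. All function names and box conventions (center x, center y, width, height) are illustrative assumptions.

```python
import numpy as np

def estimate_base_statistics(base_proposals, base_gt_boxes):
    """Collect normalized (dx, dy, dw, dh) offsets of base proposals w.r.t. their GT boxes."""
    offsets = []
    for (px, py, pw, ph), (gx, gy, gw, gh) in zip(base_proposals, base_gt_boxes):
        offsets.append([(px - gx) / gw, (py - gy) / gh,
                        np.log(pw / gw), np.log(ph / gh)])
    offsets = np.asarray(offsets)
    # Summarize the base proposal distribution with first- and second-order statistics.
    return offsets.mean(axis=0), np.cov(offsets, rowvar=False)

def sample_calibrated_proposals(novel_gt_box, mean, cov, n=64, rng=None):
    """Sample n proposals around a novel GT box from the base-class offset statistics."""
    rng = np.random.default_rng(rng)
    gx, gy, gw, gh = novel_gt_box
    d = rng.multivariate_normal(mean, cov, size=n)  # (n, 4) sampled offsets
    cx = gx + d[:, 0] * gw
    cy = gy + d[:, 1] * gh
    w = gw * np.exp(d[:, 2])  # exp keeps sampled widths/heights positive
    h = gh * np.exp(d[:, 3])
    return np.stack([cx, cy, w, h], axis=1)
```

In the paper's pipeline, such sampled proposals would receive additional localization and classification losses during fine-tuning; that training loop is omitted here.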
Related papers
- GRSDet: Learning to Generate Local Reverse Samples for Few-shot Object Detection (2023-12-27)
  Few-shot object detection (FSOD) aims to achieve object detection using only a few novel-class training samples.
  Most existing methods adopt a transfer-learning strategy to construct the novel class distribution.
  We propose generating local reverse samples (LRSamples) in Prototype Reference Frames to adaptively adjust the center position and boundary range of the novel class distribution.
- Sparse is Enough in Fine-tuning Pre-trained Large Language Models (2023-12-19)
  We propose a gradient-based sparse fine-tuning algorithm, named Sparse Increment Fine-Tuning (SIFT).
  We validate its effectiveness on a range of tasks including the GLUE benchmark and instruction tuning.
- Adaptive Distribution Calibration for Few-Shot Learning with Hierarchical Optimal Transport (2022-10-09)
  We propose a novel distribution calibration method by learning the adaptive weight matrix between novel samples and base classes.
  Experimental results on standard benchmarks demonstrate that our proposed plug-and-play model outperforms competing approaches.
- Learnable Distribution Calibration for Few-Shot Class-Incremental Learning (2022-10-01)
  Few-shot class-incremental learning (FSCIL) faces the challenges of memorizing old class distributions and estimating new class distributions given few training samples.
  We propose a learnable distribution calibration (LDC) approach that aims to solve these two challenges systematically within a unified framework.
- Adaptive Dimension Reduction and Variational Inference for Transductive Few-Shot Classification (2022-09-18)
  We propose a new clustering method based on variational Bayesian inference, further improved by adaptive dimension reduction.
  Our proposed method significantly improves accuracy in the realistic unbalanced transductive setting on various few-shot benchmarks.
- Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification (2022-08-05)
  Imbalanced data pose challenges for deep-learning-based classification models.
  One of the most widely used approaches for tackling imbalanced data is re-weighting.
  We propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view.
- Open-Sampling: Exploring Out-of-Distribution Data for Re-balancing Long-tailed Datasets (2022-06-17)
  Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.
  Recent studies found that directly training with out-of-distribution data in a semi-supervised manner harms generalization performance.
  We propose a novel method called Open-sampling, which utilizes open-set noisy labels to re-balance the class priors of the training dataset.
- Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning (2020-04-29)
  Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
  We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
  We show that our method provides a much more stable training phase across random restarts.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.