Learning a Unified Sample Weighting Network for Object Detection
- URL: http://arxiv.org/abs/2006.06568v2
- Date: Sun, 14 Jun 2020 05:30:43 GMT
- Title: Learning a Unified Sample Weighting Network for Object Detection
- Authors: Qi Cai and Yingwei Pan and Yu Wang and Jingen Liu and Ting Yao and Tao Mei
- Abstract summary: Region sampling or weighting plays a significant role in the success of modern region-based object detectors.
We argue that sample weighting should be data-dependent and task-dependent.
We propose a unified sample weighting network to predict a sample's task weights.
- Score: 113.98404690619982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Region sampling or weighting plays a significant role in the success of
modern region-based object detectors. Unlike some previous works, which only
focus on "hard" samples when optimizing the objective function, we argue that
sample weighting should be data-dependent and task-dependent. The importance of
a sample for the objective function optimization is determined by its
uncertainties to both object classification and bounding box regression tasks.
To this end, we devise a general loss function to cover most region-based
object detectors with various sampling strategies, and then based on it we
propose a unified sample weighting network to predict a sample's task weights.
Our framework is simple yet effective. It leverages the samples' uncertainty
distributions on classification loss, regression loss, IoU, and probability
score, to predict sample weights. Our approach has several advantages: (i) it
jointly learns sample weights for both classification and regression tasks,
which differentiates it from most previous work; (ii) it is a data-driven
process, so it avoids manual parameter tuning; and (iii) it can be
effortlessly plugged into most object detectors and achieves noticeable
performance improvements without affecting their inference time. Our approach
has been thoroughly evaluated with recent object detection frameworks and it
can consistently boost the detection accuracy. Code has been made available at
\url{https://github.com/caiqi/sample-weighting-network}.
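The core idea of the abstract, predicting per-sample, per-task weights from uncertainty cues (classification loss, regression loss, IoU, probability score) and using them to reweight the detection loss, can be sketched as follows. This is a minimal NumPy illustration, not the authors' network: `sample_task_weights`, the linear parameters `W`/`b`, and the two-task softmax are assumptions for exposition; the actual learned module and its training are described in the paper and the linked repository.

```python
import numpy as np

# Illustrative sketch only: the paper's weighting network is a learned
# module inside the detector; here a toy linear layer (W, b are made-up
# parameters) stands in for it to show the data flow.

def sample_task_weights(cls_loss, reg_loss, iou, score, W, b):
    """Map the four per-sample cues to per-task (cls, reg) weights."""
    feats = np.stack([cls_loss, reg_loss, iou, score], axis=1)  # (N, 4)
    logits = feats @ W + b                                      # (N, 2)
    # Softmax over tasks so the two weights of a sample sum to one.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def weighted_detection_loss(cls_loss, reg_loss, weights):
    """Per-sample weighted sum of the two task losses, averaged."""
    return float(np.mean(weights[:, 0] * cls_loss + weights[:, 1] * reg_loss))

rng = np.random.default_rng(0)
n = 8
cls_loss, reg_loss = rng.random(n), rng.random(n)
iou, score = rng.random(n), rng.random(n)
W, b = 0.1 * rng.normal(size=(4, 2)), np.zeros(2)

weights = sample_task_weights(cls_loss, reg_loss, iou, score, W, b)
loss = weighted_detection_loss(cls_loss, reg_loss, weights)
```

Because the weights enter only the loss, a detector keeps its unchanged forward pass at inference time, which is why the abstract can claim no impact on inference speed.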
Related papers
- Robust compressive tracking via online weighted multiple instance learning [0.6813925418351435]
We propose a visual object tracking algorithm by integrating a coarse-to-fine search strategy based on sparse representation and the weighted multiple instance learning (WMIL) algorithm.
Compared with other trackers, our approach retains more information about the original signal at lower complexity thanks to the coarse-to-fine search method, and also assigns weights to important samples.
arXiv Detail & Related papers (2024-06-14T10:48:17Z) - Better Sampling, towards Better End-to-end Small Object Detection [7.7473020808686694]
Small object detection remains unsatisfactory because small objects have limited visual features, appear at high density, and mutually overlap.
We propose methods enhancing sampling within an end-to-end framework.
Our model demonstrates a significant enhancement, achieving a 2.9% increase in average precision (AP) over the state-of-the-art (SOTA) on the VisDrone dataset.
arXiv Detail & Related papers (2024-05-17T04:37:44Z) - Sample Weight Estimation Using Meta-Updates for Online Continual Learning [7.832189413179361]
The Online Meta-learning for Sample Importance (OMSI) strategy approximates sample weights for a mini-batch in an online CL stream.
OMSI enhances both learning and retained accuracy in a controlled noisy-labeled data stream.
arXiv Detail & Related papers (2024-01-29T09:04:45Z) - Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods propose to replay the data of experienced tasks when learning new tasks.
However, storing raw data is often infeasible in practice due to memory constraints or data privacy concerns.
As a replacement, data-free data replay methods are proposed by inverting samples from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z) - Meta-Sampler: Almost-Universal yet Task-Oriented Sampling for Point Clouds [46.33828400918886]
We show how we can train an almost-universal meta-sampler across multiple tasks.
This meta-sampler can then be rapidly fine-tuned when applied to different datasets, networks, or even different tasks.
arXiv Detail & Related papers (2022-03-30T02:21:34Z) - Multi-Scale Positive Sample Refinement for Few-Shot Object Detection [61.60255654558682]
Few-shot object detection (FSOD) helps detectors adapt to unseen classes with few training instances.
We propose a Multi-scale Positive Sample Refinement (MPSR) approach to enrich object scales in FSOD.
MPSR generates multi-scale positive samples as object pyramids and refines the prediction at various scales.
arXiv Detail & Related papers (2020-07-18T09:48:29Z) - Probabilistic Anchor Assignment with IoU Prediction for Object Detection [9.703212439661097]
In object detection, determining which anchors to assign as positive or negative samples, known as anchor assignment, has been revealed as a core procedure that can significantly affect a model's performance.
We propose a novel anchor assignment strategy that adaptively separates anchors into positive and negative samples for a ground truth bounding box according to the model's learning status.
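The adaptive split described above can be illustrated with a toy version of the idea: fit a two-component 1-D Gaussian mixture to a set of anchor scores by EM and label each anchor positive if the high-mean component explains it. The function name and the plain EM loop are assumptions for exposition; the paper's method scores anchors per ground-truth box using both classification and localization quality rather than a single generic score.

```python
import numpy as np

def split_anchors_by_score(scores, iters=50):
    """Fit a 1-D two-component Gaussian mixture to anchor scores via EM;
    anchors explained by the high-mean component are labeled positive."""
    mu = np.array([scores.min(), scores.max()], dtype=float)
    sigma = np.full(2, scores.std() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each score.
        diff = scores[:, None] - mu
        pdf = pi * np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        resp = pdf / (pdf.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: re-estimate means, variances, and mixing weights.
        nk = resp.sum(axis=0) + 1e-12
        mu = (resp * scores[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (scores[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        pi = nk / len(scores)
    return resp[:, np.argmax(mu)] > 0.5  # True = assigned as positive

rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0.15, 0.05, 40),   # background-like anchors
                         rng.normal(0.85, 0.05, 10)])  # object-like anchors
positive = split_anchors_by_score(scores)
```

The split point adapts to the score distribution instead of a fixed IoU threshold, which is the sense in which the assignment follows the model's learning status.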
arXiv Detail & Related papers (2020-07-16T04:26:57Z) - Detection as Regression: Certified Object Detection by Median Smoothing [50.89591634725045]
This work is motivated by recent progress on certified classification by randomized smoothing.
We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell_2$-bounded attacks.
arXiv Detail & Related papers (2020-07-07T18:40:19Z) - AutoAssign: Differentiable Label Assignment for Dense Object Detection [94.24431503373884]
AutoAssign is an anchor-free detector for object detection.
It achieves appearance-aware label assignment through a fully differentiable weighting mechanism.
Our best model achieves 52.1% AP, outperforming all existing one-stage detectors.
arXiv Detail & Related papers (2020-07-07T14:32:21Z) - Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches, is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
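The transductive trick described in this entry, refining each class prototype with confidence-weighted unlabeled queries, reduces to a small formula. This is a generic sketch under assumed shapes (rows are feature vectors) with a hypothetical helper name; the paper's contribution is to meta-learn the confidence function itself rather than fixing it by hand.

```python
import numpy as np

def refine_prototype(support, queries, conf):
    """Confidence-weighted prototype update:
    p = (sum(support) + sum(conf_i * query_i)) / (|support| + sum(conf))."""
    num = support.sum(axis=0) + (conf[:, None] * queries).sum(axis=0)
    return num / (len(support) + conf.sum())

support = np.array([[1.0, 0.0], [0.8, 0.2]])  # labeled examples of one class
queries = np.array([[0.9, 0.1], [0.0, 1.0]])  # unlabeled query examples
conf = np.array([1.0, 0.0])                   # per-query confidence weights
proto = refine_prototype(support, queries, conf)
```

With `conf = [1, 0]`, the first query contributes fully and the second not at all, so the prototype moves only toward queries the model trusts.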
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.