CoDo: Contrastive Learning with Downstream Background Invariance for
Detection
- URL: http://arxiv.org/abs/2205.04617v1
- Date: Tue, 10 May 2022 01:26:15 GMT
- Title: CoDo: Contrastive Learning with Downstream Background Invariance for
Detection
- Authors: Bing Zhao, Jun Li and Hong Zhu
- Abstract summary: We propose a novel object-level self-supervised learning method, called Contrastive learning with Downstream background invariance (CoDo).
The pretext task is converted to focus on instance location modeling across various backgrounds, especially those of downstream datasets.
Experiments on MSCOCO demonstrate that CoDo with a common backbone, ResNet50-FPN, yields strong transfer learning results for object detection.
- Score: 10.608660802917214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prior self-supervised learning research has mainly selected
image-level instance discrimination as the pretext task. This achieves
classification performance comparable to supervised learning methods, but
transfer performance degrades on downstream tasks such as object detection.
To bridge this performance gap, we propose a novel object-level
self-supervised learning method, called Contrastive learning with Downstream
background invariance (CoDo). The pretext task is converted to focus on
instance location modeling across various backgrounds, especially those of
downstream datasets, since the ability to remain invariant to backgrounds is
considered vital for object detection. Firstly, a data augmentation strategy
is proposed that pastes instances onto background images and then jitters the
bounding boxes to involve background information. Secondly, we align the
architecture of our pretraining network with mainstream detection pipelines.
Thirdly, hierarchical, multi-view contrastive learning is designed to improve
the quality of the learned visual representations. Experiments on MSCOCO
demonstrate that the proposed CoDo with a common backbone, ResNet50-FPN,
yields strong transfer learning results for object detection.
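To make the augmentation step concrete, here is a minimal sketch of pasting an instance crop onto a new background and jittering its bounding box so that the labeled region absorbs background context. It assumes PIL; the function name, jitter scheme, and default ratio are illustrative assumptions, not the authors' released code.

```python
import random
from PIL import Image

def paste_and_jitter(instance, background, jitter_ratio=0.1):
    """Paste an instance crop onto a background image, then jitter the
    bounding box so the labeled region includes background context.

    Illustrative sketch of a CoDo-style augmentation; the jitter scheme
    and ratio are assumptions, not the authors' exact recipe.
    """
    bg = background.copy()
    iw, ih = instance.size
    bw, bh = bg.size
    # Random paste location keeping the instance fully inside the background.
    x0 = random.randint(0, max(bw - iw, 0))
    y0 = random.randint(0, max(bh - ih, 0))
    bg.paste(instance, (x0, y0))
    # Jitter each box edge by up to jitter_ratio of the instance size so the
    # box now covers both the pasted object and surrounding background.
    dx, dy = jitter_ratio * iw, jitter_ratio * ih
    box = (max(0.0, x0 + random.uniform(-dx, dx)),
           max(0.0, y0 + random.uniform(-dy, dy)),
           min(float(bw), x0 + iw + random.uniform(-dx, dx)),
           min(float(bh), y0 + ih + random.uniform(-dy, dy)))
    return bg, box
```

Two such views of the same instance, pasted onto different backgrounds with different jitters, would then form a positive pair for the contrastive objective.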
Related papers
- Proposal-Contrastive Pretraining for Object Detection from Fewer Data [11.416621957617334] (arXiv, 2023-10-25)
We present Proposal Selection Contrast (ProSeCo), a novel unsupervised overall pretraining approach.
ProSeCo uses the large number of object proposals generated by the detector for contrastive learning.
We show that our method outperforms the state of the art in unsupervised pretraining for object detection on standard and novel benchmarks.
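As a rough illustration of contrastive learning over object proposals, the sketch below applies an InfoNCE-style loss to matched proposal embeddings from two augmented views; the matching strategy and loss details are assumptions, not ProSeCo's exact formulation.

```python
import torch
import torch.nn.functional as F

def proposal_info_nce(z1, z2, temperature=0.1):
    """InfoNCE over proposal embeddings from two views of the same image.

    z1, z2: (N, D) tensors where row i of each holds the embedding of the
    same (matched) proposal under a different augmentation. Illustrative
    sketch only; ProSeCo's actual proposal selection and matching differ.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    # Diagonal entries are positives; all other proposals act as negatives.
    return F.cross_entropy(logits, targets)
```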
- Weakly-supervised Contrastive Learning for Unsupervised Object Discovery [52.696041556640516] (arXiv, 2023-07-07)
Unsupervised object discovery is promising due to its ability to discover objects in a generic manner.
We design a semantic-guided self-supervised learning model to extract high-level semantic features from images.
We introduce Principal Component Analysis (PCA) to localize object regions.
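One common way to use PCA for this kind of localization is to project convolutional features onto their first principal component and read the result as a foreground heatmap. The sketch below illustrates that generic idea; it is not the paper's exact procedure.

```python
import torch

def pca_saliency(features):
    """Project a (C, H, W) feature map onto its first principal component
    to obtain a coarse (H, W) foreground heatmap.

    Generic PCA-localization sketch; the sign heuristic (objects usually
    occupy less area than background) is an assumption.
    """
    c, h, w = features.shape
    x = features.reshape(c, -1).t()          # (H*W, C) per-location features
    x = x - x.mean(dim=0, keepdim=True)      # center before PCA
    # Rows of vh are the principal directions of the centered features.
    _, _, vh = torch.linalg.svd(x, full_matrices=False)
    heat = (x @ vh[0]).reshape(h, w)
    # Flip the sign so the positive region is the smaller (object-like) one.
    if (heat > 0).float().mean() > 0.5:
        heat = -heat
    return heat
```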
- Experience feedback using Representation Learning for Few-Shot Object Detection on Aerial Images [2.8560476609689185] (arXiv, 2021-09-27)
The performance of our method is assessed on DOTA, a large-scale remote sensing image dataset.
The evaluation highlights, in particular, some intrinsic weaknesses of the few-shot object detection task.
- Object-aware Contrastive Learning for Debiased Scene Representation [74.30741492814327] (arXiv, 2021-07-30)
We develop a novel object-aware contrastive learning framework that localizes objects in a self-supervised manner.
We also introduce two data augmentations based on ContraCAM, object-aware random crop and background mixup, which reduce contextual and background biases during contrastive self-supervised learning.
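A minimal sketch of the background-mixup idea: blend an object image with an object-free background image so that background cues stop correlating with the object. The blending range and helper name are assumptions, not the paper's exact recipe.

```python
import random
from PIL import Image

def background_mixup(image, background, alpha_range=(0.1, 0.3)):
    """Mix an object-free background into an image to weaken background bias.

    Illustrative sketch; the paper's mixing coefficients may differ.
    """
    alpha = random.uniform(*alpha_range)
    background = background.convert(image.mode).resize(image.size)
    # Pixel-wise blend: (1 - alpha) * image + alpha * background.
    return Image.blend(image, background, alpha)
```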
- Rectifying the Shortcut Learning of Background: Shared Object Concentration for Few-Shot Image Recognition [101.59989523028264] (arXiv, 2021-07-16)
Few-shot image classification aims to utilize pretrained knowledge learned from a large-scale dataset to tackle a series of downstream classification tasks.
We propose COSOC, a novel few-shot learning framework that automatically figures out foreground objects at both the pretraining and evaluation stages.
- Aligning Pretraining for Detection via Object-Level Contrastive Learning [57.845286545603415] (arXiv, 2021-06-04)
Image-level contrastive representation learning has proven to be highly effective as a generic model for transfer learning.
We argue that this could be sub-optimal and thus advocate a design principle which encourages alignment between the self-supervised pretext task and the downstream task.
Our method, called Selective Object COntrastive learning (SoCo), achieves state-of-the-art results for transfer performance on COCO detection.
- Instance Localization for Self-supervised Detection Pretraining [68.24102560821623] (arXiv, 2021-02-16)
We propose a new self-supervised pretext task, called instance localization.
We show that integration of bounding boxes into pretraining promotes better task alignment and architecture alignment for transfer learning.
Experimental results demonstrate that our approach yields state-of-the-art transfer learning results for object detection.
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282] (arXiv, 2020-04-14)
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
This list is automatically generated from the titles and abstracts of the papers on this site.