Learning to Detect Every Thing in an Open World
- URL: http://arxiv.org/abs/2112.01698v1
- Date: Fri, 3 Dec 2021 03:56:06 GMT
- Title: Learning to Detect Every Thing in an Open World
- Authors: Kuniaki Saito, Ping Hu, Trevor Darrell, Kate Saenko
- Abstract summary: We propose a simple yet surprisingly powerful data augmentation and training scheme we call Learning to Detect Every Thing (LDET)
To avoid suppressing hidden objects (background objects that are visible but unlabeled), we paste annotated objects onto a background image sampled from a small region of the original image.
LDET leads to significant improvements on many datasets in the open world instance segmentation task.
- Score: 139.78830329914135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many open-world applications require the detection of novel objects, yet
state-of-the-art object detection and instance segmentation networks do not
excel at this task. The key issue lies in their assumption that regions without
any annotations should be suppressed as negatives, which teaches the model to
treat the unannotated objects as background. To address this issue, we propose
a simple yet surprisingly powerful data augmentation and training scheme we
call Learning to Detect Every Thing (LDET). To avoid suppressing hidden
objects (background objects that are visible but unlabeled), we paste annotated
objects onto a background image sampled from a small region of the original
image. Since training solely on such synthetically augmented images suffers
from domain shift, we decouple the training into two parts: 1) training the
region classification and regression head on augmented images, and 2) training
the mask heads on original images. In this way, a model does not learn to
classify hidden objects as background while generalizing well to real images.
LDET leads to significant improvements on many datasets in the open world
instance segmentation task, outperforming baselines on cross-category
generalization on COCO, as well as cross-dataset evaluation on UVO and
Cityscapes.
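The pasting scheme described in the abstract can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: it assumes HWC images with per-instance boolean masks, the `patch_size` and nearest-neighbor resize are arbitrary choices, and the decoupled two-part head training is not shown.

```python
import numpy as np

def ldet_background_erasing(image, masks, patch_size=16, rng=None):
    """Sketch of LDET-style background synthesis.

    Build a synthetic background by upsampling a small patch of the
    original image (so unlabeled "hidden" objects are erased), then
    paste the annotated instances back on top using their masks.
    """
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    # Sample a small region; a tiny patch rarely contains a whole hidden object.
    y = rng.integers(0, h - patch_size + 1)
    x = rng.integers(0, w - patch_size + 1)
    patch = image[y:y + patch_size, x:x + patch_size]
    # Nearest-neighbor upsample the patch to the full image size.
    ys = (np.arange(h) * patch_size // h).clip(0, patch_size - 1)
    xs = (np.arange(w) * patch_size // w).clip(0, patch_size - 1)
    background = patch[ys[:, None], xs[None, :]]
    # Paste every annotated instance onto the synthetic background.
    out = background.copy()
    for m in masks:  # each mask: (h, w) boolean array
        out[m] = image[m]
    return out
```

Training the region classification/regression heads on such images means no visible-but-unlabeled pixel is ever presented as a negative; the mask heads are still trained on the original images to avoid the domain shift the abstract mentions.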
Related papers
- PEEKABOO: Hiding parts of an image for unsupervised object localization [7.161489957025654]
Localizing objects in an unsupervised manner poses significant challenges due to the absence of key visual information.
We propose a single-stage learning framework, dubbed PEEKABOO, for unsupervised object localization.
The key idea is to selectively hide parts of an image and leverage the remaining image information to infer the location of objects without explicit supervision.
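The hiding idea can be sketched as follows; a minimal numpy illustration in which the grid size and hide probability are assumed values, not the paper's settings.

```python
import numpy as np

def hide_patches(image, grid=4, hide_prob=0.5, rng=None):
    """Sketch of PEEKABOO-style hiding: zero out random grid cells so a
    model must infer object location from the remaining context."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    out = image.copy()
    ch, cw = h // grid, w // grid
    for i in range(grid):
        for j in range(grid):
            if rng.random() < hide_prob:
                out[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw] = 0
    return out
```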
arXiv Detail & Related papers (2024-07-24T20:35:20Z)
- DiffUHaul: A Training-Free Method for Object Dragging in Images [78.93531472479202]
We propose a training-free method, dubbed DiffUHaul, for the object dragging task.
We first apply attention masking in each denoising step to make the generation more disentangled across different objects.
In the early denoising steps, we interpolate the attention features between source and target images to smoothly fuse new layouts with the original appearance.
arXiv Detail & Related papers (2024-06-03T17:59:53Z)
- Unsupervised Object Localization: Observing the Background to Discover Objects [4.870509580034194]
In this work, we take a different approach and propose to look for the background instead.
This way, the salient objects emerge as a by-product without any strong assumption on what an object should be.
We propose FOUND, a simple model made of a single $1\times 1$ convolution initialized with coarse background masks extracted from self-supervised patch-based representations.
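A single $1\times 1$ convolution over a grid of patch features is just a per-patch linear map, so the head can be sketched as below; the shapes and the sigmoid readout are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def found_head(patch_features, weight, bias):
    """Sketch of a FOUND-style head: a 1x1 conv over an (H, W, D) grid of
    self-supervised patch features, read out as a per-patch foreground
    probability (foreground = 1 - background)."""
    logits = patch_features @ weight + bias      # (H, W) background logits
    bg_prob = 1.0 / (1.0 + np.exp(-logits))     # sigmoid
    return 1.0 - bg_prob
```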
arXiv Detail & Related papers (2022-12-15T13:43:11Z)
- Object-Aware Cropping for Self-Supervised Learning [21.79324121283122]
We show that self-supervised learning based on the usual random cropping performs poorly on such datasets.
We propose replacing one or both of the random crops with crops obtained from an object proposal algorithm.
Using this approach, which we call object-aware cropping, results in significant improvements over scene cropping on classification and object detection benchmarks.
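The crop replacement can be sketched as follows; a minimal numpy illustration where the proposal box is taken as given and the `scale` context padding is a hypothetical choice.

```python
import numpy as np

def object_aware_crop(image, box, scale=1.5):
    """Sketch of object-aware cropping: instead of a uniform random crop,
    crop around a box produced by an object proposal algorithm, padded by
    `scale` to keep some surrounding context."""
    h, w = image.shape[:2]
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    bw, bh = (x1 - x0) * scale, (y1 - y0) * scale
    nx0 = int(max(0, cx - bw / 2))
    ny0 = int(max(0, cy - bh / 2))
    nx1 = int(min(w, cx + bw / 2))
    ny1 = int(min(h, cy + bh / 2))
    return image[ny0:ny1, nx0:nx1]
```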
arXiv Detail & Related papers (2021-12-01T07:23:37Z)
- Rectifying the Shortcut Learning of Background: Shared Object Concentration for Few-Shot Image Recognition [101.59989523028264]
Few-Shot image classification aims to utilize pretrained knowledge learned from a large-scale dataset to tackle a series of downstream classification tasks.
We propose COSOC, a novel Few-Shot Learning framework, to automatically figure out foreground objects at both pretraining and evaluation stage.
arXiv Detail & Related papers (2021-07-16T07:46:41Z)
- Data Augmentation for Object Detection via Differentiable Neural Rendering [71.00447761415388]
It is challenging to train a robust object detector when annotated data is scarce.
Existing approaches to this problem include semi-supervised learning, which interpolates labeled data from unlabeled data.
We introduce an offline data augmentation method for object detection, which semantically interpolates the training data with novel views.
arXiv Detail & Related papers (2021-03-04T06:31:06Z)
- A Simple and Effective Use of Object-Centric Images for Long-Tailed Object Detection [56.82077636126353]
We take advantage of object-centric images to improve object detection in scene-centric images.
We present a simple yet surprisingly effective framework to do so.
Our approach improves the object detection (and instance segmentation) accuracy of rare objects by 50% (and 33%) in relative terms.
arXiv Detail & Related papers (2021-02-17T17:27:21Z)
- Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation [128.03739769844736]
Two neural co-attentions are incorporated into the classifier to capture cross-image semantic similarities and differences.
In addition to boosting object pattern learning, the co-attention can leverage context from other related images to improve localization map inference.
Our algorithm sets new state-of-the-arts on all these settings, demonstrating well its efficacy and generalizability.
arXiv Detail & Related papers (2020-07-03T21:53:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences.