Learning from Pseudo-labeled Segmentation for Multi-Class Object
Counting
- URL: http://arxiv.org/abs/2307.07677v1
- Date: Sat, 15 Jul 2023 01:33:19 GMT
- Title: Learning from Pseudo-labeled Segmentation for Multi-Class Object
Counting
- Authors: Jingyi Xu and Hieu Le and Dimitris Samaras
- Abstract summary: Class-agnostic counting (CAC) has numerous potential applications across various domains.
The goal is to count objects of an arbitrary category during testing, based on only a few annotated exemplars.
We show that the segmentation model trained on these pseudo-labeled masks can effectively localize objects of interest for an arbitrary multi-class image.
- Score: 35.652092907690694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Class-agnostic counting (CAC) has numerous potential applications across
various domains. The goal is to count objects of an arbitrary category during
testing, based on only a few annotated exemplars. In this paper, we point out
that the task of counting objects of interest when there are multiple object
classes in the image (namely, multi-class object counting) is particularly
challenging for current object counting models. They often greedily count every
object regardless of the exemplars. To address this issue, we propose
localizing the area containing the objects of interest via an exemplar-based
segmentation model before counting them. The key challenge here is the lack of
segmentation supervision to train this model. To this end, we propose a method
to obtain pseudo segmentation masks using only box exemplars and dot
annotations. We show that the segmentation model trained on these
pseudo-labeled masks can effectively localize objects of interest for an
arbitrary multi-class image based on the exemplars. To evaluate the performance
of different methods on multi-class counting, we introduce two new benchmarks,
a synthetic multi-class dataset and a new test set of real images in which
objects from multiple classes are present. Our proposed method shows a
significant advantage over the previous CAC methods on these two benchmarks.
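The core idea described above, localizing the region of interest with an exemplar-conditioned segmentation model and counting only inside it, can be illustrated with a minimal sketch. This is not the authors' implementation (they train a segmentation network on pseudo-labeled masks); the helper names and the box-pasting heuristic for building a rough pseudo mask from box exemplars and dot annotations are illustrative assumptions only.

```python
import numpy as np

def naive_pseudo_mask(dots, exemplar_box_hw, image_hw):
    """Crude pseudo segmentation mask: paste an exemplar-sized box at each dot.

    dots            -- iterable of (row, col) dot annotations for the target class
    exemplar_box_hw -- (h, w) of one annotated exemplar box
    image_hw        -- (H, W) of the image
    """
    H, W = image_hw
    h, w = exemplar_box_hw
    mask = np.zeros((H, W), dtype=np.uint8)
    for r, c in dots:
        r0, r1 = max(0, r - h // 2), min(H, r + h // 2 + 1)
        c0, c1 = max(0, c - w // 2), min(W, c + w // 2 + 1)
        mask[r0:r1, c0:c1] = 1
    return mask

def masked_count(density_map, seg_mask):
    """Count only inside the region of interest: zero density outside the mask, then sum."""
    return float((density_map * (seg_mask > 0.5)).sum())

# Toy usage: two dots of the target class in a 10x10 image.
density = np.full((10, 10), 0.02)   # stand-in for a counting model's density map
mask = naive_pseudo_mask([(2, 2), (7, 7)], exemplar_box_hw=(3, 3), image_hw=(10, 10))
print(masked_count(density, mask))  # counts only where the mask is on
```

The gating step is the shared final operation: whatever produces the mask (a heuristic as above, or a learned exemplar-based segmenter as in the paper), suppressing density outside it is what prevents greedily counting every object in a multi-class image.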
Related papers
- Learning from Exemplars for Interactive Image Segmentation [15.37506525730218]
We introduce novel interactive segmentation frameworks for both a single object and multiple objects in the same category.
Our model reduces users' labor by around 15%, requiring two fewer clicks to reach target IoUs of 85% and 90%.
arXiv Detail & Related papers (2024-06-17T12:38:01Z)
- Zero-Shot Object Counting with Language-Vision Models [50.1159882903028]
Class-agnostic object counting aims to count object instances of an arbitrary class at test time.
Current methods require human-annotated exemplars as inputs which are often unavailable for novel categories.
We propose zero-shot object counting (ZSC), a new setting where only the class name is available during test time.
arXiv Detail & Related papers (2023-09-22T14:48:42Z)
- Universal Instance Perception as Object Discovery and Retrieval [90.96031157557806]
UNI reformulates diverse instance perception tasks into a unified object discovery and retrieval paradigm.
It can flexibly perceive different types of objects by simply changing the input prompts.
UNI shows superior performance on 20 challenging benchmarks from 10 instance-level tasks.
arXiv Detail & Related papers (2023-03-12T14:28:24Z)
- Iterative Learning for Instance Segmentation [0.0]
State-of-the-art deep neural network models require large amounts of labeled data in order to perform well on this task.
We propose, for the first time, an iterative learning and annotation method that detects, segments, and annotates instances in datasets composed of multiple similar objects.
Experiments on two different datasets show the validity of the approach in different applications related to visual inspection.
arXiv Detail & Related papers (2022-02-18T10:25:02Z)
- Dilated-Scale-Aware Attention ConvNet For Multi-Class Object Counting [18.733301622920102]
Multi-class object counting expands the scope of application of the object counting task.
The multi-target detection task can achieve multi-class object counting in some scenarios.
We propose a simple yet efficient counting network based on point-level annotations.
arXiv Detail & Related papers (2020-12-15T08:38:28Z)
- Part-aware Prototype Network for Few-shot Semantic Segmentation [50.581647306020095]
We propose a novel few-shot semantic segmentation framework based on the prototype representation.
Our key idea is to decompose the holistic class representation into a set of part-aware prototypes.
We develop a novel graph neural network model to generate and enhance the proposed part-aware prototypes.
arXiv Detail & Related papers (2020-07-13T11:03:09Z)
- A Few-Shot Sequential Approach for Object Counting [63.82757025821265]
We introduce a class attention mechanism that sequentially attends to objects in the image and extracts their relevant features.
The proposed technique is trained on point-level annotations and uses a novel loss function that disentangles class-dependent and class-agnostic aspects of the model.
We present our results on a variety of object-counting/detection datasets, including FSOD and MS COCO.
arXiv Detail & Related papers (2020-07-03T18:23:39Z)
- Selecting Relevant Features from a Multi-domain Representation for Few-shot Classification [91.67977602992657]
We propose a new strategy based on feature selection, which is both simpler and more effective than previous feature adaptation approaches.
We show that a simple non-parametric classifier built on top of such features produces high accuracy and generalizes to domains never seen during training.
arXiv Detail & Related papers (2020-03-20T15:44:17Z)
- Rethinking Object Detection in Retail Stores [55.359582952686175]
We propose a new task, simultaneous object localization and counting, abbreviated as Locount.
Locount requires algorithms to localize groups of objects of interest along with the number of instances in each group.
We collect a large-scale object localization and counting dataset with rich annotations in retail stores.
arXiv Detail & Related papers (2020-03-18T14:01:54Z)