Zero-shot Object Counting
- URL: http://arxiv.org/abs/2303.02001v2
- Date: Mon, 24 Apr 2023 15:51:01 GMT
- Title: Zero-shot Object Counting
- Authors: Jingyi Xu, Hieu Le, Vu Nguyen, Viresh Ranjan, and Dimitris Samaras
- Abstract summary: Class-agnostic object counting aims to count object instances of an arbitrary class at test time.
Current methods require human-annotated exemplars as inputs which are often unavailable for novel categories.
We propose zero-shot object counting (ZSC), a new setting where only the class name is available during test time.
- Score: 31.192588671258775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Class-agnostic object counting aims to count object instances of an arbitrary
class at test time. It is challenging but also enables many potential
applications. Current methods require human-annotated exemplars as inputs, which
are often unavailable for novel categories, especially for autonomous systems.
Thus, we propose zero-shot object counting (ZSC), a new setting where only the
class name is available during test time. Such a counting system does not
require human annotators in the loop and can operate automatically. Starting
from a class name, we propose a method that can accurately identify the optimal
patches which can then be used as counting exemplars. Specifically, we first
construct a class prototype to select the patches that are likely to contain
the objects of interest, namely class-relevant patches. Furthermore, we
introduce a model that can quantitatively measure how suitable an arbitrary
patch is as a counting exemplar. By applying this model to all the candidate
patches, we can select the most suitable patches as exemplars for counting.
Experimental results on a recent class-agnostic counting dataset, FSC-147,
validate the effectiveness of our method. Code is available at
https://github.com/cvlab-stonybrook/zero-shot-counting
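As a rough illustration of the exemplar-selection pipeline described in the abstract, the sketch below first filters candidate patches by similarity to a class prototype (the "class-relevant" patches) and then ranks the survivors with an error predictor. It is a minimal sketch of the selection logic only, not the authors' implementation: `embed_patch`, `prototype`, and `predict_count_error` are hypothetical stand-ins, whereas in the paper the prototype is built from the class name and the error predictor is a learned model.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def select_exemplars(patches, embed_patch, prototype, predict_count_error,
                     num_relevant=10, num_exemplars=3):
    """Two-stage exemplar selection (sketch).

    Stage 1: keep the patches whose embeddings are most similar to the class
    prototype, i.e. the class-relevant patches.
    Stage 2: rank those candidates with an error predictor and return the
    patches judged most suitable as counting exemplars.
    """
    by_relevance = sorted(patches,
                          key=lambda p: cosine(embed_patch(p), prototype),
                          reverse=True)
    relevant = by_relevance[:num_relevant]
    # Lower predicted counting error means a more suitable exemplar.
    by_suitability = sorted(relevant, key=predict_count_error)
    return by_suitability[:num_exemplars]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-ins: 50 candidate "patches" represented directly as 64-d feature vectors.
    patches = [rng.normal(size=64) for _ in range(50)]
    prototype = rng.normal(size=64)        # in ZSC this would be derived from the class name
    embed_patch = lambda p: p              # identity embedding, just for the toy example
    predict_count_error = lambda p: -cosine(p, prototype)  # placeholder error predictor
    exemplars = select_exemplars(patches, embed_patch, prototype, predict_count_error)
    print(f"selected {len(exemplars)} exemplar patches")
```

In this toy version both stages reduce to cosine similarity purely so the example is runnable end to end; the point is the two-step structure of filtering for class relevance and then scoring exemplar suitability.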
Related papers
- Zero-Shot Object Counting with Language-Vision Models [50.1159882903028]
Class-agnostic object counting aims to count object instances of an arbitrary class at test time.
Current methods require human-annotated exemplars as inputs which are often unavailable for novel categories.
We propose zero-shot object counting (ZSC), a new setting where only the class name is available during test time.
arXiv Detail & Related papers (2023-09-22T14:48:42Z)
- Mitigating Word Bias in Zero-shot Prompt-based Classifiers [55.60306377044225]
We show that matching class priors correlates strongly with the oracle upper bound performance.
We also demonstrate large consistent performance gains for prompt settings over a range of NLP tasks.
arXiv Detail & Related papers (2023-09-10T10:57:41Z)
- Learning from Pseudo-labeled Segmentation for Multi-Class Object Counting [35.652092907690694]
Class-agnostic counting (CAC) has numerous potential applications across various domains.
The goal is to count objects of an arbitrary category during testing, based on only a few annotated exemplars.
We show that the segmentation model trained on these pseudo-labeled masks can effectively localize objects of interest for an arbitrary multi-class image.
arXiv Detail & Related papers (2023-07-15T01:33:19Z)
- Exemplar Free Class Agnostic Counting [28.41525571128706]
Class agnostic counting aims to count objects in a novel object category at test time without access to labeled training data for that category.
Our proposed approach first identifies exemplars from repeating objects in an image, and then counts the repeating objects.
We evaluate our proposed approach on the FSC-147 dataset and show that it achieves superior performance compared to existing approaches.
arXiv Detail & Related papers (2022-05-27T19:44:39Z)
- UnseenNet: Fast Training Detector for Any Unseen Concept [6.802401545890963]
"Unseen Class Detector" can be trained within a very short time for any possible unseen class without bounding boxes with competitive accuracy.
Our model (UnseenNet) is trained on the ImageNet classification dataset for unseen classes and tested on an object detection dataset (OpenImages)
arXiv Detail & Related papers (2022-03-16T17:17:10Z)
- Few-shot Learning for Unsupervised Feature Selection [59.75321498170363]
We propose a few-shot learning method for unsupervised feature selection.
The proposed method can select a subset of relevant features in a target task given a few unlabeled target instances.
We experimentally demonstrate that the proposed method outperforms existing feature selection methods.
arXiv Detail & Related papers (2021-07-02T03:52:51Z)
- A Few-Shot Sequential Approach for Object Counting [63.82757025821265]
We introduce a class attention mechanism that sequentially attends to objects in the image and extracts their relevant features.
The proposed technique is trained on point-level annotations and uses a novel loss function that disentangles class-dependent and class-agnostic aspects of the model.
We present our results on a variety of object-counting/detection datasets, including FSOD and MS COCO.
arXiv Detail & Related papers (2020-07-03T18:23:39Z)
- Overcoming Statistical Shortcuts for Open-ended Visual Counting [54.858754825838865]
We aim to develop models that learn a proper mechanism of counting regardless of the output label.
First, we propose the Modifying Count Distribution protocol, which penalizes models that over-rely on statistical shortcuts.
Secondly, we introduce the Spatial Counting Network (SCN), which is dedicated to visual analysis and counting based on natural language questions.
arXiv Detail & Related papers (2020-06-17T18:02:01Z)
- Any-Shot Object Detection [81.88153407655334]
'Any-shot detection' is the setting in which totally unseen and few-shot categories can co-occur during inference.
We propose a unified any-shot detection model, that can concurrently learn to detect both zero-shot and few-shot object classes.
Our framework can also be used solely for Zero-shot detection and Few-shot detection tasks.
arXiv Detail & Related papers (2020-03-16T03:43:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.