Training-free Object Counting with Prompts
- URL: http://arxiv.org/abs/2307.00038v2
- Date: Wed, 30 Aug 2023 03:04:40 GMT
- Title: Training-free Object Counting with Prompts
- Authors: Zenglin Shi, Ying Sun, Mengmi Zhang
- Abstract summary: Existing approaches rely on extensive training data with point annotations for each object.
We propose a training-free object counter that treats the counting task as a segmentation problem.
- Score: 12.358565655046977
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper tackles the problem of object counting in images. Existing
approaches rely on extensive training data with point annotations for each
object, making data collection labor-intensive and time-consuming. To overcome
this, we propose a training-free object counter that treats the counting task
as a segmentation problem. Our approach leverages the Segment Anything Model
(SAM), known for its high-quality masks and zero-shot segmentation capability.
However, the vanilla mask generation method of SAM lacks class-specific
information in the masks, resulting in inferior counting accuracy. To overcome
this limitation, we introduce a prior-guided mask generation method that
incorporates three types of priors into the segmentation process, enhancing
efficiency and accuracy. Additionally, we tackle the issue of counting objects
specified through text by proposing a two-stage approach that combines
reference object selection and prior-guided mask generation. Extensive
experiments on standard datasets demonstrate the competitive performance of our
training-free counter compared to learning-based approaches. This paper
presents a promising solution for counting objects in various scenarios without
the need for extensive data collection and counting-specific training. Code is
available at https://github.com/shizenglin/training-free-object-counter
Related papers
- A Fixed-Point Approach to Unified Prompt-Based Counting [51.20608895374113]
This paper aims to establish a comprehensive prompt-based counting framework capable of generating density maps for objects indicated by various prompt types, such as box, point, and text.
Our model excels in prominent class-agnostic datasets and exhibits superior performance in cross-dataset adaptation tasks.
arXiv Detail & Related papers (2024-03-15T12:05:44Z)
- Robust Unsupervised Crowd Counting and Localization with Adaptive Resolution SAM [61.10712338956455]
We propose a simple yet effective crowd counting method by utilizing the Segment-Everything-Everywhere Model (SEEM).
We show that SEEM's performance in dense crowd scenes is limited, primarily due to the omission of many persons in high-density areas.
Our proposed method achieves the best unsupervised performance in crowd counting, while also being comparable to some supervised methods.
arXiv Detail & Related papers (2024-02-27T13:55:17Z)
- Learning from Pseudo-labeled Segmentation for Multi-Class Object Counting [35.652092907690694]
Class-agnostic counting (CAC) has numerous potential applications across various domains.
The goal is to count objects of an arbitrary category during testing, based on only a few annotated exemplars.
We show that the segmentation model trained on these pseudo-labeled masks can effectively localize objects of interest for an arbitrary multi-class image.
arXiv Detail & Related papers (2023-07-15T01:33:19Z)
- Self-Supervised Interactive Object Segmentation Through a Singulation-and-Grasping Approach [9.029861710944704]
We propose a robot learning approach to interact with novel objects and collect each object's training label.
The Singulation-and-Grasping (SaG) policy is trained through end-to-end reinforcement learning.
Our system achieves a 70% singulation success rate in simulated cluttered scenes.
arXiv Detail & Related papers (2022-07-19T15:01:36Z)
- Discovering Object Masks with Transformers for Unsupervised Semantic Segmentation [75.00151934315967]
MaskDistill is a novel framework for unsupervised semantic segmentation.
Our framework does not latch onto low-level image cues and is not limited to object-centric datasets.
arXiv Detail & Related papers (2022-06-13T17:59:43Z)
- Scaling up instance annotation via label propagation [69.8001043244044]
We propose a highly efficient annotation scheme for building large datasets with object segmentation masks.
We exploit appearance similarities between objects by using hierarchical clustering on mask predictions made by a segmentation model.
We show that we obtain 1M object segmentation masks with a total annotation time of only 290 hours.
arXiv Detail & Related papers (2021-10-05T18:29:34Z)
- Generating Masks from Boxes by Mining Spatio-Temporal Consistencies in Videos [159.02703673838639]
We introduce a method for generating segmentation masks from per-frame bounding box annotations in videos.
We use our resulting accurate masks for weakly supervised training of video object segmentation (VOS) networks.
The additional data provides substantially better generalization performance, leading to state-of-the-art results in both the VOS and the more challenging tracking domains.
arXiv Detail & Related papers (2021-01-06T18:56:24Z)
- A Few-Shot Sequential Approach for Object Counting [63.82757025821265]
We introduce a class attention mechanism that sequentially attends to objects in the image and extracts their relevant features.
The proposed technique is trained on point-level annotations and uses a novel loss function that disentangles class-dependent and class-agnostic aspects of the model.
We present our results on a variety of object-counting/detection datasets, including FSOD and MS COCO.
arXiv Detail & Related papers (2020-07-03T18:23:39Z)
- Multi-task deep learning for image segmentation using recursive approximation tasks [5.735162284272276]
Deep neural networks for segmentation usually require a massive amount of pixel-level labels, which are expensive to create manually.
In this work, we develop a multi-task learning method to relax this constraint.
The network is trained on an extremely small amount of precisely segmented images and a large set of coarse labels.
arXiv Detail & Related papers (2020-05-26T21:35:26Z)
- Revisiting Sequence-to-Sequence Video Object Segmentation with Multi-Task Loss and Skip-Memory [4.343892430915579]
Video Object Segmentation (VOS) is an active research area in the visual domain.
Current approaches lose objects in longer sequences, especially when the object is small or briefly occluded.
We build upon a sequence-to-sequence approach that employs an encoder-decoder architecture together with a memory module for exploiting the sequential data.
arXiv Detail & Related papers (2020-04-25T15:38:09Z)