CountingDINO: A Training-free Pipeline for Class-Agnostic Counting using Unsupervised Backbones
- URL: http://arxiv.org/abs/2504.16570v2
- Date: Wed, 30 Apr 2025 15:44:22 GMT
- Title: CountingDINO: A Training-free Pipeline for Class-Agnostic Counting using Unsupervised Backbones
- Authors: Giacomo Pacini, Lorenzo Bianchi, Luca Ciampi, Nicola Messina, Giuseppe Amato, Fabrizio Falchi
- Abstract summary: Class-agnostic counting (CAC) aims to estimate the number of objects in images without being restricted to predefined categories. Current exemplar-based CAC methods rely heavily on labeled data for training. We introduce CountingDINO, the first training-free exemplar-based CAC framework.
- Score: 7.717986156838291
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Class-agnostic counting (CAC) aims to estimate the number of objects in images without being restricted to predefined categories. However, while current exemplar-based CAC methods offer flexibility at inference time, they still rely heavily on labeled data for training, which limits scalability and generalization to many downstream use cases. In this paper, we introduce CountingDINO, the first training-free exemplar-based CAC framework that exploits a fully unsupervised feature extractor. Specifically, our approach employs self-supervised vision-only backbones to extract object-aware features, and it eliminates the need for annotated data throughout the entire proposed pipeline. At inference time, we extract latent object prototypes via ROI-Align from DINO features and use them as convolutional kernels to generate similarity maps. These are then transformed into density maps through a simple yet effective normalization scheme. We evaluate our approach on the FSC-147 benchmark, where we consistently outperform a baseline based on an SOTA unsupervised object detector under the same label- and training-free setting. Additionally, we achieve competitive results -- and in some cases surpass -- training-free methods that rely on supervised backbones, non-training-free unsupervised methods, as well as several fully supervised SOTA approaches. This demonstrates that label- and training-free CAC can be both scalable and effective. Code: https://lorebianchi98.github.io/CountingDINO/.
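For intuition, here is a minimal PyTorch sketch of the pipeline the abstract describes (ROI-Align prototype extraction, convolutional similarity map, normalization to a density map). The backbone choice, feature shapes, and the exemplar-mass normalization below are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the described pipeline: DINO patch features -> ROI-Align
# prototype -> similarity map via convolution -> normalized density map.
# The backbone, feature resolution, and the exemplar-mass normalization are
# illustrative assumptions, not the authors' released code.
import torch
import torch.nn.functional as F
from torchvision.ops import roi_align

def count_with_exemplar(features, exemplar_box, proto_size=3):
    """features: (1, C, H, W) patch features from a self-supervised ViT.
    exemplar_box: (x1, y1, x2, y2) in feature-map coordinates."""
    # 1. Extract a latent object prototype with ROI-Align.
    boxes = torch.tensor([[0.0, *exemplar_box]])  # (batch_index, x1, y1, x2, y2)
    proto = roi_align(features, boxes, output_size=proto_size)  # (1, C, k, k)

    # 2. Use the prototype as a convolutional kernel: each output value is a
    #    cosine-like similarity between the prototype and a local window.
    sim = F.conv2d(F.normalize(features, dim=1),
                   F.normalize(proto, dim=1),
                   padding=proto_size // 2).clamp(min=0)  # (1, 1, H, W)

    # 3. Normalize so the exemplar's own region integrates to ~1; the density
    #    map then sums to an object count.
    x1, y1, x2, y2 = (int(v) for v in exemplar_box)
    exemplar_mass = sim[0, 0, y1:y2 + 1, x1:x2 + 1].sum().clamp(min=1e-6)
    density = sim / exemplar_mass
    return density, density.sum().item()
```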
Related papers
- Clustering is back: Reaching state-of-the-art LiDAR instance segmentation without training [69.2787246878521]
We show that competitive panoptic segmentation can be achieved using only semantic labels. Our method is fully explainable, and requires no learning or parameter tuning.
arXiv Detail & Related papers (2025-03-17T14:12:08Z) - Finding Dino: A Plug-and-Play Framework for Zero-Shot Detection of Out-of-Distribution Objects Using Prototypes [12.82756672393553]
We present a plug-and-play framework - PRototype-based OOD detection Without Labels (PROWL)
It is an inference-based method that does not require training on the domain dataset and relies on extracting relevant features from self-supervised pre-trained models.
It achieves state-of-the-art results on the RoadAnomaly and RoadObstacle datasets provided in road driving benchmarks.
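A hedged sketch of the prototype-matching idea this summary describes: features from a frozen self-supervised backbone are scored by their best cosine similarity to in-domain prototypes, and weak matches are flagged as out-of-distribution. The prototype construction and the decision threshold are assumptions, not PROWL's exact procedure.

```python
# Hedged sketch of prototype matching for label-free OOD detection. Prototype
# construction and the threshold below are assumptions.
import numpy as np

def ood_scores(feats, prototypes):
    """feats: (N, D) test features; prototypes: (K, D) in-domain prototypes."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = f @ p.T                 # cosine similarity to every prototype
    return 1.0 - sim.max(axis=1)  # high score = poorly explained = likely OOD

# Usage (threshold is illustrative): is_ood = ood_scores(feats, protos) > 0.5
```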
arXiv Detail & Related papers (2024-04-11T11:55:42Z) - Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z) - A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to downstream classification with CLIP features.
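For intuition, a minimal NumPy sketch of GDA on frozen CLIP image features as the summary describes: class-conditional Gaussians with a shared covariance reduce to a linear classifier computed from feature statistics alone, with no gradient fine-tuning. The covariance shrinkage term and the uniform class prior are assumptions.

```python
# Minimal GDA sketch on frozen image features. Shrinkage (eps) and the
# uniform class prior are assumptions.
import numpy as np

def gda_classifier(feats, labels, eps=1e-3):
    """feats: (N, D) image features; labels: (N,) ints in [0, K)."""
    K, D = int(labels.max()) + 1, feats.shape[1]
    mu = np.stack([feats[labels == k].mean(axis=0) for k in range(K)])  # (K, D)
    centered = feats - mu[labels]
    cov = centered.T @ centered / len(feats) + eps * np.eye(D)  # shared, shrunk
    P = np.linalg.inv(cov)
    W = mu @ P                               # linear weights, (K, D)
    b = -0.5 * np.einsum('kd,kd->k', W, mu)  # biases under a uniform prior
    return W, b

# Predict with argmax over W @ x + b for a test feature x.
```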
arXiv Detail & Related papers (2024-02-06T15:45:27Z) - Unsupervised Continual Anomaly Detection with Contrastively-learned Prompt [80.43623986759691]
We introduce a novel Unsupervised Continual Anomaly Detection framework called UCAD.
The framework equips unsupervised anomaly detection (UAD) with continual learning capability through contrastively-learned prompts.
We conduct comprehensive experiments and set the benchmark on unsupervised continual anomaly detection and segmentation.
arXiv Detail & Related papers (2024-01-02T03:37:11Z) - Revisiting Few-Shot Object Detection with Vision-Language Models [49.79495118650838]
We revisit the task of few-shot object detection (FSOD) in the context of recent foundational vision-language models (VLMs).
We propose Foundational FSOD, a new benchmark protocol that evaluates detectors pre-trained on any external data.
We discuss our recent CVPR 2024 Foundational FSOD competition and share insights from the community.
arXiv Detail & Related papers (2023-12-22T07:42:00Z) - AttenScribble: Attentive Similarity Learning for Scribble-Supervised Medical Image Segmentation [5.8447004333496855]
In this paper, we present a straightforward yet effective scribble-supervised learning framework.
We create a pluggable spatial self-attention module that can be attached on top of any internal feature layer of an arbitrary fully convolutional network (FCN) backbone.
This attentive similarity leads to a novel regularization loss that imposes consistency between segmentation prediction and visual affinity.
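A hedged sketch of the stated consistency idea: an affinity matrix derived from spatial self-attention should leave the flattened segmentation prediction approximately unchanged. The dot-product attention form and the MSE penalty are assumptions, not the paper's exact loss.

```python
# Hedged sketch: penalize disagreement between predictions and their
# affinity-propagated versions. Attention form and MSE are assumptions.
import torch
import torch.nn.functional as F

def affinity_consistency_loss(feats, probs):
    """feats: (B, N, D) per-pixel features; probs: (B, N, C) softmax outputs."""
    attn = torch.softmax(feats @ feats.transpose(1, 2) / feats.shape[-1] ** 0.5,
                         dim=-1)        # (B, N, N) pairwise visual affinity
    propagated = attn @ probs           # predictions mixed by affinity
    return F.mse_loss(propagated, probs.detach())
```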
arXiv Detail & Related papers (2023-12-11T18:42:18Z) - Dense Learning based Semi-Supervised Object Detection [46.885301243656045]
Semi-supervised object detection (SSOD) aims to facilitate the training and deployment of object detectors with the help of a large amount of unlabeled data.
In this paper, we propose a DenSe Learning (DSL) based anchor-free SSOD algorithm.
Experiments are conducted on MS-COCO and PASCAL-VOC, and the results show that our proposed DSL method records new state-of-the-art SSOD performance.
arXiv Detail & Related papers (2022-04-15T02:31:02Z) - You Only Need End-to-End Training for Long-Tailed Recognition [8.789819609485225]
Cross-entropy loss tends to produce highly correlated features on imbalanced data.
We propose two novel modules, Block-based Relatively Balanced Batch Sampler (B3RS) and Batch Embedded Training (BET).
Experimental results on the long-tailed classification benchmarks, CIFAR-LT and ImageNet-LT, demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2021-12-11T11:44:09Z) - Out-of-Scope Intent Detection with Self-Supervision and Discriminative Training [20.242645823965145]
Out-of-scope intent detection is of practical importance in task-oriented dialogue systems.
We propose a method to train an out-of-scope intent classifier in a fully end-to-end manner by simulating the test scenario in training.
We evaluate our method extensively on four benchmark dialogue datasets and observe significant improvements over state-of-the-art approaches.
arXiv Detail & Related papers (2021-06-16T08:17:18Z) - Pseudo-IoU: Improving Label Assignment in Anchor-Free Object Detection [60.522877583407904]
Current anchor-free object detectors are quite simple and effective yet lack accurate label assignment methods.
We present Pseudo-Intersection-over-Union (Pseudo-IoU): a simple metric that brings a more standardized and accurate assignment rule into anchor-free object detection frameworks.
Our method achieves comparable performance to other recent state-of-the-art anchor-free methods without bells and whistles.
arXiv Detail & Related papers (2021-04-29T02:48:47Z) - DAP: Detection-Aware Pre-training with Weak Supervision [37.336674323981285]
This paper presents a detection-aware pre-training (DAP) approach for object detection tasks.
We transform a classification dataset into a detection dataset through a weakly supervised object localization method based on Class Activation Maps.
We show that DAP can outperform the traditional classification pre-training in terms of both sample efficiency and convergence speed in downstream detection tasks including VOC and COCO.
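A minimal sketch of the CAM-based step this summary describes: threshold the class activation map for the image-level label and take the bounding box of the surviving region as a weak detection annotation. The threshold fraction is an assumption.

```python
# Minimal CAM-to-box sketch; the threshold fraction is an assumption.
import numpy as np

def cam_to_box(cam, thresh_frac=0.3):
    """cam: (H, W) class activation map; returns (x1, y1, x2, y2) or None."""
    ys, xs = np.nonzero(cam >= thresh_frac * cam.max())
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```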
arXiv Detail & Related papers (2021-03-30T19:48:30Z) - Sinkhorn Label Allocation: Semi-Supervised Classification via Annealed Self-Training [38.81973113564937]
Self-training is a standard approach to semi-supervised learning where the learner's own predictions on unlabeled data are used as supervision during training.
In this paper, we reinterpret this label assignment problem as an optimal transportation problem between examples and classes.
We demonstrate the effectiveness of our algorithm on the CIFAR-10, CIFAR-100, and SVHN datasets in comparison with FixMatch, a state-of-the-art self-training algorithm.
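For intuition, a hedged NumPy sketch of pseudo-label assignment cast as optimal transport via Sinkhorn iterations, as the summary describes: row and column scaling balance per-example mass against class marginals so labels do not collapse onto a few classes. The uniform class marginal, temperature, and iteration count are assumptions.

```python
# Hedged Sinkhorn sketch for soft pseudo-labels. Uniform marginals,
# temperature eps, and n_iters are assumptions.
import numpy as np

def sinkhorn_labels(logits, n_iters=50, eps=0.05):
    """logits: (N, K) scores on unlabeled data; returns (N, K) soft labels."""
    Q = np.exp((logits - logits.max()) / eps)  # shifted for numerical stability
    r = np.full(Q.shape[0], 1.0 / Q.shape[0])  # each example carries equal mass
    c = np.full(Q.shape[1], 1.0 / Q.shape[1])  # assume roughly balanced classes
    for _ in range(n_iters):
        Q *= (r / Q.sum(axis=1))[:, None]      # match example marginals
        Q *= (c / Q.sum(axis=0))[None, :]      # match class marginals
    return Q / Q.sum(axis=1, keepdims=True)    # rows as soft label distributions
```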
arXiv Detail & Related papers (2021-02-17T08:23:15Z) - Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification [94.55805516167369]
We propose a new approach for binary classification from $m$ U-sets for $m \ge 2$.
Our key idea is to consider an auxiliary classification task called surrogate set classification (SSC).
arXiv Detail & Related papers (2021-02-01T07:36:38Z)