Zero-shot Object Counting with Good Exemplars
- URL: http://arxiv.org/abs/2407.04948v2
- Date: Tue, 9 Jul 2024 13:34:23 GMT
- Title: Zero-shot Object Counting with Good Exemplars
- Authors: Huilin Zhu, Jingling Yuan, Zhengwei Yang, Yu Guo, Zheng Wang, Xian Zhong, Shengfeng He
- Abstract summary: Zero-shot object counting (ZOC) aims to enumerate objects in images using only the names of object classes during testing, without the need for manual annotations.
We propose the Visual Association-based Zero-shot Object Counting (VA-Count) framework.
VA-Count consists of an Exemplar Enhancement Module (EEM) and a Noise Suppression Module (NSM) that synergistically refine the process of class exemplar identification while minimizing the consequences of incorrect object identification.
- Score: 35.7544908318547
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Zero-shot object counting (ZOC) aims to enumerate objects in images using only the names of object classes during testing, without the need for manual annotations. However, a critical challenge in current ZOC methods lies in their inability to identify high-quality exemplars effectively. This deficiency hampers scalability across diverse classes and undermines the development of strong visual associations between the identified classes and image content. To this end, we propose the Visual Association-based Zero-shot Object Counting (VA-Count) framework. VA-Count consists of an Exemplar Enhancement Module (EEM) and a Noise Suppression Module (NSM) that synergistically refine the process of class exemplar identification while minimizing the consequences of incorrect object identification. The EEM utilizes advanced vision-language pretraining models to discover potential exemplars, ensuring the framework's adaptability to various classes. Meanwhile, the NSM employs contrastive learning to differentiate between optimal and suboptimal exemplar pairs, reducing the negative effects of erroneous exemplars. VA-Count demonstrates its effectiveness and scalability in zero-shot contexts with superior performance on two object counting datasets.
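The abstract describes the NSM as contrastive learning that separates optimal from suboptimal exemplar pairs. Below is a minimal PyTorch sketch of that idea, assuming trusted exemplar-crop embeddings are contrasted against suspected noisy ones; the function name, tensor shapes, and temperature are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def exemplar_contrastive_loss(good_emb: torch.Tensor,
                              noisy_emb: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
    """good_emb: (G, D) embeddings of trusted exemplar crops (G >= 2);
    noisy_emb: (N, D) embeddings of suspected false-positive crops."""
    good = F.normalize(good_emb, dim=-1)
    noisy = F.normalize(noisy_emb, dim=-1)
    g = good.size(0)
    # Each trusted exemplar treats the other trusted exemplars as
    # positives and every noisy exemplar as a negative (InfoNCE-style).
    logits = torch.cat([good @ good.t(), good @ noisy.t()], dim=1) / temperature
    diag = torch.eye(g, dtype=torch.bool, device=good.device)
    logits[:, :g][diag] = float('-inf')          # exclude self-similarity
    log_prob = F.log_softmax(logits, dim=1)
    # Average log-probability over the remaining positive entries.
    pos_log_prob = log_prob[:, :g].masked_fill(diag, 0.0)
    return -(pos_log_prob.sum(dim=1) / (g - 1)).mean()
```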
Related papers
- An Information Compensation Framework for Zero-Shot Skeleton-based Action Recognition [49.45660055499103]
Zero-shot human skeleton-based action recognition aims to construct a model that can recognize actions outside the categories seen during training.
Previous research has focused on aligning sequences' visual and semantic spatial distributions.
We introduce a new loss function sampling method to obtain a tight and robust representation.
arXiv Detail & Related papers (2024-06-02T06:53:01Z)
- Counterfactual Reasoning for Multi-Label Image Classification via Patching-Based Training [84.95281245784348]
Overemphasizing co-occurrence relationships can cause the overfitting issue of the model.
We provide a causal inference framework to show that the correlative features caused by the target object and its co-occurring objects can be regarded as a mediator.
arXiv Detail & Related papers (2024-04-09T13:13:24Z)
- Learning the Unlearned: Mitigating Feature Suppression in Contrastive Learning [45.25602203155762]
Self-Supervised Contrastive Learning has proven effective in deriving high-quality representations from unlabeled data.
A major challenge that hinders both unimodal and multimodal contrastive learning is feature suppression.
We propose a novel model-agnostic Multistage Contrastive Learning framework.
arXiv Detail & Related papers (2024-02-19T04:13:33Z)
- Hierarchical Visual Primitive Experts for Compositional Zero-Shot Learning [52.506434446439776]
Compositional zero-shot learning (CZSL) aims to recognize compositions with prior knowledge of known primitives (attribute and object).
We propose a simple and scalable framework called Composition Transformer (CoT) to address these issues.
Our method achieves SoTA performance on several benchmarks, including MIT-States, C-GQA, and VAW-CZSL.
arXiv Detail & Related papers (2023-08-08T03:24:21Z)
- Teaching CLIP to Count to Ten [18.703050317383322]
We introduce a simple yet effective method to improve the quantitative understanding of large vision-language models (VLMs).
We propose a new counting-contrastive loss used to finetune a pre-trained VLM in tandem with its original objective.
To the best of our knowledge, this work is the first to extend CLIP's capabilities to object counting.
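As a rough illustration of a counting-contrastive objective of the kind this abstract describes, the sketch below scores an image against a caption with the correct object count and against the same caption with the number word swapped; the pairing scheme and temperature are assumptions, not the paper's exact loss. In practice such a term would be added to CLIP's original image-text objective.

```python
import torch
import torch.nn.functional as F

def counting_contrastive_loss(image_emb: torch.Tensor,
                              correct_caption_emb: torch.Tensor,
                              wrong_count_caption_emb: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
    """All inputs are (B, D). The 'wrong count' caption is the same
    sentence with the number swapped, e.g. 'three dogs' -> 'five dogs'."""
    img = F.normalize(image_emb, dim=-1)
    pos = F.normalize(correct_caption_emb, dim=-1)
    neg = F.normalize(wrong_count_caption_emb, dim=-1)
    pos_sim = (img * pos).sum(dim=-1) / temperature   # (B,)
    neg_sim = (img * neg).sum(dim=-1) / temperature   # (B,)
    # Two-way classification: the correct-count caption should win.
    logits = torch.stack([pos_sim, neg_sim], dim=1)   # (B, 2)
    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)
```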
arXiv Detail & Related papers (2023-02-23T14:43:53Z)
- Resolving Semantic Confusions for Improved Zero-Shot Detection [6.72910827751713]
We propose a generative model incorporating a triplet loss that acknowledges the degree of dissimilarity between classes.
A cyclic-consistency loss is also enforced to ensure that generated visual samples of a class highly correspond to their own semantics.
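A minimal sketch of a triplet loss that "acknowledges the degree of dissimilarity between classes" follows, assuming the margin is scaled by the semantic distance between the anchor's class and the negative's class; the scaling rule is illustrative, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def semantic_triplet_loss(anchor: torch.Tensor,
                          positive: torch.Tensor,
                          negative: torch.Tensor,
                          class_dissimilarity: torch.Tensor,
                          base_margin: float = 0.2) -> torch.Tensor:
    """anchor/positive/negative: (B, D) visual features.
    class_dissimilarity: (B,) values in [0, 1], e.g. one minus the cosine
    similarity between the semantic embeddings of the two classes involved."""
    d_pos = F.pairwise_distance(anchor, positive)     # (B,)
    d_neg = F.pairwise_distance(anchor, negative)     # (B,)
    # Assumption: more dissimilar classes get a larger required margin.
    margin = base_margin * (1.0 + class_dissimilarity)
    return F.relu(d_pos - d_neg + margin).mean()
```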
arXiv Detail & Related papers (2022-12-12T18:11:48Z)
- The Overlooked Classifier in Human-Object Interaction Recognition [82.20671129356037]
We encode the semantic correlation among classes into the classification head by initializing the weights with language embeddings of HOIs.
We propose a new loss named LSE-Sign to enhance multi-label learning on a long-tailed dataset.
Our simple yet effective method enables detection-free HOI classification, outperforming state-of-the-art methods that require object detection and human pose estimation by a clear margin.
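A small sketch of the weight-initialization idea described above: seed the classification head with language embeddings of the class names, so semantically related HOI classes start with correlated weights. The encoder, shapes, and stand-in embeddings are placeholders, and the LSE-Sign loss itself is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def init_head_from_text(head: nn.Linear, class_embeddings: torch.Tensor) -> None:
    """class_embeddings: (num_classes, D) language embeddings of the class
    names, e.g. from a frozen text encoder run over the HOI labels."""
    with torch.no_grad():
        head.weight.copy_(F.normalize(class_embeddings, dim=-1))
        if head.bias is not None:
            head.bias.zero_()

# Usage sketch: 512-dim visual features classified into 600 HOI classes.
head = nn.Linear(512, 600)
text_emb = torch.randn(600, 512)   # stand-in for real language embeddings
init_head_from_text(head, text_emb)
```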
arXiv Detail & Related papers (2022-03-10T23:35:00Z)
- Explicitly Modeling the Discriminability for Instance-Aware Visual Object Tracking [13.311777431243296]
We propose a novel Instance-Aware Tracker (IAT) to excavate the discriminability of feature representations.
We implement two variants of the proposed IAT, including a video-level one and an object-level one.
Both variants achieve leading results against state-of-the-art methods while running at 30 FPS.
arXiv Detail & Related papers (2021-10-28T11:24:01Z)
- Orthogonal Projection Loss [59.61277381836491]
We develop a novel loss function termed 'Orthogonal Projection Loss' (OPL).
OPL directly enforces inter-class separation alongside intra-class clustering in the feature space.
OPL offers unique advantages as it does not require careful negative mining and is not sensitive to the batch size.
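A compact sketch consistent with the abstract's description: same-class features in a mini-batch are pushed toward cosine similarity 1 while cross-class features are pushed toward orthogonality (similarity 0). The exact weighting in the published loss may differ.

```python
import torch
import torch.nn.functional as F

def orthogonal_projection_loss(features: torch.Tensor,
                               labels: torch.Tensor) -> torch.Tensor:
    """features: (B, D) mini-batch features; labels: (B,) class indices."""
    f = F.normalize(features, dim=-1)
    sim = f @ f.t()                                     # (B, B) cosine sims
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    diag = torch.eye(labels.size(0), dtype=torch.bool, device=f.device)
    pos_mask, neg_mask = same & ~diag, ~same
    # Same-class similarity s should approach 1; cross-class similarity d
    # should approach 0 (orthogonal features), so minimize (1 - s) + |d|.
    s = sim[pos_mask].mean() if pos_mask.any() else sim.new_tensor(1.0)
    d = sim[neg_mask].abs().mean() if neg_mask.any() else sim.new_tensor(0.0)
    return (1.0 - s) + d
```

Because the loss is computed over whole masks rather than sampled triplets, it needs no negative mining, which matches the batch-size insensitivity claimed above.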
arXiv Detail & Related papers (2021-03-25T17:58:00Z)
- FSCE: Few-Shot Object Detection via Contrastive Proposal Encoding [14.896822373116729]
We present Few-Shot object detection via Contrastive proposal Encoding (FSCE).
FSCE is a simple yet effective approach to learning contrastive-aware object encodings that facilitate the classification of detected objects.
Our design outperforms current state-of-the-art works in any shot and all data splits, with up to +8.8% on the standard PASCAL VOC benchmark and +2.7% on the challenging COCO benchmark.
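As an illustration of contrastive-aware object encodings, the sketch below applies a supervised contrastive loss over RoI-proposal embeddings grouped by their assigned ground-truth class; the temperature and weighting are assumptions rather than FSCE's exact formulation.

```python
import torch
import torch.nn.functional as F

def proposal_contrastive_loss(proposal_feats: torch.Tensor,
                              proposal_labels: torch.Tensor,
                              temperature: float = 0.2) -> torch.Tensor:
    """proposal_feats: (P, D) encoded RoI features; proposal_labels: (P,)
    ground-truth classes assigned to the proposals."""
    z = F.normalize(proposal_feats, dim=-1)
    sim = z @ z.t() / temperature
    diag = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim.masked_fill_(diag, float('-inf'))              # drop self-pairs
    log_prob = F.log_softmax(sim, dim=1)
    pos_mask = (proposal_labels.unsqueeze(0) == proposal_labels.unsqueeze(1)) & ~diag
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    per_proposal = -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts)
    # Only proposals with at least one same-class partner contribute.
    has_pos = pos_mask.any(dim=1)
    return per_proposal[has_pos].mean()
```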
arXiv Detail & Related papers (2021-03-10T09:15:05Z)
- A Few-Shot Sequential Approach for Object Counting [63.82757025821265]
We introduce a class attention mechanism that sequentially attends to objects in the image and extracts their relevant features.
The proposed technique is trained on point-level annotations and uses a novel loss function that disentangles class-dependent and class-agnostic aspects of the model.
We present our results on a variety of object-counting/detection datasets, including FSOD and MS COCO.
arXiv Detail & Related papers (2020-07-03T18:23:39Z)