Active Crowd Counting with Limited Supervision
- URL: http://arxiv.org/abs/2007.06334v2
- Date: Tue, 14 Jul 2020 21:28:20 GMT
- Title: Active Crowd Counting with Limited Supervision
- Authors: Zhen Zhao, Miaojing Shi, Xiaoxiao Zhao, Li Li
- Abstract summary: We present an active learning framework which enables accurate crowd counting with limited supervision.
We first introduce an active labeling strategy to annotate the most informative images in the dataset and learn the counting model upon them.
In the last cycle, when the labeling budget is met, the large amount of unlabeled data is also utilized.
- Score: 13.09054893296829
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To learn a reliable people counter from crowd images, head center annotations
are normally required. Annotating head centers is however a laborious and
tedious process in dense crowds. In this paper, we present an active learning
framework which enables accurate crowd counting with limited supervision: given
a small labeling budget, instead of randomly selecting images to annotate, we
first introduce an active labeling strategy to annotate the most informative
images in the dataset and learn the counting model upon them. The process is
repeated such that in every cycle we select the samples that are diverse in
crowd density and dissimilar to previous selections. In the last cycle, when the
labeling budget is met, the large amount of unlabeled data is also utilized: a
distribution classifier is introduced to align the labeled data with unlabeled
data; furthermore, we propose to mix up the distribution labels and latent
representations of data in the network to particularly improve the distribution
alignment in-between training samples. We follow the popular density estimation
pipeline for crowd counting. Extensive experiments are conducted on standard
benchmarks, i.e., ShanghaiTech, UCF CC 50, Mall, TRANCOS, and DCC. By annotating a
limited number of images (e.g., 10% of the dataset), our method reaches levels
of performance not far from the state of the art, which utilizes full annotations
of the dataset.
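The cycle-wise selection of samples that are "dissimilar to previous selections" can be sketched as a greedy farthest-first search in feature space. This is a minimal illustrative sketch, not the authors' implementation: the function name `select_informative` and the use of Euclidean distance are assumptions, and the paper's additional partitioning by crowd density is omitted here for brevity.

```python
import numpy as np

def select_informative(features, labeled_idx, k):
    """Greedy farthest-first selection: at each step, pick the unlabeled
    sample whose feature vector is farthest from its nearest already
    selected sample, promoting dissimilarity to previous selections."""
    selected = list(labeled_idx)
    pool = [i for i in range(len(features)) if i not in selected]
    for _ in range(k):
        # distance from each pool sample to its nearest selected sample
        dists = [min(np.linalg.norm(features[i] - features[j])
                     for j in selected) for i in pool]
        pick = pool[int(np.argmax(dists))]
        selected.append(pick)
        pool.remove(pick)
    return selected[len(labeled_idx):]
```

With image features clustered around two density regimes, the first pick lands in the regime farthest from the seed sample, and later picks spread across the remaining pool.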
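The proposed mixing of distribution labels and latent representations follows the general mixup recipe: interpolate two latent feature vectors and their soft "labeled vs. unlabeled" labels with a Beta-distributed coefficient, then train the distribution classifier on the mixed pairs. The sketch below is a hypothetical minimal version under these assumptions; the function name `mixup_latents`, the binary source labels, and the Beta parameter are illustrative, not the authors' code.

```python
import numpy as np

def mixup_latents(z_labeled, z_unlabeled, alpha=0.75, rng=None):
    """Mix latent representations of a labeled and an unlabeled sample,
    together with their distribution labels (1 = labeled, 0 = unlabeled).

    Returns mixed features and a soft distribution label; training the
    distribution classifier on such pairs smooths the decision boundary
    in-between samples and improves distribution alignment.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    z_mix = lam * z_labeled + (1 - lam) * z_unlabeled
    y_mix = lam * 1.0 + (1 - lam) * 0.0   # soft distribution label
    return z_mix, y_mix
```

In training, the adversarial objective would then push the counter's feature extractor to fool this classifier, aligning the labeled and unlabeled feature distributions.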
Related papers
- Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition [50.61991746981703]
Current state-of-the-art LTSSL approaches rely on high-quality pseudo-labels for large-scale unlabeled data.
This paper introduces a novel probabilistic framework that unifies various recent proposals in long-tail learning.
We introduce a continuous contrastive learning method, CCL, extending our framework to unlabeled data using reliable and smoothed pseudo-labels.
arXiv Detail & Related papers (2024-10-08T15:06:10Z) - Robust Zero-Shot Crowd Counting and Localization With Adaptive Resolution SAM [55.93697196726016]
We propose a simple yet effective crowd counting method by utilizing the Segment-Everything-Everywhere Model (SEEM)
We show that SEEM's performance in dense crowd scenes is limited, primarily due to the omission of many persons in high-density areas.
Our proposed method achieves the best unsupervised performance in crowd counting, while also being comparable to some supervised methods.
arXiv Detail & Related papers (2024-02-27T13:55:17Z) - Cold PAWS: Unsupervised class discovery and addressing the cold-start
problem for semi-supervised learning [0.30458514384586394]
We propose a novel approach based on well-established self-supervised learning, clustering, and manifold learning techniques.
We test our approach using several publicly available datasets, namely CIFAR10, Imagenette, DeepWeeds, and EuroSAT.
We obtain superior performance for the datasets considered with a much simpler approach compared to other methods in the literature.
arXiv Detail & Related papers (2023-05-17T09:17:59Z) - Exploiting Diversity of Unlabeled Data for Label-Efficient
Semi-Supervised Active Learning [57.436224561482966]
Active learning is a research area that addresses the issues of expensive labeling by selecting the most important samples for labeling.
We introduce a new diversity-based initial dataset selection algorithm to select the most informative set of samples for initial labeling in the active learning setting.
Also, we propose a novel active learning query strategy, which uses diversity-based sampling on consistency-based embeddings.
arXiv Detail & Related papers (2022-07-25T16:11:55Z) - Improving Contrastive Learning on Imbalanced Seed Data via Open-World
Sampling [96.8742582581744]
We present an open-world unlabeled data sampling framework called Model-Aware K-center (MAK)
MAK follows three simple principles: tailness, proximity, and diversity.
We demonstrate that MAK can consistently improve both the overall representation quality and the class balancedness of the learned features.
arXiv Detail & Related papers (2021-11-01T15:09:41Z) - End-to-End Learning from Noisy Crowd to Supervised Machine Learning
Models [6.278267504352446]
We advocate using hybrid intelligence, i.e., combining deep models and human experts, to design an end-to-end learning framework from noisy crowd-sourced data.
We show how label aggregation can benefit from estimating the annotators' confusion matrix to improve the learning process.
We demonstrate the effectiveness of our strategies on several image datasets, using SVM and deep neural networks.
arXiv Detail & Related papers (2020-11-13T09:48:30Z) - Completely Self-Supervised Crowd Counting via Distribution Matching [92.09218454377395]
We propose a complete self-supervision approach to training models for dense crowd counting.
The only input required to train, apart from a large set of unlabeled crowd images, is the approximate upper limit of the crowd count.
Our method dwells on the idea that natural crowds follow a power law distribution, which could be leveraged to yield error signals for backpropagation.
arXiv Detail & Related papers (2020-09-14T13:20:12Z) - Learning to Count in the Crowd from Limited Labeled Data [109.2954525909007]
We focus on reducing the annotation efforts by learning to count in the crowd from limited number of labeled samples.
Specifically, we propose a Gaussian Process-based iterative learning mechanism that involves estimation of pseudo-ground truth for the unlabeled data.
arXiv Detail & Related papers (2020-07-07T04:17:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.