Learning to Count in the Crowd from Limited Labeled Data
- URL: http://arxiv.org/abs/2007.03195v2
- Date: Wed, 8 Jul 2020 17:01:17 GMT
- Title: Learning to Count in the Crowd from Limited Labeled Data
- Authors: Vishwanath A. Sindagi, Rajeev Yasarla, Deepak Sam Babu, R. Venkatesh Babu, Vishal M. Patel
- Abstract summary: We focus on reducing annotation effort by learning to count in the crowd from a limited number of labeled samples.
Specifically, we propose a Gaussian Process-based iterative learning mechanism that estimates pseudo-ground truth for the unlabeled data.
- Score: 109.2954525909007
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Recent crowd counting approaches have achieved excellent performance.
However, they are essentially based on a fully supervised paradigm and require a
large number of annotated samples. Obtaining annotations is an expensive and
labour-intensive process. In this work, we focus on reducing the annotation
effort by learning to count in the crowd from a limited number of labeled
samples while leveraging a large pool of unlabeled data. Specifically, we
propose a Gaussian Process-based iterative learning mechanism that estimates
pseudo-ground truth for the unlabeled data, which is then used as supervision
for training the network. The proposed method is shown to be effective under
reduced-data (semi-supervised) settings on several datasets, including
ShanghaiTech, UCF-QNRF, WorldExpo, and UCSD. Furthermore, we demonstrate that
the proposed method enables the network to learn to count from a synthetic
dataset while generalizing better to real-world datasets (synthetic-to-real
transfer).
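To make the mechanism concrete, below is a minimal, hypothetical sketch of GP-based pseudo-ground-truth estimation: a Gaussian Process fitted on features of the labeled images predicts counts (with uncertainty) for the unlabeled ones. The feature dimensions, variable names, and the use of scikit-learn's GaussianProcessRegressor are illustrative assumptions, not the paper's exact implementation, which operates on latent feature vectors inside the counting network.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical stand-ins for encoder features and per-image crowd counts.
rng = np.random.default_rng(0)
feat_labeled = rng.normal(size=(50, 16))     # features of labeled images
counts_labeled = rng.poisson(100, size=50)   # their annotated counts
feat_unlabeled = rng.normal(size=(200, 16))  # features of unlabeled images

# Fit a GP on (feature -> count) pairs from the labeled set, then use the
# posterior mean over unlabeled features as pseudo-ground truth.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(feat_labeled, counts_labeled)
pseudo_counts, pseudo_std = gp.predict(feat_unlabeled, return_std=True)

# The posterior variance can downweight uncertain pseudo-labels in the loss.
weights = 1.0 / (1.0 + pseudo_std)
```

In an iterative scheme of this kind, the network would be retrained on the weighted pseudo-labels, features re-extracted, and the GP refitted until convergence.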
Related papers
- Group Distributionally Robust Dataset Distillation with Risk Minimization [18.07189444450016]
We introduce an algorithm that combines clustering with the minimization of a risk measure on the loss to conduct DD.
We demonstrate its effective generalization and robustness across subgroups through numerical experiments.
arXiv Detail & Related papers (2024-02-07T09:03:04Z)
- Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution [62.71425232332837]
We show that training amortized models with noisy labels is inexpensive and surprisingly effective.
This approach significantly accelerates several feature attribution and data valuation methods, often yielding an order of magnitude speedup over existing approaches.
arXiv Detail & Related papers (2024-01-29T03:42:37Z)
- Exploring the Boundaries of Semi-Supervised Facial Expression Recognition: Learning from In-Distribution, Out-of-Distribution, and Unconstrained Data [19.442685015494316]
We present a study of 11 of the most recent semi-supervised methods in the context of facial expression recognition (FER).
Our investigation covers semi-supervised learning from in-distribution, out-of-distribution, unconstrained, and very small unlabelled data.
Our results demonstrate that FixMatch consistently achieves better performance on in-distribution unlabelled data, while ReMixMatch stands out among all methods in the out-of-distribution, unconstrained, and scarce unlabelled data scenarios (a minimal sketch of FixMatch's pseudo-labeling step appears after this list).
arXiv Detail & Related papers (2023-06-02T01:40:08Z)
- Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning [10.57079240576682]
We introduce a novel Open-Set Self-Supervised Learning problem under the assumption that a large-scale unlabeled open-set is available.
In our problem setup, it is crucial to consider the distribution mismatch between the open-set and target dataset.
We demonstrate across extensive experimental settings that SimCore significantly improves representation learning performance.
arXiv Detail & Related papers (2023-03-20T13:38:29Z)
- Dataset Distillation: A Comprehensive Review [76.26276286545284]
Dataset distillation (DD) aims to derive a much smaller dataset of synthetic samples such that models trained on it achieve performance comparable to models trained on the original dataset.
This paper gives a comprehensive review and summary of recent advances in DD and its application.
arXiv Detail & Related papers (2023-01-17T17:03:28Z)
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning from small data while matching the generalization ability of big data is one of the ultimate goals of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Learning a Self-Expressive Network for Subspace Clustering [15.096251922264281]
We propose a novel framework for subspace clustering, termed Self-Expressive Network (SENet), which employs a properly designed neural network to learn a self-expressive representation of the data.
Our SENet can not only learn the self-expressive coefficients with desired properties on the training data, but also handle out-of-sample data.
In particular, SENet yields highly competitive performance on MNIST, Fashion MNIST and Extended MNIST and state-of-the-art performance on CIFAR-10.
arXiv Detail & Related papers (2021-10-08T18:06:06Z)
- Semi-Automatic Data Annotation guided by Feature Space Projection [117.9296191012968]
We present a semi-automatic data annotation approach based on suitable feature space projection and semi-supervised label estimation.
We validate our method on the popular MNIST dataset and on images of human intestinal parasites with and without fecal impurities.
Our results demonstrate the added-value of visual analytics tools that combine complementary abilities of humans and machines for more effective machine learning.
arXiv Detail & Related papers (2020-07-27T17:03:50Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To keep training on the resulting large dataset tractable, we apply a dataset distillation strategy that compresses it into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
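As referenced in the semi-supervised FER entry above, here is a minimal sketch of FixMatch's confidence-thresholded pseudo-labeling loss on unlabeled data. The model, batch names, and the 0.95 threshold are illustrative assumptions, not taken from any of the listed papers.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, weak_batch, strong_batch, threshold=0.95):
    """Confidence-thresholded pseudo-labeling on an unlabeled batch."""
    with torch.no_grad():
        # Predict on the weakly augmented view to obtain pseudo-labels.
        probs = F.softmax(model(weak_batch), dim=-1)
        conf, pseudo = probs.max(dim=-1)
        # Keep only predictions above the confidence threshold.
        mask = (conf >= threshold).float()
    # Train the strongly augmented view against the retained pseudo-labels.
    logits = model(strong_batch)
    per_sample = F.cross_entropy(logits, pseudo, reduction="none")
    return (per_sample * mask).mean()
```

This loss is typically added to the usual supervised loss on the labeled batch, mirroring the labeled/unlabeled split used in the main paper's setting.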
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.