Curriculum for Crowd Counting -- Is it Worthy?
- URL: http://arxiv.org/abs/2401.07586v1
- Date: Mon, 15 Jan 2024 10:46:01 GMT
- Title: Curriculum for Crowd Counting -- Is it Worthy?
- Authors: Muhammad Asif Khan, Hamid Menouar, Ridha Hamila
- Abstract summary: A notably intuitive technique called Curriculum Learning (CL) has been introduced recently for training deep learning models.
In this work, we investigate the impact of curriculum learning in crowd counting using the density estimation method.
Our experiments show that curriculum learning improves the model learning performance and shortens the convergence time.
- Score: 2.462045767312954
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in deep learning techniques have achieved remarkable
performance in several computer vision problems. A notably intuitive technique
called Curriculum Learning (CL) has been introduced recently for training deep
learning models. Surprisingly, curriculum learning achieves significantly
improved results in some tasks but marginal or no improvement in others. Hence,
there is still a debate about its adoption as a standard method to train
supervised learning models. In this work, we investigate the impact of
curriculum learning in crowd counting using the density estimation method. We
performed detailed investigations by conducting 112 experiments spanning six
different CL settings and eight different crowd counting models. Our experiments show
that curriculum learning improves the model learning performance and shortens
the convergence time.
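The abstract does not detail the six CL settings, but the core idea can be illustrated with a minimal, hypothetical sketch: a density-estimation counting model trained under a single easy-to-hard curriculum, where images are ordered by a difficulty proxy (here, ground-truth head count) and the training pool grows over epochs via a linear pacing function. The model, loss, pacing, and hyperparameters below are illustrative assumptions, not the paper's exact setup.

```python
# A minimal, hypothetical sketch of curriculum learning for density-based crowd
# counting: images are ranked by a difficulty proxy (here, ground-truth head
# count) and the training pool grows from easy to hard over epochs. This is NOT
# the paper's exact protocol; the model, loss, and pacing are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset

def curriculum_train(model, dataset, difficulties, epochs=50, start_frac=0.3,
                     device="cpu"):
    """dataset[i] -> (image, density_map); difficulties[i] is a scalar proxy
    (e.g., head count) used only to order samples from easy to hard."""
    order = sorted(range(len(dataset)), key=lambda i: difficulties[i])
    criterion = nn.MSELoss()                      # pixel-wise density loss
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
    model.to(device).train()

    for epoch in range(epochs):
        # Linear pacing function: fraction of the data available this epoch.
        frac = min(1.0, start_frac + (1.0 - start_frac) * epoch / max(1, epochs - 1))
        pool = Subset(dataset, order[: max(1, int(frac * len(order)))])
        loader = DataLoader(pool, batch_size=8, shuffle=True)

        for images, density_maps in loader:
            images, density_maps = images.to(device), density_maps.to(device)
            loss = criterion(model(images), density_maps)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```

Varying the pacing function, the difficulty proxy, or the ordering direction (e.g., anti-curriculum) is the kind of design choice that distinguishes one CL setting from another in studies of this type.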
Related papers
- Machine Unlearning in Contrastive Learning [3.6218162133579694]
We introduce a novel gradient constraint-based approach for training the model to effectively achieve machine unlearning.
Our approach demonstrates proficient performance not only on contrastive learning models but also on supervised learning models.
arXiv Detail & Related papers (2024-05-12T16:09:01Z) - Cup Curriculum: Curriculum Learning on Model Capacity [1.0878040851638]
Curriculum learning aims to increase the performance of a learner on a given task by applying a specialized learning strategy.
This strategy focuses on either the dataset, the task, or the model; a curriculum over model capacity remains largely unexplored.
To close this gap, we propose the cup curriculum.
We empirically evaluate different strategies of the cup curriculum and show that it outperforms early stopping reliably while exhibiting a high resilience to overfitting.
arXiv Detail & Related papers (2023-11-07T12:55:31Z) - EfficientTrain: Exploring Generalized Curriculum Learning for Training
Visual Backbones [80.662250618795]
This paper presents a new curriculum learning approach for the efficient training of visual backbones (e.g., vision Transformers).
As an off-the-shelf method, it reduces the wall-time training cost of a wide variety of popular models by >1.5x on ImageNet-1K/22K without sacrificing accuracy.
arXiv Detail & Related papers (2022-11-17T17:38:55Z) - SLIP: Self-supervision meets Language-Image Pre-training [79.53764315471543]
We study whether self-supervised learning can aid in the use of language supervision for visual representation learning.
We introduce SLIP, a multi-task learning framework for combining self-supervised learning and CLIP pre-training.
We find that SLIP enjoys the best of both worlds: better performance than self-supervision and language supervision.
arXiv Detail & Related papers (2021-12-23T18:07:13Z) - RvS: What is Essential for Offline RL via Supervised Learning? [77.91045677562802]
Recent work has shown that supervised learning alone, without temporal difference (TD) learning, can be remarkably effective for offline RL.
In every environment suite we consider, simply maximizing likelihood with a two-layer feedforward MLP is competitive.
These experiments also probe the limits of existing RvS methods, which are comparatively weak on random data.
arXiv Detail & Related papers (2021-12-20T18:55:16Z) - Online Continual Learning with Natural Distribution Shifts: An Empirical
Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online (see the test-then-train sketch after this list).
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z) - Semi-Supervising Learning, Transfer Learning, and Knowledge Distillation
with SimCLR [2.578242050187029]
Recent breakthroughs in the field of semi-supervised learning have achieved results that match state-of-the-art traditional supervised learning methods.
SimCLR is the current state-of-the-art semi-supervised learning framework for computer vision.
arXiv Detail & Related papers (2021-08-02T01:37:39Z) - Crop-Transform-Paste: Self-Supervised Learning for Visual Tracking [137.26381337333552]
In this work, we develop the Crop-Transform-Paste operation, which is able to synthesize sufficient training data.
Since the object state is known in all synthesized data, existing deep trackers can be trained in routine ways without human annotation.
arXiv Detail & Related papers (2021-06-21T07:40:34Z) - An Analytical Theory of Curriculum Learning in Teacher-Student Networks [10.303947049948107]
In humans and animals, curriculum learning is critical to rapid learning and effective pedagogy.
In machine learning, curricula are not widely used and empirically often yield only moderate benefits.
arXiv Detail & Related papers (2021-06-15T11:48:52Z) - Curriculum Learning: A Survey [65.31516318260759]
Curriculum learning strategies have been successfully employed in all areas of machine learning.
We construct a taxonomy of curriculum learning approaches by hand, considering various classification criteria.
We build a hierarchical tree of curriculum learning methods using an agglomerative clustering algorithm.
arXiv Detail & Related papers (2021-01-25T20:08:32Z) - When Do Curricula Work? [26.072472732516335]
Ordered learning has been suggested as an improvement over standard i.i.d. training.
We conduct experiments over thousands of orderings spanning three kinds of learning: curriculum, anti-curriculum, and random-curriculum.
We find that curricula have only marginal benefits, and that randomly ordered samples perform as well or better than curricula and anti-curricula.
arXiv Detail & Related papers (2020-12-05T19:41:30Z)
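The test-then-train protocol mentioned in the online continual learning entry above can be illustrated with a short, hypothetical sketch; the stream, model, optimizer, and metric below are placeholders rather than that benchmark's actual API.

```python
# A hypothetical sketch of the test-then-train protocol for online continual
# learning: each incoming batch is evaluated BEFORE the model trains on it, so
# the accumulated accuracy measures true online performance.
import torch
import torch.nn as nn

def test_then_train(model, stream, device="cpu"):
    """stream yields (images, labels) batches in arrival order."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    model.to(device)
    correct, seen = 0, 0

    for images, labels in stream:
        images, labels = images.to(device), labels.to(device)

        # 1) Test on the incoming batch with the current model.
        model.eval()
        with torch.no_grad():
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            seen += labels.numel()

        # 2) Then train on the same batch (it joins the training data).
        model.train()
        loss = criterion(model(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return correct / max(1, seen)   # online accuracy over the stream
```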