Counting Like Human: Anthropoid Crowd Counting on Modeling the
Similarity of Objects
- URL: http://arxiv.org/abs/2212.02248v1
- Date: Fri, 2 Dec 2022 07:00:53 GMT
- Title: Counting Like Human: Anthropoid Crowd Counting on Modeling the
Similarity of Objects
- Authors: Qi Wang, Juncheng Wang, Junyu Gao, Yuan Yuan, Xuelong Li
- Abstract summary: Mainstream crowd counting methods regress a density map and integrate it to obtain counting results.
Inspired by this, we propose a rational and anthropoid crowd counting framework.
- Score: 92.80955339180119
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mainstream crowd counting methods regress a density map and integrate
it to obtain the count. Because the density value assigned to a head depends on
the distribution of its neighbors, objects of the same category are embedded with
varying values, whereas human counting relies on an invariant cue, namely the
similarity between objects. Inspired by this, we propose a rational and anthropoid
crowd counting framework. To begin with, we use the counting scalar as the
supervision signal, which provides global and implicit guidance toward similar
objects. Then, a large-kernel CNN is used to imitate the human paradigm of first
modeling invariant knowledge and then sliding over the image to compare
similarity. Next, re-parameterization of pre-trained parallel parameters is
introduced to accommodate the intra-class variance encountered during similarity
comparison. Finally, Random Scaling patches Yield (RSY) is proposed to facilitate
similarity modeling over long-range dependencies. Extensive experiments on five
challenging crowd counting benchmarks show that the proposed framework achieves
state-of-the-art performance.
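As a concrete illustration of the pipeline described in the abstract, the sketch below is a minimal, hypothetical PyTorch version of three of its ingredients: a large-kernel convolutional head with a parallel small-kernel branch, a re-parameterization step that merges the pre-trained parallel branches into one equivalent kernel, and a count-scalar loss that supervises only the integral of the prediction. This is not the authors' code; the module names, channel and kernel sizes are assumptions, and the RSY augmentation is omitted.

```python
# A minimal, hypothetical sketch (not the authors' released code) of
# count-scalar supervision with a re-parameterizable large-kernel head.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LargeKernelCountHead(nn.Module):
    """Parallel large/small kernels at train time, one merged kernel at test time."""

    def __init__(self, in_ch=64, large_k=31, small_k=3):
        super().__init__()
        self.large = nn.Conv2d(in_ch, 1, large_k, padding=large_k // 2)
        self.small = nn.Conv2d(in_ch, 1, small_k, padding=small_k // 2)
        self.large_k, self.small_k = large_k, small_k
        self.merged = None                            # set by reparameterize()

    def forward(self, feat):
        if self.merged is not None:                   # inference: single equivalent conv
            return self.merged(feat)
        return self.large(feat) + self.small(feat)    # training: parallel branches

    @torch.no_grad()
    def reparameterize(self):
        # Zero-pad the small kernel to the large size and sum weights and biases;
        # by linearity of convolution this equals the parallel sum used in training.
        pad = (self.large_k - self.small_k) // 2
        w = self.large.weight + F.pad(self.small.weight, [pad] * 4)
        b = self.large.bias + self.small.bias
        merged = nn.Conv2d(w.shape[1], 1, self.large_k, padding=self.large_k // 2)
        merged.weight.copy_(w)
        merged.bias.copy_(b)
        self.merged = merged.to(w.device)


def count_loss(pred_map, gt_count):
    # Count-scalar supervision: only the integral of the predicted map is
    # constrained; no per-pixel density target is used.
    return F.l1_loss(pred_map.sum(dim=(1, 2, 3)), gt_count)
```

The merge step relies only on the linearity of convolution, so after calling reparameterize() the single large kernel reproduces the training-time output of the two parallel branches exactly.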
Related papers
- Cluster-Aware Similarity Diffusion for Instance Retrieval [64.40171728912702]
Diffusion-based re-ranking is a common instance retrieval method that propagates similarity over a nearest neighbor graph; a generic sketch of this kind of propagation is given after this list.
We propose a novel Cluster-Aware Similarity (CAS) diffusion for instance retrieval.
arXiv Detail & Related papers (2024-06-04T14:19:50Z)
- Towards a Path Dependent Account of Category Fluency [2.66269503676104]
We present evidence towards resolving the disagreement between accounts of foraging by reformulating the models as sequence generators.
We find that category-switch predictors do not necessarily produce human-like sequences; in fact, the additional biases used by the Hills et al. (2012) model are required to improve generation quality.
arXiv Detail & Related papers (2024-05-09T16:36:56Z)
- Bayesian Beta-Bernoulli Process Sparse Coding with Deep Neural Networks [11.937283219047984]
Several approximate inference methods have been proposed for deep discrete latent variable models.
We propose a non-parametric iterative algorithm for learning discrete latent representations in such deep models.
We evaluate our method across datasets with varying characteristics and compare our results to current amortized approximate inference methods.
arXiv Detail & Related papers (2023-03-14T20:50:12Z)
- Parameter Decoupling Strategy for Semi-supervised 3D Left Atrium Segmentation [0.0]
We present a novel semi-supervised segmentation model based on parameter decoupling strategy to encourage consistent predictions from diverse views.
Our method achieves results competitive with state-of-the-art semi-supervised methods on the Atrial Challenge dataset.
arXiv Detail & Related papers (2021-09-20T14:51:42Z)
- Instance-Level Relative Saliency Ranking with Graph Reasoning [126.09138829920627]
We present a novel unified model to segment salient instances and infer relative saliency rank order.
A novel loss function is also proposed to effectively train the saliency ranking branch.
Experimental results demonstrate that our proposed model is more effective than previous methods.
arXiv Detail & Related papers (2021-07-08T13:10:42Z)
- Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z)
- Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of these estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence.
arXiv Detail & Related papers (2020-05-20T15:01:03Z)
- Learning from Aggregate Observations [82.44304647051243]
We study the problem of learning from aggregate observations where supervision signals are given to sets of instances.
We present a general probabilistic framework that accommodates a variety of aggregate observations.
Simple maximum likelihood solutions can be applied to various differentiable models.
arXiv Detail & Related papers (2020-04-14T06:18:50Z)
- An end-to-end approach for the verification problem: learning the right distance [15.553424028461885]
We augment the metric learning setting by introducing a parametric pseudo-distance, trained jointly with the encoder.
We first show it approximates a likelihood ratio which can be used for hypothesis tests.
We observe training is much simplified under the proposed approach compared to metric learning with actual distances.
arXiv Detail & Related papers (2020-02-21T18:46:06Z)
- Blocked Clusterwise Regression [0.0]
We generalize previous approaches to discrete unobserved heterogeneity by allowing each unit to have multiple latent variables.
We contribute to the theory of clustering with an over-specified number of clusters and derive new convergence rates for this setting.
arXiv Detail & Related papers (2020-01-29T23:29:31Z)
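The Cluster-Aware Similarity entry at the top of this list describes diffusion-based re-ranking as similarity propagation over a nearest neighbor graph. The sketch below shows the classic manifold-ranking form of that propagation as a generic baseline, not the CAS method itself; the kNN construction, alpha, and iteration count are illustrative assumptions.

```python
# A generic sketch of diffusion-based re-ranking on a kNN affinity graph (the
# classic manifold-ranking iteration), not the Cluster-Aware Similarity method;
# k, alpha, and the iteration count are illustrative choices.
import numpy as np


def knn_affinity(features, k=10):
    """Symmetric kNN affinity matrix built from L2-normalized feature vectors."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T
    np.fill_diagonal(sim, 0.0)
    idx = np.argsort(-sim, axis=1)[:, :k]          # each row's k nearest neighbors
    W = np.zeros_like(sim)
    rows = np.arange(sim.shape[0])[:, None]
    W[rows, idx] = sim[rows, idx]
    return np.maximum(W, W.T)                      # symmetrize the graph


def diffuse(W, query_scores, alpha=0.9, iters=30):
    """Propagate similarity: iterate f <- alpha * S f + (1 - alpha) * y."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # D^-1/2 W D^-1/2
    y = query_scores.astype(float)
    f = y.copy()
    for _ in range(iters):
        f = alpha * (S @ f) + (1.0 - alpha) * y
    return f                                       # diffused similarities for re-ranking
```

Given query_scores with 1 at the query index and 0 elsewhere, the diffused scores f provide the re-ranked ordering of the database items.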
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.