Scale-Aware Crowd Count Network with Annotation Error Correction
- URL: http://arxiv.org/abs/2312.16771v1
- Date: Thu, 28 Dec 2023 01:36:38 GMT
- Title: Scale-Aware Crowd Count Network with Annotation Error Correction
- Authors: Yi-Kuan Hsieh, Jun-Wei Hsieh, Yu-Chee Tseng, Ming-Ching Chang, Li Xin
- Abstract summary: Traditional crowd counting networks suffer from information loss when feature maps are downsized through pooling layers.
We propose a Scale-Aware Crowd Counting Network (SACC-Net) that introduces a "scale-aware" architecture with error-correcting capabilities for noisy annotations.
- Score: 19.036693336902662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional crowd counting networks suffer from information loss when feature
maps are downsized through pooling layers, leading to inaccuracies in counting
crowds at a distance. Existing methods often assume correct annotations during
training, disregarding the impact of noisy annotations, especially in crowded
scenes. Furthermore, the use of a fixed Gaussian kernel fails to account for
the varying pixel distribution with respect to the camera distance. To overcome
these challenges, we propose a Scale-Aware Crowd Counting Network (SACC-Net)
that introduces a "scale-aware" architecture with error-correcting
capabilities for noisy annotations. For the first time, we simultaneously
model labeling errors (mean) and scale variations (variance) by
spatially-varying Gaussian distributions to produce fine-grained heat maps for
crowd counting. Furthermore, the proposed adaptive Gaussian kernel variance
enables the model to learn dynamically with a low-rank approximation, leading
to improved convergence efficiency with comparable accuracy. The performance of
SACC-Net is extensively evaluated on four public datasets: UCF-QNRF, UCF CC 50,
NWPU, and ShanghaiTech A-B. Experimental results demonstrate that SACC-Net
outperforms all state-of-the-art methods, validating its effectiveness in
achieving superior crowd counting accuracy.
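The core mechanism described above, each annotation rendered as a 2-D Gaussian whose mean absorbs the labeling error and whose variance tracks the local scale, can be illustrated with a short sketch. The NumPy snippet below is only a minimal illustration under assumed inputs: the function name, the per-point offsets, and the sigmas are hypothetical placeholders, not the quantities SACC-Net actually predicts.

```python
import numpy as np

def spatially_varying_density_map(points, offsets, sigmas, height, width):
    """Render a density map in which each annotated head is a 2-D Gaussian.

    The Gaussian mean is the annotated point plus a correction offset
    (labeling error); the variance reflects the local scale (camera distance).
    """
    ys, xs = np.mgrid[0:height, 0:width]
    density = np.zeros((height, width), dtype=np.float64)
    for (px, py), (dx, dy), sigma in zip(points, offsets, sigmas):
        cx, cy = px + dx, py + dy                    # corrected head center
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
        density += g / (2.0 * np.pi * sigma ** 2)    # each head integrates to ~1
    return density

# Toy usage: three annotated heads; larger sigma for heads closer to the camera.
points  = [(20, 30), (60, 40), (100, 80)]            # (x, y) annotations
offsets = [(0.5, -1.0), (0.0, 0.0), (-2.0, 1.5)]     # hypothetical error corrections
sigmas  = [3.0, 5.0, 8.0]                            # hypothetical per-point scales
dmap = spatially_varying_density_map(points, offsets, sigmas, 128, 128)
print(f"estimated count: {dmap.sum():.2f}")          # close to 3
```

In SACC-Net the per-point means and variances are adaptive rather than fixed as in this sketch, which is what distinguishes it from the fixed-kernel baselines the abstract criticizes.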
Related papers
- Diffusion-based Data Augmentation for Object Counting Problems [62.63346162144445]
We develop a pipeline that utilizes a diffusion model to generate extensive training data.
We are the first to generate images conditioned on a location dot map with a diffusion model.
Our proposed counting loss for the diffusion model effectively minimizes the discrepancies between the location dot map and the generated crowd images.
arXiv Detail & Related papers (2024-01-25T07:28:22Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that their transmission in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
- Scale-Aware Crowd Counting Using a Joint Likelihood Density Map and Synthetic Fusion Pyramid Network [15.882525477601183]
We develop a Synthetic Fusion Pyramid Network (SPF-Net) with a scale-aware loss function design for accurate crowd counting.
Existing crowd-counting methods assume that the training annotation points were accurate and thus ignore the fact that noisy annotations can lead to large model-learning bias and counting error.
This work is the first to properly handle such noise at multiple scales in an end-to-end loss design and thus pushes the crowd counting state-of-the-art.
arXiv Detail & Related papers (2022-11-13T06:52:47Z)
- Scale Attention for Learning Deep Face Representation: A Study Against Visual Scale Variation [69.45176408639483]
We reform the conv layer by resorting to the scale-space theory.
We build a novel network named SCale AttentioN Conv Neural Network (SCAN-CNN).
As a single-shot scheme, its inference is more efficient than multi-shot fusion.
arXiv Detail & Related papers (2022-09-19T06:35:04Z)
- Wisdom of (Binned) Crowds: A Bayesian Stratification Paradigm for Crowd Counting [16.09823718637455]
We analyze the performance of crowd counting approaches across standard datasets at the per-stratum level and in aggregate.
Our contributions represent a nuanced, statistically balanced and fine-grained characterization of performance for crowd counting approaches.
arXiv Detail & Related papers (2021-08-19T16:50:31Z)
- Direct Measure Matching for Crowd Counting [59.66286603624411]
We propose a new measure-based counting approach to regress the predicted density maps to the scattered point-annotated ground truth directly.
In this paper, we derive a semi-balanced form of Sinkhorn divergence, based on which a Sinkhorn counting loss is designed for measure matching.
arXiv Detail & Related papers (2021-07-04T06:37:33Z)
- Wireless Federated Learning with Limited Communication and Differential Privacy [21.328507360172203]
This paper investigates the role of dimensionality reduction in efficient communication and in the differential privacy (DP) of the local datasets at the remote users for an over-the-air computation (AirComp)-based federated learning (FL) model.
arXiv Detail & Related papers (2021-06-01T15:23:12Z)
- Distribution Matching for Crowd Counting [51.90971145453012]
We show that imposing Gaussians on annotations hurts generalization performance.
We propose to use Distribution Matching for crowd COUNTing (DM-Count).
In terms of Mean Absolute Error, DM-Count outperforms the previous state-of-the-art methods; a minimal count-loss sketch follows this list.
arXiv Detail & Related papers (2020-09-28T04:57:23Z)
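A common ingredient of the point-annotation losses referenced above (DM-Count and the Sinkhorn-based measure matching) is a count term comparing the integral of the predicted density map with the number of annotated points. The sketch below shows only such a count term, assuming a count_loss helper introduced here purely for illustration; the actual methods add distribution- or measure-matching terms (e.g., optimal-transport or Sinkhorn divergences) that are not reproduced here.

```python
import numpy as np

def count_loss(pred_density, gt_points):
    """Absolute difference between the predicted count (integral of the
    density map) and the number of annotated points."""
    return abs(pred_density.sum() - len(gt_points))

# Toy usage with a random "prediction" and five annotated points.
rng = np.random.default_rng(0)
pred = rng.random((64, 64)) * 0.002                  # stand-in for a network output
points = [(10, 12), (30, 40), (50, 5), (22, 60), (44, 33)]
print(f"count loss: {count_loss(pred, points):.3f}")
```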
This list is automatically generated from the titles and abstracts of the papers on this site.