Congested Crowd Instance Localization with Dilated Convolutional Swin
Transformer
- URL: http://arxiv.org/abs/2108.00584v1
- Date: Mon, 2 Aug 2021 01:27:53 GMT
- Title: Congested Crowd Instance Localization with Dilated Convolutional Swin
Transformer
- Authors: Junyu Gao, Maoguo Gong, Xuelong Li
- Abstract summary: Crowd localization is a new computer vision task, evolved from crowd counting.
In this paper, we focus on how to achieve precise instance localization in high-density crowd scenes.
We propose a Dilated Convolutional Swin Transformer (DCST) for congested crowd scenes.
- Score: 119.72951028190586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Crowd localization is a new computer vision task, evolved from crowd
counting. Different from the latter, it provides more precise location
information for each instance, not just counting numbers for the whole crowd
scene, which brings greater challenges, especially in extremely congested crowd
scenes. In this paper, we focus on how to achieve precise instance localization
in high-density crowd scenes, and on alleviating the degraded feature
extraction of traditional models caused by target occlusion, image blur,
etc. To this end, we propose a Dilated Convolutional
Swin Transformer (DCST) for congested crowd scenes. Specifically, a
window-based vision transformer is introduced into the crowd localization task,
which effectively improves the capacity of representation learning. Then, a
well-designed dilated convolutional module is inserted at different stages of
the transformer to enhance large-range contextual information.
Extensive experiments demonstrate the effectiveness of the proposed method,
which achieves state-of-the-art performance on five popular datasets. In
particular, the proposed model achieves an F1-measure of 77.5% and an MAE of
84.2 in terms of localization and counting performance, respectively.
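The key ingredient, dilated convolution, enlarges the receptive field without adding parameters: a kernel of size k with dilation d covers an effective window of k + (k-1)(d-1). The sketch below is a generic naive NumPy implementation of a 2-D dilated convolution to illustrate this effect; it is not the paper's actual DCST module, and all names here are illustrative.

```python
import numpy as np

def dilated_conv2d(x, w, dilation=1):
    """Naive 'valid' 2-D dilated convolution (stride 1, no padding).

    A k x k kernel with dilation d samples the input on a grid whose
    effective footprint is k + (k - 1) * (d - 1), enlarging the
    contextual range without extra weights.
    """
    k = w.shape[0]
    eff = k + (k - 1) * (dilation - 1)  # effective kernel size
    H, W = x.shape
    out = np.zeros((H - eff + 1, W - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the input with step `dilation` inside the window.
            patch = x[i:i + eff:dilation, j:j + eff:dilation]
            out[i, j] = np.sum(patch * w)
    return out

# A 3x3 kernel with dilation 2 covers a 5x5 footprint of the input.
out = dilated_conv2d(np.ones((7, 7)), np.ones((3, 3)), dilation=2)
print(out.shape)  # (3, 3): each output sums nine dilated samples
```

In practice such a module would be implemented with a framework primitive (e.g. a convolution with a dilation argument) and inserted between transformer stages; the loop form above only makes the sampling pattern explicit.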
Related papers
- Towards Grouping in Large Scenes with Occlusion-aware Spatio-temporal
Transformers [47.83631610648981]
Group detection especially for large-scale scenes has many potential applications for public safety and smart cities.
Existing methods fail to cope with frequent occlusions in large-scale scenes with multiple people.
In this paper, we propose an end-to-end Transformer-based framework for group detection in large-scale scenes.
arXiv Detail & Related papers (2023-10-30T11:17:22Z)
- CrowdFormer: Weakly-supervised Crowd Counting with Improved Generalizability [2.8174125805742416]
We propose a weakly-supervised method for crowd counting using a pyramid vision transformer.
Our method is comparable to the state-of-the-art on the benchmark crowd datasets.
arXiv Detail & Related papers (2022-03-07T23:10:40Z)
- Boosting Crowd Counting via Multifaceted Attention [109.89185492364386]
Large-scale variations often exist within crowd images.
Neither fixed-size convolution kernel of CNN nor fixed-size attention of recent vision transformers can handle this kind of variation.
We propose a Multifaceted Attention Network (MAN) to improve transformer models in local spatial relation encoding.
arXiv Detail & Related papers (2022-03-05T01:36:43Z)
- An End-to-End Transformer Model for Crowd Localization [64.15335535775883]
Crowd localization, predicting head positions, is a more practical and high-level task than simply counting.
Existing methods employ pseudo-bounding boxes or pre-designed localization maps, relying on complex post-processing to obtain the head positions.
We propose an elegant, end-to-end Crowd Localization TRansformer that solves the task in the regression-based paradigm.
arXiv Detail & Related papers (2022-02-26T05:21:30Z)
- Scene-Adaptive Attention Network for Crowd Counting [31.29858034122248]
This paper proposes a scene-adaptive attention network, termed SAANet.
We design a deformable attention in-built Transformer backbone, which learns adaptive feature representations with deformable sampling locations and dynamic attention weights.
We conduct extensive experiments on four challenging crowd counting benchmarks, demonstrating that our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-12-31T15:03:17Z)
- PANet: Perspective-Aware Network with Dynamic Receptive Fields and Self-Distilling Supervision for Crowd Counting [63.84828478688975]
We propose a novel perspective-aware approach called PANet to address the perspective problem.
Based on the observation that the size of the objects varies greatly in one image due to the perspective effect, we propose the dynamic receptive fields (DRF) framework.
The framework is able to adjust the receptive field by the dilated convolution parameters according to the input image, which helps the model to extract more discriminative features for each local region.
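Adjusting the receptive field via dilation can be made concrete with a small heuristic: pick the dilation rate d so that the effective footprint k + (k-1)(d-1) of a k x k kernel roughly covers the estimated object size. This is a hypothetical illustration of the idea, not PANet's actual DRF mechanism; the function name and heuristic are assumptions.

```python
def select_dilation(mean_head_size_px, base_kernel=3):
    """Choose a dilation rate so the effective receptive field of a
    base_kernel x base_kernel convolution roughly matches the estimated
    head size in pixels (illustrative heuristic only).

    Solves mean_head_size_px = k + (k - 1) * (d - 1) for d,
    clamped to a minimum dilation of 1.
    """
    k = base_kernel
    d = (mean_head_size_px - k) / (k - 1) + 1
    return max(1, round(d))

# Larger apparent heads (near the camera) get larger dilation rates.
for size in (3, 7, 11):
    print(size, "->", select_dilation(size))  # 3 -> 1, 7 -> 3, 11 -> 5
```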
arXiv Detail & Related papers (2021-10-31T04:43:05Z)
- Vision Transformers for Dense Prediction [77.34726150561087]
We introduce dense vision transformers, an architecture that leverages vision transformers in place of convolutional networks as a backbone for dense prediction tasks.
Our experiments show that this architecture yields substantial improvements on dense prediction tasks.
arXiv Detail & Related papers (2021-03-24T18:01:17Z)
- Crowd Scene Analysis by Output Encoding [38.69524011345539]
We propose a Compressed Output Sensing (CSOE) scheme, which casts detecting coordinates of small objects into a task of signal regression in encoding signal space.
CSOE helps to boost localization performance in circumstances where targets are highly crowded without huge scale variation.
We also develop an Adaptive Receptive Field Weighting (ARFW) module, which deals with scale variation issue.
arXiv Detail & Related papers (2020-01-27T01:34:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.