Mahalanobis Distance-based Multi-view Optimal Transport for Multi-view Crowd Localization
- URL: http://arxiv.org/abs/2409.01726v1
- Date: Tue, 3 Sep 2024 09:10:51 GMT
- Title: Mahalanobis Distance-based Multi-view Optimal Transport for Multi-view Crowd Localization
- Authors: Qi Zhang, Kaiyi Zhang, Antoni B. Chan, Hui Huang
- Abstract summary: We propose a novel Mahalanobis distance-based multi-view optimal transport loss specifically designed for multi-view crowd localization.
Experiments demonstrate the advantage of the proposed method over density map supervision and the common Euclidean distance-based optimal transport loss.
- Score: 50.69184586442379
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-view crowd localization predicts the ground locations of all people in the scene. Typical methods first estimate crowd density maps on the ground plane and then obtain the crowd locations. However, the performance of existing methods is limited by the ambiguity of the density maps in crowded areas, where local peaks can be smoothed away. To mitigate this weakness of density-map supervision, optimal transport-based point supervision methods have been proposed for single-image crowd localization, but they have not yet been explored for multi-view crowd localization. Thus, in this paper, we propose a novel Mahalanobis distance-based multi-view optimal transport (M-MVOT) loss specifically designed for multi-view crowd localization. First, we replace the Euclidean-based transport cost with the Mahalanobis distance, which defines elliptical iso-contours in the cost function whose long-axis and short-axis directions are guided by the view ray direction. Second, the object-to-camera distance in each view is used to further adjust the optimal transport cost of each location, so that wrong predictions far from the camera are penalized more heavily. Finally, we propose a strategy that considers all input camera views in the model loss (M-MVOT) by computing the optimal transport cost for each ground-truth point based on its closest camera. Experiments on several multi-view crowd localization datasets demonstrate the advantage of the proposed method over density map-based supervision and the common Euclidean distance-based optimal transport loss. Project page: https://vcc.tech/research/2024/MVOT.
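To make the view-ray-guided transport cost concrete, below is a minimal NumPy sketch, not the authors' implementation: it builds a cost matrix between predicted and ground-truth ground-plane points using a Mahalanobis distance whose ellipse axes follow the view ray of each ground-truth point's closest camera, and scales the cost with object-to-camera distance. The function name, the sigma_long / sigma_short / dist_scale values, and the exact form of the distance weighting are assumptions for illustration only.
```python
import numpy as np

def mahalanobis_transport_cost(pred_pts, gt_pts, cam_pos,
                               sigma_long=4.0, sigma_short=1.0, dist_scale=0.01):
    """Hypothetical sketch of a Mahalanobis-distance transport cost on the ground plane.

    pred_pts: (N, 2) predicted ground-plane locations
    gt_pts:   (M, 2) ground-truth locations
    cam_pos:  (V, 2) ground-plane positions of the V cameras
    The sigma_* and dist_scale values are illustrative, not taken from the paper.
    """
    N, M = len(pred_pts), len(gt_pts)
    cost = np.zeros((N, M))
    for j, g in enumerate(gt_pts):
        # Each ground-truth point uses its closest camera (the paper's per-point strategy).
        cam = cam_pos[np.argmin(np.linalg.norm(cam_pos - g, axis=1))]
        ray = g - cam
        d = np.linalg.norm(ray) + 1e-8             # object-to-camera distance
        u = ray / d                                # long axis: along the view ray
        v = np.array([-u[1], u[0]])                # short axis: perpendicular to the ray
        # Elliptical iso-contours: Sigma^{-1} = R diag(1/sl^2, 1/ss^2) R^T
        R = np.stack([u, v], axis=1)
        inv_sigma = R @ np.diag([1.0 / sigma_long**2, 1.0 / sigma_short**2]) @ R.T
        diff = pred_pts - g                        # (N, 2) offsets to this ground-truth point
        maha = np.einsum('ni,ij,nj->n', diff, inv_sigma, diff)
        # Errors for ground-truth points far from the camera are penalized more heavily.
        cost[:, j] = (1.0 + dist_scale * d) * maha
    return cost
```
The resulting cost matrix could then be passed to any standard optimal transport solver (e.g., an entropic Sinkhorn solver) to obtain the transport plan used in a point-supervision loss; the solver choice and hyperparameters are likewise assumptions, not the paper's settings.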
Related papers
- Monocular BEV Perception of Road Scenes via Front-to-Top View Projection [57.19891435386843]
We present a novel framework that reconstructs a local map formed by road layout and vehicle occupancy in the bird's-eye view.
Our model runs at 25 FPS on a single GPU, which is efficient and applicable for real-time panorama HD map reconstruction.
arXiv Detail & Related papers (2022-11-15T13:52:41Z)
- Multiview Detection with Cardboard Human Modeling [23.072791405965415]
We propose a new pedestrian representation scheme based on human point cloud modeling.
Specifically, using ray tracing for holistic human depth estimation, we model pedestrians as upright, thin cardboard point clouds on the ground.
arXiv Detail & Related papers (2022-07-05T12:47:26Z)
- Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image [91.29546868637911]
This paper addresses the problem of vehicle-mounted camera localization by matching a ground-level image with an overhead-view satellite map.
The key idea is to formulate the task as pose estimation and solve it by neural-net based optimization.
Experiments on standard autonomous vehicle localization datasets have confirmed the superiority of the proposed method.
arXiv Detail & Related papers (2022-04-10T19:16:58Z)
- LDC-Net: A Unified Framework for Localization, Detection and Counting in Dense Crowds [103.8635206945196]
The rapid development of visual crowd analysis shows a trend toward counting people by localization or even detection, rather than simply summing a density map.
Recent work on crowd localization and detection has two limitations: 1) typical detection methods cannot handle dense crowds with large variations in scale; 2) density map methods suffer from deficient position and box prediction, especially in high-density or large-size crowds.
arXiv Detail & Related papers (2021-10-10T07:55:44Z)
- Cascaded Residual Density Network for Crowd Counting [63.714719914701014]
We propose a novel Cascaded Residual Density Network (CRDNet) that uses a coarse-to-fine approach to generate high-quality density maps for more accurate crowd counting.
A novel additional local count loss is presented to refine the accuracy of crowd counting.
arXiv Detail & Related papers (2021-07-29T03:07:11Z)
- Coarse-to-fine Semantic Localization with HD Map for Autonomous Driving in Structural Scenes [1.1024591739346292]
We propose a cost-effective vehicle localization system with an HD map for autonomous driving, using cameras as primary sensors.
We formulate vision-based localization as a data association problem that maps visual semantics to landmarks in the HD map.
We evaluate our method on two datasets and demonstrate that the proposed approach yields promising localization results in different driving scenarios.
arXiv Detail & Related papers (2021-07-06T11:58:55Z)
- Tracking-by-Counting: Using Network Flows on Crowd Density Maps for Tracking Multiple Targets [96.98888948518815]
State-of-the-art multi-object tracking (MOT) methods follow the tracking-by-detection paradigm.
We propose a new MOT paradigm, tracking-by-counting, tailored for crowded scenes.
arXiv Detail & Related papers (2020-07-18T19:51:53Z)