Privacy-preserving Pedestrian Tracking using Distributed 3D LiDARs
- URL: http://arxiv.org/abs/2303.09915v3
- Date: Wed, 22 Mar 2023 00:29:34 GMT
- Title: Privacy-preserving Pedestrian Tracking using Distributed 3D LiDARs
- Authors: Masakazu Ohno, Riki Ukyo, Tatsuya Amano, Hamada Rizk and Hirozumi Yamaguchi
- Abstract summary: We introduce a novel privacy-preserving system for pedestrian tracking in smart environments using multiple distributed LiDARs with non-overlapping views.
The system is designed to leverage LiDAR devices to track pedestrians in areas that are only partially covered due to practical constraints.
To boost the system's robustness, we leverage a probabilistic approach to model and adapt the dynamic mobility patterns of individuals.
- Score: 0.2519906683279152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing demand for intelligent environments has spurred a wave
of privacy-aware applications that make individuals' lives more comfortable and
safe. Examples of these applications include pedestrian tracking systems in
large areas. Despite the ubiquity of camera-based systems, they are not a
preferred solution because of the risk of leaking pedestrians' privacy. In this
paper, we introduce a novel privacy-preserving system for pedestrian tracking
in smart environments using multiple distributed LiDARs with non-overlapping
views. The system is designed to leverage LiDAR devices to track pedestrians
even in areas that are only partially covered due to practical constraints,
e.g., occlusion or cost. To this end, the system uses the point clouds captured
by the different LiDARs to extract discriminative features that are used to
train a metric learning model for pedestrian matching. To boost the system's
robustness, we leverage a probabilistic approach to model and adapt the dynamic
mobility patterns of individuals and thus connect their sub-trajectories. We
deployed the system in a large-scale testbed with 70 colorless LiDARs and
conducted three different experiments. The evaluation at the entrance hall
confirms the system's ability to accurately track pedestrians with a 0.98
F-measure, even across areas with no sensor coverage. This result highlights
the promise of the proposed system as a next-generation privacy-preserving
tracking solution for smart environments.
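As an illustration of the pipeline the abstract describes, the sketch below shows how embedding-based matching across non-overlapping LiDARs and probabilistic linking of sub-trajectories could fit together. It is a minimal sketch under assumptions, not the authors' implementation: the function names, the distance threshold, and the constant-velocity Gaussian motion model (standing in for the paper's probabilistic mobility model) are all hypothetical.

```python
# Illustrative sketch only: how cross-LiDAR appearance matching and
# probabilistic sub-trajectory linking could be combined. Names, thresholds,
# and the constant-velocity motion model are assumptions, not the paper's code.
import numpy as np

def match_by_embedding(query_emb, gallery_embs, max_dist=0.7):
    """Return the index of the closest gallery embedding, or None if too far.

    query_emb:    (D,) embedding of a pedestrian point-cloud segment.
    gallery_embs: (N, D) embeddings of candidates seen by other LiDARs.
    The embeddings are assumed to come from a metric-learning model.
    """
    if gallery_embs.shape[0] == 0:
        return None
    dists = np.linalg.norm(gallery_embs - query_emb, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] < max_dist else None

def link_likelihood(last_pos, last_vel, gap_s, new_pos, sigma=1.5):
    """Gaussian likelihood that new_pos continues an interrupted sub-trajectory.

    A constant-velocity prediction over the unobserved gap stands in for the
    paper's probabilistic mobility model.
    """
    predicted = last_pos + last_vel * gap_s
    err = np.linalg.norm(new_pos - predicted)
    return float(np.exp(-0.5 * (err / (sigma * max(gap_s, 1.0))) ** 2))

def connect_subtrajectories(track_end, candidates, min_likelihood=0.1):
    """Pick the candidate sub-trajectory start that best continues track_end.

    track_end:  dict with keys 'pos' (2,), 'vel' (2,), 'emb' (D,), 't' (seconds).
    candidates: list of dicts with keys 'pos', 'emb', 't'.
    """
    best_idx, best_score = None, min_likelihood
    for i, cand in enumerate(candidates):
        # Appearance gate: reject candidates whose embedding is too far away.
        if match_by_embedding(track_end["emb"], cand["emb"][None, :]) is None:
            continue
        # Motion gate: score how plausibly the candidate continues the track.
        score = link_likelihood(track_end["pos"], track_end["vel"],
                                cand["t"] - track_end["t"], cand["pos"])
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

In the paper's system the embeddings would come from the metric learning model trained on LiDAR point-cloud features; the sketch only shows how an appearance gate and a motion gate can combine to connect sub-trajectories across uncovered areas.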
Related papers
- Multimodal Perception System for Real Open Environment [0.0]
The proposed system includes an embedded computation platform, cameras, ultrasonic sensors, GPS, and IMU devices.
Unlike the traditional frameworks, our system integrates multiple sensors with advanced computer vision algorithms to help users walk outside reliably.
arXiv Detail & Related papers (2024-10-10T13:53:42Z)
- Open3DTrack: Towards Open-Vocabulary 3D Multi-Object Tracking [73.05477052645885]
We introduce open-vocabulary 3D tracking, which extends the scope of 3D tracking to include objects beyond predefined categories.
We propose a novel approach that integrates open-vocabulary capabilities into a 3D tracking framework, allowing for generalization to unseen object classes.
arXiv Detail & Related papers (2024-10-02T15:48:42Z)
- YOLORe-IDNet: An Efficient Multi-Camera System for Person-Tracking [2.5761958263376745]
We propose a person-tracking system that combines correlation filters and Intersection Over Union (IOU) constraints for robust tracking.
The proposed system quickly identifies and tracks suspects in real time across multiple cameras.
It is computationally efficient and achieves a high F1-score of 79% and an IOU of 59%, comparable to existing state-of-the-art algorithms (a generic sketch of IOU-gated association follows the related-papers list below).
arXiv Detail & Related papers (2023-09-23T14:11:13Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Unsupervised Adaptation from Repeated Traversals for Autonomous Driving [54.59577283226982]
Self-driving cars must generalize to the end-user's environment to operate reliably.
One potential solution is to leverage unlabeled data collected from the end-users' environments.
However, there is no reliable signal in the target domain to supervise the adaptation process.
We show that the simple additional assumption of repeated traversals of the same routes is sufficient to obtain a potent signal that allows us to perform iterative self-training of 3D object detectors on the target domain.
arXiv Detail & Related papers (2023-03-27T15:07:55Z)
- Towards an Error-free Deep Occupancy Detector for Smart Camera Parking System [0.26249027950824505]
We propose an end-to-end smart camera parking system that provides autonomous occupancy detection using an object detector called OcpDet.
Our detector also provides meaningful information from contrastive modules covering training and spatial knowledge, which avert false detections during inference.
We benchmark OcpDet on the existing PKLot dataset and reach competitive results compared to traditional classification solutions.
arXiv Detail & Related papers (2022-08-17T11:02:29Z)
- STCrowd: A Multimodal Dataset for Pedestrian Perception in Crowded Scenes [78.95447086305381]
Accurately detecting and tracking pedestrians in 3D space is challenging due to large variations in rotations, poses and scales.
Existing benchmarks either only provide 2D annotations, or have limited 3D annotations with low-density pedestrian distribution.
We introduce a large-scale multimodal dataset, STCrowd, to better evaluate pedestrian perception algorithms in crowded scenarios.
arXiv Detail & Related papers (2022-04-03T08:26:07Z)
- Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet, which is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion, which directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
- IntentNet: Learning to Predict Intention from Raw Sensor Data [86.74403297781039]
In this paper, we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor and dynamic maps of the environment.
Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reducing reaction time in self-driving applications.
arXiv Detail & Related papers (2021-01-20T00:31:52Z)
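For context on the YOLORe-IDNet entry above (the one reporting an IOU of 59%), here is a minimal, generic sketch of IOU computation and IOU-gated association. It is an illustration of the standard technique only, not that paper's implementation; the function names and threshold are assumptions.

```python
# Generic IOU helper and gating step, purely illustrative; not taken from
# YOLORe-IDNet or any other paper listed here. Boxes are (x1, y1, x2, y2).
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def associate(track_box, detections, min_iou=0.3):
    """Return the index of the best-overlapping detection, or None."""
    scored = [(iou(track_box, d), i) for i, d in enumerate(detections)]
    scored = [s for s in scored if s[0] >= min_iou]
    return max(scored)[1] if scored else None
```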