A Privacy Enhancing Technique to Evade Detection by Street Video Cameras Without Using Adversarial Accessories
- URL: http://arxiv.org/abs/2501.15653v1
- Date: Sun, 26 Jan 2025 19:29:49 GMT
- Title: A Privacy Enhancing Technique to Evade Detection by Street Video Cameras Without Using Adversarial Accessories
- Authors: Jacob Shams, Ben Nassi, Satoru Koda, Asaf Shabtai, Yuval Elovici
- Abstract summary: We leverage a novel side effect of this gap between the laboratory and the real world: location-based weakness in pedestrian detection.
We show how privacy-concerned pedestrians can leverage blind spots to evade detection by constructing a minimum confidence path between two points in a scene.
We propose a novel countermeasure to improve the confidence of pedestrian detectors in blind spots, raising the max/average confidence of paths generated by our technique by 0.09 and 0.05, respectively.
- Score: 28.431929351359734
- Abstract: In this paper, we propose a privacy-enhancing technique leveraging an inherent property of automatic pedestrian detection algorithms, namely, that the training of deep neural network (DNN) based methods is generally performed using curated datasets and laboratory settings, while the operational areas of these methods are dynamic real-world environments. In particular, we leverage a novel side effect of this gap between the laboratory and the real world: location-based weakness in pedestrian detection. We demonstrate that the position (distance, angle, height) of a person, and ambient light level, directly impact the confidence of a pedestrian detector when detecting the person. We then demonstrate that this phenomenon is present in pedestrian detectors observing a stationary scene of pedestrian traffic, with blind spot areas of weak detection of pedestrians with low confidence. We show how privacy-concerned pedestrians can leverage these blind spots to evade detection by constructing a minimum confidence path between two points in a scene, reducing the maximum confidence and average confidence of the path by up to 0.09 and 0.13, respectively, over direct and random paths through the scene. To counter this phenomenon, and force the use of more costly and sophisticated methods to leverage this vulnerability, we propose a novel countermeasure to improve the confidence of pedestrian detectors in blind spots, raising the max/average confidence of paths generated by our technique by 0.09 and 0.05, respectively. In addition, we demonstrate that our countermeasure improves a Faster R-CNN-based pedestrian detector's TPR and average true positive confidence by 0.03 and 0.15, respectively.
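The abstract's minimum-confidence path can be framed as a shortest-path search over a grid of per-location detector confidences. The sketch below is a minimal illustration of that idea, not the paper's actual method: it assumes a hypothetical `conf_grid` of confidence scores (which the paper would obtain from a pedestrian detector observing the scene) and uses Dijkstra's algorithm to minimize the confidence accumulated along a path, so the route naturally threads through blind spots.

```python
import heapq

def min_confidence_path(conf_grid, start, goal):
    """Find a path from start to goal minimizing the total detector
    confidence accumulated along the way (Dijkstra's algorithm).
    conf_grid[r][c] is a hypothetical per-cell confidence in [0, 1]."""
    rows, cols = len(conf_grid), len(conf_grid[0])
    dist = {start: conf_grid[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + conf_grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path by walking predecessors back to the start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy scene: a low-confidence "blind spot" corridor down the middle.
grid = [
    [0.9, 0.2, 0.9],
    [0.9, 0.1, 0.9],
    [0.9, 0.2, 0.9],
]
path = min_confidence_path(grid, (0, 1), (2, 1))
# The path stays in the low-confidence column: [(0, 1), (1, 1), (2, 1)]
```

The paper also reports reducing the *maximum* confidence along the path; a variant of this search that propagates `max(d, conf_grid[nr][nc])` instead of the sum would target that objective.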
Related papers
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
- Sparse Sampling is All You Need for Fast Wrong-way Cycling Detection in CCTV Videos [36.1376919510996]
This paper formulates a problem of detecting wrong-way cycling ratios in CCTV videos.
We propose a sparse sampling method called WWC-Predictor to efficiently solve this problem.
Our approach achieves an average error rate of a mere 1.475% while taking only 19.12% GPU time.
arXiv Detail & Related papers (2024-05-12T14:16:05Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Enhancing Infrared Small Target Detection Robustness with Bi-Level Adversarial Framework [61.34862133870934]
We propose a bi-level adversarial framework to promote the robustness of detection in the presence of distinct corruptions.
Our scheme improves IoU by a remarkable 21.96% across a wide array of corruptions and by a notable 4.97% on the general benchmark.
arXiv Detail & Related papers (2023-09-03T06:35:07Z)
- Unsupervised Adaptation from Repeated Traversals for Autonomous Driving [54.59577283226982]
Self-driving cars must generalize to the end-user's environment to operate reliably.
One potential solution is to leverage unlabeled data collected from the end-users' environments.
However, there is no reliable signal in the target domain to supervise the adaptation process.
We show that the simple additional assumption of repeated traversals is sufficient to obtain a potent signal that allows us to perform iterative self-training of 3D object detectors on the target domain.
arXiv Detail & Related papers (2023-03-27T15:07:55Z)
- Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fit to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that this fine-tuning substantially reduces the domain gap to new driving environments.
arXiv Detail & Related papers (2021-03-26T01:18:11Z)
- Holistic Grid Fusion Based Stop Line Estimation [5.5476621209686225]
Knowing in advance where to stop at an intersection is essential for controlling the longitudinal velocity of the vehicle.
Most existing methods in the literature rely solely on cameras to detect stop lines, which is typically insufficient in terms of detection range.
We propose a method that takes advantage of fused multi-sensory data including stereo camera and lidar as input and utilizes a carefully designed convolutional neural network architecture to detect stop lines.
arXiv Detail & Related papers (2020-09-18T21:29:06Z)
- Hearing What You Cannot See: Acoustic Vehicle Detection Around Corners [5.4960756528016335]
We show that approaching vehicles behind blind corners can be detected by sound before they enter the line of sight.
We equipped a research vehicle with a roof-mounted microphone array and demonstrate our approach on data collected with this sensor setup.
A novel method is presented to classify whether, and from what direction, a vehicle is approaching before it is visible.
arXiv Detail & Related papers (2020-07-30T20:57:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.