CamLoPA: A Hidden Wireless Camera Localization Framework via Signal Propagation Path Analysis
- URL: http://arxiv.org/abs/2409.15169v1
- Date: Mon, 23 Sep 2024 16:23:50 GMT
- Title: CamLoPA: A Hidden Wireless Camera Localization Framework via Signal Propagation Path Analysis
- Authors: Xiang Zhang, Jie Zhang, Zehua Ma, Jinyang Huang, Meng Li, Huan Yan, Peng Zhao, Zijian Zhang, Qing Guo, Tianwei Zhang, Bin Liu, Nenghai Yu
- Abstract summary: CamLoPA is a training-free wireless camera detection and localization framework.
It operates with minimal activity space constraints using low-cost commercial-off-the-shelf (COTS) devices.
It achieves 95.37% snooping-camera detection accuracy and an average localization error of 17.23°, under significantly reduced activity-space requirements.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hidden wireless cameras pose significant privacy threats, necessitating effective detection and localization methods. However, existing solutions often require spacious activity areas, expensive specialized devices, or pre-collected training data, limiting their practical deployment. To address these limitations, we introduce CamLoPA, a training-free wireless camera detection and localization framework that operates with minimal activity space constraints using low-cost commercial-off-the-shelf (COTS) devices. CamLoPA can achieve detection and localization in just 45 seconds of user activities with a Raspberry Pi board. During this short period, it analyzes the causal relationship between the wireless traffic and user movement to detect the presence of a snooping camera. Upon detection, CamLoPA employs a novel azimuth location model based on wireless signal propagation path analysis. Specifically, this model leverages the time ratio of user paths crossing the First Fresnel Zone (FFZ) to determine the azimuth angle of the camera. Then CamLoPA refines the localization by identifying the camera's quadrant. We evaluate CamLoPA across various devices and environments, demonstrating that it achieves 95.37% snooping camera detection accuracy and an average localization error of 17.23°, under significantly reduced activity space requirements. Our demo is available at https://www.youtube.com/watch?v=GKam04FzeM4.
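The First Fresnel Zone geometry underlying the azimuth model can be illustrated with a small sketch. The FFZ radius formula below is the standard one for a transmitter-receiver link; the chord-based azimuth recovery, however, is a simplified hypothetical stand-in for CamLoPA's actual time-ratio analysis, which the abstract does not specify in detail (function names and parameters are illustrative assumptions):

```python
import math

def first_fresnel_radius(wavelength, d1, d2):
    """Standard First Fresnel Zone radius at a point whose distances to
    the two link endpoints (e.g. hidden camera and receiver) are d1, d2.
    All quantities in meters; 2.4 GHz Wi-Fi has wavelength ~0.125 m."""
    return math.sqrt(wavelength * d1 * d2 / (d1 + d2))

def azimuth_from_crossing_time(t_cross, walk_speed, ffz_width):
    """Hypothetical illustration of a time-ratio idea: a user walking at
    constant speed cuts a chord of length walk_speed * t_cross through
    the FFZ. If the zone's local width is ffz_width, the angle between
    the walking path and the camera-receiver axis roughly satisfies
    sin(theta) = ffz_width / chord. Returns theta in degrees."""
    chord = walk_speed * t_cross
    ratio = min(1.0, ffz_width / chord)
    return math.degrees(math.asin(ratio))
```

For example, with a 2.4 GHz link and the user 2 m from each endpoint, `first_fresnel_radius(0.125, 2.0, 2.0)` gives roughly 0.35 m, and a longer crossing time (a more oblique chord) maps to a smaller azimuth angle.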
Related papers
- PathFinder: Attention-Driven Dynamic Non-Line-of-Sight Tracking with a Mobile Robot [3.387892563308912]
We introduce a novel approach to process a sequence of dynamic successive frames in a line-of-sight (LOS) video using an attention-based neural network.
We validate the approach on in-the-wild scenes using a drone for video capture, thus demonstrating low-cost NLOS imaging in dynamic capture environments.
arXiv Detail & Related papers (2024-04-07T17:31:53Z)
- A deep learning approach to predict the number of k-barriers for intrusion detection over a circular region using wireless sensor networks [3.6748639131154315]
Wireless Sensor Networks (WSNs) can be a feasible solution for the problem of intrusion detection and surveillance at the border areas.
In this paper, we have proposed a deep learning architecture based on a fully connected feed-forward Artificial Neural Network (ANN) for the accurate prediction of the number of k-barriers for fast intrusion detection and prevention.
arXiv Detail & Related papers (2022-08-25T06:39:29Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automatizes the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-08-08T07:25:03Z)
- Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities [66.4525391417921]
We design and evaluate a multi-sensor drone detection system.
Our solution integrates a fish-eye camera as well to monitor a wider part of the sky and steer the other cameras towards objects of interest.
The thermal camera is shown to be a feasible solution as good as the video camera, even if the camera employed here has a lower resolution.
arXiv Detail & Related papers (2022-07-05T10:00:58Z)
- RADNet: A Deep Neural Network Model for Robust Perception in Moving Autonomous Systems [8.706086688708014]
We develop a novel ranking method to rank videos based on the degree of global camera motion.
For the high ranking camera videos we show that the accuracy of action detection is decreased.
We propose an action detection pipeline that is robust to the camera motion effect and verify it empirically.
arXiv Detail & Related papers (2022-04-30T23:14:08Z)
- Cross-Camera Trajectories Help Person Retrieval in a Camera Network [124.65912458467643]
Existing methods often rely on purely visual matching or consider temporal constraints but ignore the spatial information of the camera network.
We propose a pedestrian retrieval framework based on cross-camera generation, which integrates both temporal and spatial information.
To verify the effectiveness of our method, we construct the first cross-camera pedestrian trajectory dataset.
arXiv Detail & Related papers (2022-04-27T13:10:48Z)
- Implicit Motion Handling for Video Camouflaged Object Detection [60.98467179649398]
We propose a new video camouflaged object detection (VCOD) framework.
It can exploit both short-term and long-term temporal consistency to detect camouflaged objects from video frames.
arXiv Detail & Related papers (2022-03-14T17:55:41Z)
- Cross-Camera Feature Prediction for Intra-Camera Supervised Person Re-identification across Distant Scenes [70.30052164401178]
Person re-identification (Re-ID) aims to match person images across non-overlapping camera views.
ICS-DS Re-ID uses cross-camera unpaired data with intra-camera identity labels for training.
A cross-camera feature prediction method mines cross-camera self-supervision information.
Joint learning of global-level and local-level features forms a global-local cross-camera feature prediction scheme.
arXiv Detail & Related papers (2021-07-29T11:27:50Z)
- Holistic Grid Fusion Based Stop Line Estimation [5.5476621209686225]
Knowing where to stop in advance in an intersection is an essential parameter in controlling the longitudinal velocity of the vehicle.
Most of the existing methods in literature solely use cameras to detect stop lines, which is typically not sufficient in terms of detection range.
We propose a method that takes advantage of fused multi-sensory data including stereo camera and lidar as input and utilizes a carefully designed convolutional neural network architecture to detect stop lines.
arXiv Detail & Related papers (2020-09-18T21:29:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.