Surveilling Surveillance: Estimating the Prevalence of Surveillance
Cameras with Street View Data
- URL: http://arxiv.org/abs/2105.01764v1
- Date: Tue, 4 May 2021 21:06:01 GMT
- Title: Surveilling Surveillance: Estimating the Prevalence of Surveillance
Cameras with Street View Data
- Authors: Hao Sheng, Keniel Yao, Sharad Goel
- Abstract summary: We build a camera detection model and apply it to 1.6 million street view images sampled from 10 large U.S. cities and 6 other major cities around the world.
After adjusting for the estimated recall of our model, we are able to estimate the density of surveillance cameras visible from the road.
In a detailed analysis of the 10 U.S. cities, we find that cameras are concentrated in commercial, industrial, and mixed zones, and in neighborhoods with higher shares of non-white residents.
- Score: 13.77902229604303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The use of video surveillance in public spaces -- both by government agencies
and by private citizens -- has attracted considerable attention in recent
years, particularly in light of rapid advances in face-recognition technology.
But it has been difficult to systematically measure the prevalence and
placement of cameras, hampering efforts to assess the implications of
surveillance on privacy and public safety. Here we present a novel approach for
estimating the spatial distribution of surveillance cameras: applying computer
vision algorithms to large-scale street view image data. Specifically, we build
a camera detection model and apply it to 1.6 million street view images sampled
from 10 large U.S. cities and 6 other major cities around the world, with
positive model detections verified by human experts. After adjusting for the
estimated recall of our model, and accounting for the spatial coverage of our
sampled images, we are able to estimate the density of surveillance cameras
visible from the road. Across the 16 cities we consider, the estimated number
of surveillance cameras per linear kilometer ranges from 0.1 (in Seattle) to
0.9 (in Seoul). In a detailed analysis of the 10 U.S. cities, we find that
cameras are concentrated in commercial, industrial, and mixed zones, and in
neighborhoods with higher shares of non-white residents -- a pattern that
persists even after adjusting for land use. These results help inform ongoing
discussions on the use of surveillance technology, including its potential
disparate impacts on communities of color.
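The estimation strategy described in the abstract reduces to a simple recall correction. The sketch below is a minimal illustration, not the authors' released code: the function name and the example numbers are invented, and the real analysis additionally accounts for the spatial coverage of the sampled images.
```python
def estimated_camera_density(verified_detections: int,
                             model_recall: float,
                             sampled_road_km: float) -> float:
    """Recall-adjusted surveillance-camera density per linear km of road.

    Human verification of the model's positive detections pushes
    precision toward 1, so dividing by the estimated recall corrects
    for the cameras the detector missed.
    """
    if not 0.0 < model_recall <= 1.0:
        raise ValueError("model_recall must be in (0, 1]")
    estimated_true_cameras = verified_detections / model_recall
    return estimated_true_cameras / sampled_road_km


# Illustrative numbers only: 240 verified detections at recall 0.6
# over 4,000 km of sampled road gives 0.1 cameras per km.
print(estimated_camera_density(240, 0.6, 4_000))
```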
Related papers
- Analysis of Unstructured High-Density Crowded Scenes for Crowd Monitoring [55.2480439325792]
We are interested in developing an automated system for detection of organized movements in human crowds.
Computer vision algorithms can extract information from videos of crowded scenes.
We can estimate the number of participants in an organized cohort.
arXiv Detail & Related papers (2024-08-06T22:09:50Z)
- Bullying10K: A Large-Scale Neuromorphic Dataset towards Privacy-Preserving Bullying Recognition [8.6837371869842]
We leverage Dynamic Vision Sensor (DVS) cameras to detect violent incidents while preserving privacy, since they capture pixel brightness variations instead of static imagery.
With 10,000 event segments, totaling 12 billion events and 255 GB of data, Bullying10K contributes significantly by balancing violence detection with personal privacy preservation.
It will serve as a valuable resource for training and developing privacy-protecting video systems.
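The privacy argument rests on how a DVS represents its output. As a hedged sketch (the class and helper below are illustrative, not part of the Bullying10K API), an event stream is just a list of sparse tuples rather than image frames:
```python
from dataclasses import dataclass

import numpy as np


@dataclass
class DVSEvent:
    """One asynchronous event from a Dynamic Vision Sensor.

    A DVS pixel fires only when brightness at (x, y) changes by more
    than a threshold, so no full intensity frame -- and thus no
    readily identifiable face image -- is ever recorded.
    """
    t: int         # timestamp in microseconds
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 for a brightness increase, -1 for a decrease


def event_count_frame(events, height, width):
    """Accumulate events into a 2-D count map, a common way to feed
    sparse event streams into a conventional CNN."""
    frame = np.zeros((height, width), dtype=np.int32)
    for e in events:
        frame[e.y, e.x] += 1
    return frame
```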
arXiv Detail & Related papers (2023-06-20T13:59:20Z)
- CCTV-Gun: Benchmarking Handgun Detection in CCTV Images [59.24281591714385]
Gun violence is a critical security problem, and it is imperative for the computer vision community to develop effective gun detection algorithms.
Detecting guns in real-world CCTV images remains a challenging and under-explored task.
We present a benchmark, called CCTV-Gun, which addresses the challenges of detecting handguns in real-world CCTV images.
arXiv Detail & Related papers (2023-03-19T16:17:35Z)
- DensePose From WiFi [86.61881052177228]
We develop a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions.
Our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches.
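To make the input/output mapping concrete, here is a deliberately toy PyTorch sketch; every shape and layer is an assumption chosen for illustration and bears no relation to the paper's actual architecture:
```python
import torch
import torch.nn as nn


class WiFiToUV(nn.Module):
    """Toy regressor from WiFi channel features to dense UV maps.

    Invented shapes: amplitude and phase for 3 antennas x 30
    subcarriers in, a (24, 2, H, W) stack of per-body-part UV
    coordinates out. The real model is far more elaborate.
    """

    def __init__(self, in_dim=2 * 3 * 30, parts=24, h=28, w=28):
        super().__init__()
        self.parts, self.h, self.w = parts, h, w
        self.net = nn.Sequential(
            nn.Linear(in_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, parts * 2 * h * w),
        )

    def forward(self, csi):  # csi: (batch, in_dim)
        out = self.net(csi)
        return out.view(-1, self.parts, 2, self.h, self.w)
```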
arXiv Detail & Related papers (2022-12-31T16:48:43Z)
- CCTV-Exposure: An open-source system for measuring user's privacy exposure to mapped CCTV cameras based on geo-location (Extended Version) [0.90238471756546]
We present CCTV-Exposure, the first CCTV-aware solution to evaluate potential privacy exposure to closed-circuit television (CCTV) cameras.
The objective was to develop a toolset for quantifying human exposure to CCTV cameras from a privacy perspective.
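One plausible core of such a toolset is intersecting a GPS trajectory with the coverage areas of mapped cameras. The sketch below is an assumption-laden illustration (the function names and the fixed 25 m coverage radius are invented), not the system's actual code:
```python
from math import asin, cos, radians, sin, sqrt


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6_371_000.0  # mean Earth radius
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * r * asin(sqrt(a))


def exposed_fixes(trajectory, cameras, coverage_radius_m=25.0):
    """Count GPS fixes that fall inside any mapped camera's coverage.

    trajectory and cameras are lists of (lat, lon) pairs; a fixed
    circular coverage radius is a simplifying assumption.
    """
    return sum(
        any(haversine_m(plat, plon, clat, clon) <= coverage_radius_m
            for clat, clon in cameras)
        for plat, plon in trajectory
    )
```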
arXiv Detail & Related papers (2022-07-02T14:43:44Z)
- PrivHAR: Recognizing Human Actions From Privacy-preserving Lens [58.23806385216332]
We propose an optimizing framework to provide robust visual privacy protection along the human action recognition pipeline.
Our framework parameterizes the camera lens to successfully degrade the quality of the videos to inhibit privacy attributes and protect against adversarial attacks.
arXiv Detail & Related papers (2022-06-08T13:43:29Z)
- On the Complexity of Object Detection on Real-world Public Transportation Images for Social Distancing Measurement [0.8347190888362194]
Social distancing in public spaces has become an essential aspect in helping to reduce the impact of the COVID-19 pandemic.
However, there has been no study of social distancing measurement on public transport.
We benchmark several state-of-the-art object detection algorithms using real-world footage taken from the London Underground and bus network.
arXiv Detail & Related papers (2022-02-14T11:47:26Z)
- BEV-Net: Assessing Social Distancing Compliance by Joint People Localization and Geometric Reasoning [77.08836528980248]
Social distancing, an essential public health measure, has gained significant attention since the outbreak of the COVID-19 pandemic.
In this work, the problem of visual social distancing compliance assessment in busy public areas with wide field-of-view cameras is considered.
A dataset of crowd scenes with people annotations under a bird's eye view (BEV) and ground truth for metric distances is introduced.
A multi-branch network, BEV-Net, is proposed to localize individuals in world coordinates and identify high-risk regions where social distancing is violated.
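Once people are localized in world coordinates, the compliance check itself is elementary geometry. A minimal sketch, assuming a 2 m threshold (not taken from the paper):
```python
from itertools import combinations
from math import dist


def distancing_violations(positions_m, min_separation_m=2.0):
    """Flag pairs of people standing closer than the threshold.

    positions_m holds (x, y) ground-plane coordinates in meters, e.g.
    as produced by a bird's-eye-view localization network.
    """
    return [
        (i, j)
        for (i, p), (j, q) in combinations(enumerate(positions_m), 2)
        if dist(p, q) < min_separation_m
    ]


# Example: the first two people stand 1.5 m apart.
print(distancing_violations([(0.0, 0.0), (1.5, 0.0), (10.0, 0.0)]))
# -> [(0, 1)]
```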
arXiv Detail & Related papers (2021-10-10T23:56:37Z)
- Safety-Oriented Pedestrian Motion and Scene Occupancy Forecasting [91.69900691029908]
We advocate for predicting both the individual motions as well as the scene occupancy map.
We propose a Scene-Actor Graph Neural Network (SA-GNN) which preserves the relative spatial information of pedestrians.
On two large-scale real-world datasets, we showcase that our scene-occupancy predictions are more accurate and better calibrated than those from state-of-the-art motion forecasting methods.
arXiv Detail & Related papers (2021-01-07T06:08:21Z)
- Analyzing Worldwide Social Distancing through Large-Scale Computer Vision [2.9933334099811546]
In order to contain the COVID-19 pandemic, countries around the world have introduced social distancing guidelines.
Traditional observational methods such as in-person reporting are dangerous because observers risk infection.
We have created methods that can discover thousands of network cameras worldwide.
arXiv Detail & Related papers (2020-08-27T20:20:11Z)
- Unsupervised Vehicle Counting via Multiple Camera Domain Adaptation [9.730985797769764]
Monitoring vehicle flows in cities is crucial to improve the urban environment and quality of life of citizens.
Current technologies for vehicle counting in images hinge on large quantities of annotated data, preventing their scalability to city-scale as new cameras are added to the system.
We propose and discuss a new methodology to design image-based vehicle density estimators with few labeled data via multiple camera domain adaptations.
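Density estimators of this kind typically regress a per-pixel map whose integral is the count; that formulation (an assumption here, since the summary does not spell it out) is easy to sketch:
```python
import numpy as np


def count_from_density_map(density: np.ndarray) -> float:
    """The predicted count is the integral (sum) of the density map."""
    return float(density.sum())


def gaussian_target(points, shape, sigma=4.0):
    """Ground-truth density map: one normalized Gaussian per annotated
    vehicle center, so the target integrates to the true count."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    target = np.zeros(shape, dtype=np.float64)
    for cy, cx in points:
        g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        target += g / g.sum()
    return target


# Example: two annotated vehicles -> the map sums to ~2.0.
target = gaussian_target([(20, 30), (40, 50)], (64, 64))
print(round(count_from_density_map(target), 2))  # ~2.0
```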
arXiv Detail & Related papers (2020-04-20T13:00:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.