Can poachers find animals from public camera trap images?
- URL: http://arxiv.org/abs/2106.11236v1
- Date: Mon, 21 Jun 2021 16:31:47 GMT
- Title: Can poachers find animals from public camera trap images?
- Authors: Sara Beery, Elizabeth Bondi
- Abstract summary: We investigate the robustness of geo-obfuscation for maintaining camera trap location privacy.
A few simple, intuitive heuristics and publicly available satellite rasters can be used to reduce the area likely to contain the camera by 87%.
- Score: 14.61316451496861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To protect the location of camera trap data containing sensitive, high-target
species, many ecologists randomly obfuscate the latitude and longitude of the
camera when publishing their data. For example, they may publish a random
location within a 1km radius of the true camera location for each camera in
their network. In this paper, we investigate the robustness of geo-obfuscation
for maintaining camera trap location privacy, and show via a case study that a
few simple, intuitive heuristics and publicly available satellite rasters can
be used to reduce the area likely to contain the camera by 87% (assuming random
obfuscation within 1km), demonstrating that geo-obfuscation may be less
effective than previously believed.
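The obfuscation scheme the abstract describes, publishing a uniformly random point within a 1 km radius of the true camera, can be sketched as below. This is an illustrative reconstruction, not code from the paper; the coordinates and helper names are hypothetical:

```python
import math
import random

def obfuscate(lat, lon, radius_m=1000.0):
    """Return a random point within radius_m metres of (lat, lon).

    Mirrors the obfuscation scheme described in the abstract; uses a
    locally flat approximation, which is adequate at the ~1 km scale.
    """
    r = radius_m * math.sqrt(random.random())  # sqrt gives uniform density over the disk
    theta = random.uniform(0.0, 2.0 * math.pi)
    m_per_deg_lat = 111_320.0                  # approximate metres per degree of latitude
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(lat))
    return (lat + r * math.cos(theta) / m_per_deg_lat,
            lon + r * math.sin(theta) / m_per_deg_lon)

def distance_m(lat1, lon1, lat2, lon2):
    """Equirectangular distance in metres; adequate at kilometre scale."""
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians((lat1 + lat2) / 2.0))
    return math.hypot((lat1 - lat2) * m_per_deg_lat,
                      (lon1 - lon2) * m_per_deg_lon)

# Hypothetical camera site: an attacker who sees only the published point
# knows the camera lies in a disk of area pi * (1 km)^2 around it.
true_lat, true_lon = -1.2921, 36.8219
pub_lat, pub_lon = obfuscate(true_lat, true_lon)
```

The paper's point is that the attacker's prior need not stay uniform over that disk: ruling out implausible pixels in public satellite rasters (e.g., open water or unsuitable terrain, per the abstract's "simple, intuitive heuristics") can shrink the candidate area by roughly 87%, to about 0.13 of the original disk.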
Related papers
- CamLoPA: A Hidden Wireless Camera Localization Framework via Signal Propagation Path Analysis [59.86280992504629]
CamLoPA is a training-free wireless camera detection and localization framework.
It operates with minimal activity space constraints using low-cost commercial-off-the-shelf (COTS) devices.
It achieves 95.37% snooping camera detection accuracy and an average localization error of 17.23 under significantly reduced activity space requirements.
arXiv Detail & Related papers (2024-09-23T16:23:50Z) - Zone Evaluation: Revealing Spatial Bias in Object Detection [69.59295428233844]
A fundamental limitation of object detectors is that they suffer from "spatial bias".
We present a new zone evaluation protocol, which measures the detection performance over zones.
For the first time, we provide numerical results, showing that the object detectors perform quite unevenly across the zones.
arXiv Detail & Related papers (2023-10-20T01:44:49Z) - Privacy-Preserving Representations are not Enough -- Recovering Scene Content from Camera Poses [63.12979986351964]
Existing work on privacy-preserving localization aims to defend against an attacker who has access to a cloud-based service.
We show that an attacker can learn details of a scene, without any such access, simply by querying a localization service.
arXiv Detail & Related papers (2023-05-08T10:25:09Z) - Privacy-Preserving Visual Localization with Event Cameras [13.21898697942957]
Event cameras can potentially enable robust localization due to their high dynamic range and low motion blur.
We propose applying event-to-image conversion prior to localization, which leads to stable localization.
From a privacy perspective, event cameras capture only a fraction of the visual information captured by normal cameras.
arXiv Detail & Related papers (2022-12-04T07:22:17Z) - Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities [66.4525391417921]
We design and evaluate a multi-sensor drone detection system.
Our solution integrates a fish-eye camera as well to monitor a wider part of the sky and steer the other cameras towards objects of interest.
The thermal camera is shown to be a feasible solution as good as the video camera, even if the camera employed here has a lower resolution.
arXiv Detail & Related papers (2022-07-05T10:00:58Z) - Safeguarding National Security Interests Utilizing Location-Aware Camera Devices [0.0]
We propose a Global Positioning System-based approach to restrict the ability of smart cameras to capture and store images of sensitive areas.
arXiv Detail & Related papers (2022-05-06T16:06:37Z) - Deep Learning Approach Protecting Privacy in Camera-Based Critical Applications [57.93313928219855]
We propose a deep learning approach towards protecting privacy in camera-based systems.
Our technique distinguishes between salient (visually prominent) and non-salient objects based on the intuition that the latter is unlikely to be needed by the application.
arXiv Detail & Related papers (2021-10-04T19:16:27Z) - Cross-Camera Feature Prediction for Intra-Camera Supervised Person Re-identification across Distant Scenes [70.30052164401178]
Person re-identification (Re-ID) aims to match person images across non-overlapping camera views.
ICS-DS Re-ID uses cross-camera unpaired data with intra-camera identity labels for training.
A cross-camera feature prediction method mines cross-camera self-supervision information.
Joint learning of global-level and local-level features forms a global-local cross-camera feature prediction scheme.
arXiv Detail & Related papers (2021-07-29T11:27:50Z) - How low can you go? Privacy-preserving people detection with an omni-directional camera [2.433293618209319]
In this work, we use a ceiling-mounted omni-directional camera to detect people in a room.
This can be used as a sensor to measure the occupancy of meeting rooms and count the number of available flex-desk working spaces.
arXiv Detail & Related papers (2020-07-09T10:10:23Z) - The iWildCam 2020 Competition Dataset [9.537627294351292]
Camera traps enable the automatic collection of large quantities of image data.
We have recently been making strides towards automatic species classification in camera trap images.
We have prepared a challenge where the training data and test data are from different cameras spread across the globe.
The challenge is to correctly classify species in the test camera traps.
arXiv Detail & Related papers (2020-04-21T23:25:13Z) - On Localizing a Camera from a Single Image [9.049593493956008]
We show that it is possible to estimate the location of a camera from a single image taken by the camera.
We show that, using a judicious combination of projective geometry, neural networks, and crowd-sourced annotations from human workers, it is possible to position 95% of the images in our test data set to within 12 m.
arXiv Detail & Related papers (2020-03-24T05:09:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.