Emergent AI Surveillance: Overlearned Person Re-Identification and Its Mitigation in Law Enforcement Context
- URL: http://arxiv.org/abs/2510.06026v1
- Date: Tue, 07 Oct 2025 15:23:16 GMT
- Title: Emergent AI Surveillance: Overlearned Person Re-Identification and Its Mitigation in Law Enforcement Context
- Authors: An Thi Nguyen, Radina Stoykova, Eric Arazo
- Abstract summary: Generic instance search models can dramatically reduce the manual effort required to analyze vast surveillance footage during criminal investigations by retrieving specific objects of interest to law enforcement. However, our research reveals an unintended emergent capability: through overlearning, these models can single out specific individuals even when trained on datasets without human subjects. This capability raises concerns regarding identification and profiling of individuals based on their personal data.
- Score: 2.3124669700253553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generic instance search models can dramatically reduce the manual effort required to analyze vast surveillance footage during criminal investigations by retrieving specific objects of interest to law enforcement. However, our research reveals an unintended emergent capability: through overlearning, these models can single out specific individuals even when trained on datasets without human subjects. This capability raises concerns regarding identification and profiling of individuals based on their personal data, while there is currently no clear standard on how de-identification can be achieved. We evaluate two technical safeguards to curtail a model's person re-identification capacity: index exclusion and confusion loss. Our experiments demonstrate that combining these approaches can reduce person re-identification accuracy to below 2% while maintaining 82% of retrieval performance for non-person objects. However, we identify critical vulnerabilities in these mitigations, including potential circumvention using partial person images. These findings highlight urgent regulatory questions at the intersection of AI governance and data protection: How should we classify and regulate systems with emergent identification capabilities? And what technical standards should be required to prevent identification capabilities from developing in seemingly benign applications?
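The abstract names two safeguards: index exclusion (keeping person embeddings out of the retrieval index) and confusion loss (penalizing the model's ability to discriminate identities). The paper does not publish its implementation here, so the following is only a minimal illustrative sketch of the two ideas; the function names and the uniform-target formulation of confusion loss are assumptions, not the authors' code.

```python
import math

def confusion_loss(identity_logits):
    """Hypothetical confusion loss: cross-entropy between the model's
    predicted identity distribution and the uniform distribution.
    Minimizing it pushes identity predictions toward chance level,
    degrading the model's re-identification capacity."""
    # Numerically stable softmax over identity logits.
    m = max(identity_logits)
    exps = [math.exp(x - m) for x in identity_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    k = len(identity_logits)
    # Cross-entropy against the uniform target 1/k; its minimum is log(k),
    # reached when all identities are equally likely.
    return -sum((1.0 / k) * math.log(p) for p in probs)

def exclude_person_entries(index, is_person):
    """Hypothetical index exclusion: drop person embeddings from the
    search index so queries can never retrieve them."""
    return [(key, vec) for key, vec in index if not is_person(key)]
```

As a quick sanity check, uniform logits yield the minimum loss log(k), while a confidently identified individual yields a larger value, which is what a training objective would penalize.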
Related papers
- Privacy Preservation and Identity Tracing Prevention in AI-Driven Eye Tracking for Interactive Learning Environments [17.850299218352102]
Eye-tracking technology can aid in understanding neurodevelopmental disorders and tracing a person's identity. This paper proposes a human-centered framework designed to prevent identity backtracking while preserving the pedagogical benefits of AI-powered eye tracking.
arXiv Detail & Related papers (2025-09-04T13:08:06Z) - A False Sense of Privacy: Evaluating Textual Data Sanitization Beyond Surface-level Privacy Leakage [77.83757117924995]
We propose a new framework that evaluates re-identification attacks to quantify individual privacy risks upon data release. Our approach shows that seemingly innocuous auxiliary information can be used to infer sensitive attributes like age or substance use history from sanitized data.
arXiv Detail & Related papers (2025-04-28T01:16:27Z) - Disentangled Representations for Short-Term and Long-Term Person Re-Identification [33.76874948187976]
We propose a new generative adversarial network, dubbed identity shuffle GAN (IS-GAN)
It disentangles identity-related and unrelated features from person images through an identity-shuffling technique.
Experimental results validate the effectiveness of IS-GAN, showing state-of-the-art performance on standard reID benchmarks.
arXiv Detail & Related papers (2024-09-09T02:09:49Z) - Footprints of Data in a Classifier: Understanding the Privacy Risks and Solution Strategies [0.9208007322096533]
Article 17 of the General Data Protection Regulation (Right to Erasure) requires data to be permanently removed from a system to prevent potential compromise. One such issue arises from the residual footprints of training data embedded within predictive models. This study examines how two fundamental aspects of classifier systems - training quality and classifier training methodology - contribute to privacy vulnerabilities.
arXiv Detail & Related papers (2024-07-02T13:56:37Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - TeD-SPAD: Temporal Distinctiveness for Self-supervised Privacy-preservation for video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z) - Disguise without Disruption: Utility-Preserving Face De-Identification [40.484745636190034]
We introduce Disguise, a novel algorithm that seamlessly de-identifies facial images while ensuring the usability of the modified data.
Our method involves extracting and substituting depicted identities with synthetic ones, generated using variational mechanisms to maximize obfuscation and non-invertibility.
We extensively evaluate our method using multiple datasets, demonstrating a higher de-identification rate and superior consistency compared to prior approaches in various downstream tasks.
arXiv Detail & Related papers (2023-03-23T13:50:46Z) - AI-based Re-identification of Behavioral Clickstream Data [0.0]
This paper demonstrates that similar techniques can be applied to successfully re-identify individuals purely based on their behavioral patterns.
The mere resemblance of behavioral patterns between records is sufficient to correctly attribute behavioral data to identified individuals.
We also demonstrate how synthetic data can offer a viable alternative, that is shown to be resilient against our introduced AI-based re-identification attacks.
arXiv Detail & Related papers (2022-01-21T16:49:00Z) - RealGait: Gait Recognition for Person Re-Identification [79.67088297584762]
We construct a new gait dataset by extracting silhouettes from an existing video person re-identification challenge which consists of 1,404 persons walking in an unconstrained manner.
Our results suggest that recognizing people by their gait in real surveillance scenarios is feasible and the underlying gait pattern is probably the true reason why video person re-identification works in practice.
arXiv Detail & Related papers (2022-01-13T06:30:56Z) - Unsupervised Person Re-Identification: A Systematic Survey of Challenges and Solutions [64.68497473454816]
Unsupervised person Re-ID has drawn increasing attention for its potential to address the scalability issue in person Re-ID.
Unsupervised person Re-ID is challenging primarily due to lacking identity labels to supervise person feature learning.
This survey reviews recent works on unsupervised person Re-ID from the perspective of challenges and solutions.
arXiv Detail & Related papers (2021-09-01T00:01:35Z) - Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects individuals' sensitive information while still learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z) - The P-DESTRE: A Fully Annotated Dataset for Pedestrian Detection, Tracking, Re-Identification and Search from Aerial Devices [7.095987222706225]
This paper introduces the P-DESTRE dataset, which is the first of its kind to provide consistent ID annotations across multiple days.
We also compare the results attained by state-of-the-art pedestrian detection, tracking, re-identification and search techniques on well-known surveillance datasets.
The dataset and the full details of the empirical evaluation carried out are freely available at http://p-destre.di.ubi.pt/.
arXiv Detail & Related papers (2020-04-06T16:17:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.