Privacy Aware Person Detection in Surveillance Data
- URL: http://arxiv.org/abs/2110.15171v1
- Date: Thu, 28 Oct 2021 14:49:21 GMT
- Title: Privacy Aware Person Detection in Surveillance Data
- Authors: Sander De Coninck, Sam Leroux, Pieter Simoens
- Abstract summary: Crowd management relies on inspection of surveillance video either by operators or by object detection models.
Transferring video from the camera to remote infrastructure may open the door to extracting additional information that infringes privacy.
In this paper, we use adversarial training to obtain a lightweight obfuscator that transforms video frames to only retain the necessary information for person detection.
- Score: 4.727475863373813
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Crowd management relies on inspection of surveillance video either by
operators or by object detection models. These models are large, making it
difficult to deploy them on resource constrained edge hardware. Instead, the
computations are often offloaded to a (third party) cloud platform. While crowd
management may be a legitimate application, transferring video from the camera
to remote infrastructure may open the door to extracting additional
information that infringes privacy, such as person tracking or face
recognition. In this paper, we use adversarial training to obtain a lightweight
obfuscator that transforms video frames to only retain the necessary
information for person detection. Importantly, the obfuscated data can be
processed by publicly available object detectors without retraining and without
significant loss of accuracy.
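The adversarial setup described in the abstract can be sketched as a minimal two-player game. The toy below is illustrative only (the paper trains deep networks on real video frames): a linear obfuscator y = Wx must keep a task feature x0 ("person present") readable while a linear adversary tries to recover a private feature x1 ("identity"). Losses are closed-form expectations over x ~ N(0, I), so the loop is deterministic. The privacy term here pushes the adversary's output toward the uninformative mean, one common way to stabilize such minimax training; all parameter names and values are assumptions, not taken from the paper.

```python
# Toy sketch of adversarial obfuscation: keep the task feature x0
# readable, make the private feature x1 unrecoverable by a probe.
def train(steps=400, lr_adv=0.1, lr_obf=0.05, lam=0.5):
    a, b, c, d = 1.0, 0.5, 0.5, 1.0  # obfuscator W = [[a, b], [c, d]]
    v0, v1 = 0.0, 0.0                # adversary's linear probe on y = Wx
    for _ in range(steps):
        # Coefficients of the adversary's prediction v.y = alpha*x0 + beta*x1
        alpha = v0 * a + v1 * c
        beta = v0 * b + v1 * d
        # Adversary step: minimize E[(v.y - x1)^2] = alpha^2 + (beta - 1)^2
        v0 -= lr_adv * (2 * alpha * a + 2 * (beta - 1) * b)
        v1 -= lr_adv * (2 * alpha * c + 2 * (beta - 1) * d)
        # Obfuscator step: keep x0 readable, push the adversary's output
        # toward the uninformative mean:
        #   L = E[(y0 - x0)^2] + lam * E[(v.y)^2]
        alpha = v0 * a + v1 * c
        beta = v0 * b + v1 * d
        a -= lr_obf * (2 * (a - 1) + 2 * lam * alpha * v0)
        b -= lr_obf * (2 * b + 2 * lam * beta * v0)
        c -= lr_obf * (2 * lam * alpha * v1)
        d -= lr_obf * (2 * lam * beta * v1)
    return a, b, c, d

a, b, c, d = train()
print(f"W = [[{a:.3f}, {b:.3f}], [{c:.3f}, {d:.3f}]]")
# a stays near 1 (task signal kept); b and d shrink toward 0
# (the identity feature no longer reaches the output)
```

After training, the coefficients that carry x1 into the output (b and d) collapse toward zero while the task coefficient a stays near 1, so even a freshly fitted probe can no longer read the private feature, mirroring the paper's goal of frames that remain usable for person detection but little else.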
Related papers
- Secure Visual Data Processing via Federated Learning [2.4374097382908477]
This paper addresses the need for privacy-preserving solutions in large-scale visual data processing.
We propose a new approach that combines object detection, federated learning and anonymization.
Our solution is evaluated against traditional centralized models, showing that while there is a slight trade-off in accuracy, the privacy benefits are substantial.
arXiv Detail & Related papers (2025-02-09T09:44:18Z) - PV-VTT: A Privacy-Centric Dataset for Mission-Specific Anomaly Detection and Natural Language Interpretation [5.0923114224599555]
We present PV-VTT (Privacy Violation Video To Text), a unique multimodal dataset aimed at identifying privacy violations.
PV-VTT provides detailed annotations for both the video and the accompanying text in each scenario.
This privacy-focused approach allows researchers to use the dataset while protecting participant confidentiality.
arXiv Detail & Related papers (2024-10-30T01:02:20Z) - Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of face presence in a video.
We employ our approach to analyze videos in which multiple faces are present simultaneously.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generative technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods train centrally on directly pooled data.
The paper proposes a novel federated face forgery detection learning scheme with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - Region of Interest Loss for Anonymizing Learned Image Compression [3.0936354370614607]
We show how to achieve sufficient anonymization such that human faces become unrecognizable while persons are kept detectable.
This approach enables compression and anonymization in one step on the capture device, instead of transmitting sensitive, nonanonymized data over the network.
arXiv Detail & Related papers (2024-06-09T10:36:06Z) - Privacy Side Channels in Machine Learning Systems [87.53240071195168]
We introduce privacy side channels: attacks that exploit system-level components to extract private information.
For example, we show that deduplicating training data before applying differentially-private training creates a side-channel that completely invalidates any provable privacy guarantees.
We further show that systems which block language models from regenerating training data can be exploited to exfiltrate private keys contained in the training set.
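The deduplication side channel mentioned above can be made concrete with a toy (the actual attack targets near-duplicate deduplication in real ML pipelines; the dedup policy and record strings below are hypothetical): an attacker injects a canary copy of the record whose membership they want to test, and whether the canary survives deduplication depends on the victim's data before any differentially-private noise is ever added.

```python
from collections import Counter

def dedup_exact(records):
    # Hypothetical dedup policy: drop every record that occurs more than
    # once (all copies removed), as some pipelines do to curb memorization.
    counts = Counter(records)
    return [r for r in records if counts[r] == 1]

# Attacker's canary: an exact copy of the record under test.
canary = "jane.doe@example.com,visited-clinic"

victim_with_target = ["jane.doe@example.com,visited-clinic", "other-row"]
victim_without_target = ["other-row"]

train_a = dedup_exact(victim_with_target + [canary])     # canary collides -> dropped
train_b = dedup_exact(victim_without_target + [canary])  # canary survives

print(canary in train_a, canary in train_b)  # → False True
```

Because the deduplicated training set differs in the attacker-controlled canary depending on the target's membership, a privacy analysis that only accounts for the training step misses this leakage, which is the system-level point the paper makes.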
arXiv Detail & Related papers (2023-09-11T16:49:05Z) - Audio-Visual Person-of-Interest DeepFake Detection [77.04789677645682]
The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world.
We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity.
Our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos.
arXiv Detail & Related papers (2022-04-06T20:51:40Z) - SPAct: Self-supervised Privacy Preservation for Action Recognition [73.79886509500409]
Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments of self-supervised learning (SSL) have unleashed the untapped potential of the unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels.
arXiv Detail & Related papers (2022-03-29T02:56:40Z) - Statistical Feature-based Personal Information Detection in Mobile Network Traffic [13.568975395946433]
In this paper, statistical features of personal information are designed to depict the occurrence patterns of personal information in the traffic.
A detector is trained based on machine learning algorithms to discover potential personal information with similar patterns.
As far as we know, this is the first work that detects personal information based on statistical features.
arXiv Detail & Related papers (2021-12-23T04:01:16Z) - Robust Privacy-Preserving Motion Detection and Object Tracking in Encrypted Streaming Video [39.453548972987015]
We propose an efficient and robust privacy-preserving motion detection and multiple object tracking scheme for encrypted surveillance video bitstreams.
Our scheme achieves the best detection and tracking performance compared with existing works in the encrypted and compressed domain.
Our scheme can be effectively used in complex surveillance scenarios with different challenges, such as camera movement/jitter, dynamic background, and shadows.
arXiv Detail & Related papers (2021-08-30T11:58:19Z) - Do Not Deceive Your Employer with a Virtual Background: A Video Conferencing Manipulation-Detection System [35.82676654231492]
We study the feasibility of an efficient tool to detect whether a video-conferencing user's background is real.
Our experiments confirm that cross co-occurrence matrices improve the robustness of the detector against different kinds of attacks.
arXiv Detail & Related papers (2021-06-29T07:31:21Z) - Privacy-sensitive Objects Pixelation for Live Video Streaming [52.83247667841588]
We propose a novel Privacy-sensitive Objects Pixelation (PsOP) framework for automatic personal privacy filtering during live video streaming.
Our PsOP is extendable to any potential privacy-sensitive objects pixelation.
Besides boosting pixelation accuracy, experiments on the streaming video data we built show that the proposed PsOP significantly reduces the over-pixelation ratio for privacy-sensitive objects.
arXiv Detail & Related papers (2021-01-03T11:07:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.