SPAct: Self-supervised Privacy Preservation for Action Recognition
- URL: http://arxiv.org/abs/2203.15205v1
- Date: Tue, 29 Mar 2022 02:56:40 GMT
- Title: SPAct: Self-supervised Privacy Preservation for Action Recognition
- Authors: Ishan Rajendrakumar Dave, Chen Chen, Mubarak Shah
- Abstract summary: Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments in self-supervised learning (SSL) have unleashed the untapped potential of unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels.
- Score: 73.79886509500409
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual private information leakage is an emerging key issue for the
fast-growing applications of video understanding, such as activity recognition. Existing
approaches for mitigating privacy leakage in action recognition require privacy
labels along with the action labels from the video dataset. However, annotating
frames of a video dataset with privacy labels is not feasible. Recent developments
in self-supervised learning (SSL) have unleashed the untapped potential of
unlabeled data. For the first time, we present a novel training framework which
removes privacy information from input video in a self-supervised manner
without requiring privacy labels. Our training framework consists of three main
components: anonymization function, self-supervised privacy removal branch, and
action recognition branch. We train our framework using a minimax optimization
strategy to minimize the action recognition cost function and maximize the
privacy cost function through a contrastive self-supervised loss. Employing
existing protocols of known action and privacy attributes, our framework
achieves an action-privacy trade-off competitive with existing
state-of-the-art supervised methods. In addition, we introduce a new protocol
to evaluate how well the learned anonymization function generalizes to novel
action and privacy attributes, and show that our self-supervised framework
outperforms existing supervised methods. Code available at:
https://github.com/DAVEISHAN/SPAct
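The abstract describes a minimax objective: minimize the action recognition loss while maximizing a contrastive self-supervised loss computed on the anonymized video. The sketch below illustrates that objective in PyTorch. It is not the authors' implementation (see the linked repository); the toy modules, tensor shapes, weight `omega`, NT-Xent temperature, and the use of two frames of the same clip as contrastive views are all illustrative assumptions.

```python
# Minimal sketch of the minimax training loop described in the abstract
# (illustrative only; not the SPAct authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Anonymizer(nn.Module):
    """Frame-wise encoder-decoder standing in for the anonymization function f_A."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, video):                      # video: (B, T, 3, H, W)
        b, t, c, h, w = video.shape
        return self.net(video.view(b * t, c, h, w)).view(b, t, c, h, w)

def nt_xent(z1, z2, tau=0.1):
    """Contrastive (NT-Xent style) loss between two batches of embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                     # (B, B) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Toy stand-ins for the action recognition branch f_T and the self-supervised
# privacy removal branch f_B (the real ones are far larger video/image encoders).
anonymizer = Anonymizer()
action_head = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
privacy_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))

opt_fa = torch.optim.Adam(anonymizer.parameters(), lr=1e-4)
opt_ft = torch.optim.Adam(action_head.parameters(), lr=1e-4)
opt_fb = torch.optim.Adam(privacy_encoder.parameters(), lr=1e-4)

video = torch.rand(4, 8, 3, 32, 32)                # (B, T, C, H, W) toy batch
action_label = torch.randint(0, 10, (4,))
omega = 0.5                                        # weight of the privacy term (assumed)

# Step 1: update f_A and f_T to minimize the action loss while *maximizing*
# the contrastive self-supervised loss of the privacy branch (minimax step).
anon = anonymizer(video)
loss_action = F.cross_entropy(action_head(anon.mean(dim=1)), action_label)
loss_privacy = nt_xent(privacy_encoder(anon[:, 0]), privacy_encoder(anon[:, 1]))
(loss_action - omega * loss_privacy).backward()
opt_fa.step(); opt_ft.step()
opt_fa.zero_grad(); opt_ft.zero_grad(); opt_fb.zero_grad()

# Step 2: update the privacy branch f_B alone so it remains a strong adversary,
# minimizing its contrastive loss on the (detached) anonymized frames.
anon = anonymizer(video).detach()
nt_xent(privacy_encoder(anon[:, 0]), privacy_encoder(anon[:, 1])).backward()
opt_fb.step(); opt_fb.zero_grad()
```

The alternation above mirrors the framework's three components: the anonymization function and action branch are pushed toward an action-useful but privacy-degraded representation, while the self-supervised privacy branch is kept competitive so that maximizing its loss remains meaningful.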
Related papers
- Can Language Models be Instructed to Protect Personal Information? [30.187731765653428]
We introduce PrivQA -- a benchmark to assess the privacy/utility trade-off when a model is instructed to protect specific categories of personal information in a simulated scenario.
We find that adversaries can easily circumvent these protections with simple jailbreaking methods through textual and/or image inputs.
We believe PrivQA has the potential to support the development of new models with improved privacy protections, as well as the adversarial robustness of these protections.
arXiv Detail & Related papers (2023-10-03T17:30:33Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - TeD-SPAD: Temporal Distinctiveness for Self-supervised
Privacy-preservation for video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z) - STPrivacy: Spatio-Temporal Tubelet Sparsification and Anonymization for
Privacy-preserving Action Recognition [28.002605566359676]
We present a new PPAR paradigm, performing privacy preservation from both spatial and temporal perspectives, and propose the STPrivacy framework.
For the first time, STPrivacy applies vision Transformers to PPAR and regards a video as a sequence of spatio-temporal tubelets.
Because there are no large-scale benchmarks, we annotate five privacy attributes for two of the most popular action recognition datasets.
arXiv Detail & Related papers (2023-01-08T14:07:54Z) - Privacy-Preserving Action Recognition via Motion Difference Quantization [22.31448780032675]
This paper proposes a simple yet robust privacy-preserving encoder called BDQ.
It is composed of three modules: Blur, Difference, and Quantization.
Experiments on three benchmark datasets show that the proposed encoder design can achieve a state-of-the-art trade-off between action recognition and privacy protection.
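As a rough illustration of the Blur, Difference, Quantization idea summarized above, the sketch below chains the three steps on a toy clip. It is a hypothetical, fixed-parameter re-implementation, not the BDQ authors' code; the 5x5 kernel and 8 quantization levels are arbitrary choices, and it assumes PyTorch plus torchvision.

```python
# Hypothetical Blur -> Difference -> Quantization pipeline (illustration only).
import torch
import torchvision.transforms.functional as TF

def bdq_encode(video: torch.Tensor, blur_kernel: int = 5, levels: int = 8) -> torch.Tensor:
    """video: (T, C, H, W) float tensor with values in [0, 1]."""
    # Blur: suppress fine appearance details (e.g., faces, text) in each frame.
    blurred = torch.stack([TF.gaussian_blur(frame, blur_kernel) for frame in video])
    # Difference: keep motion cues by subtracting consecutive blurred frames.
    diff = blurred[1:] - blurred[:-1]
    # Quantization: coarsely discretize the residual motion signal.
    quant = torch.round((diff.clamp(-1, 1) + 1) / 2 * (levels - 1)) / (levels - 1)
    return quant                                    # (T-1, C, H, W)

encoded = bdq_encode(torch.rand(8, 3, 64, 64))      # toy 8-frame clip -> (7, 3, 64, 64)
```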
arXiv Detail & Related papers (2022-08-04T05:03:27Z) - Learnable Privacy-Preserving Anonymization for Pedestrian Images [27.178354411900127]
This paper studies a novel privacy-preserving anonymization problem for pedestrian images.
It preserves personal identity information (PII) for authorized models and prevents PII from being recognized by third parties.
We propose a joint learning reversible anonymization framework, which can reversibly generate full-body anonymous images.
arXiv Detail & Related papers (2022-07-24T07:04:16Z) - PrivHAR: Recognizing Human Actions From Privacy-preserving Lens [58.23806385216332]
We propose an optimizing framework to provide robust visual privacy protection along the human action recognition pipeline.
Our framework parameterizes the camera lens to successfully degrade the quality of the videos to inhibit privacy attributes and protect against adversarial attacks.
arXiv Detail & Related papers (2022-06-08T13:43:29Z) - OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technology standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z) - Privacy-Preserving Video Classification with Convolutional Neural
Networks [8.51142156817993]
We propose a privacy-preserving implementation of video classification with convolutional neural networks, based on the single-frame method.
We evaluate our proposed solution in an application for private human emotion recognition.
arXiv Detail & Related papers (2021-02-06T05:05:31Z) - TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework
for Deep Learning with Anonymized Intermediate Representations [49.20701800683092]
We present TIPRDC, a task-independent privacy-respecting data crowdsourcing framework with anonymized intermediate representation.
The goal of this framework is to learn a feature extractor that hides privacy information from the intermediate representations, while maximally retaining the original information embedded in the raw data so that the data collector can accomplish unknown learning tasks.
arXiv Detail & Related papers (2020-05-23T06:21:26Z)