Privacy-Preserving Action Recognition via Motion Difference Quantization
- URL: http://arxiv.org/abs/2208.02459v1
- Date: Thu, 4 Aug 2022 05:03:27 GMT
- Title: Privacy-Preserving Action Recognition via Motion Difference Quantization
- Authors: Sudhakar Kumawat and Hajime Nagahara
- Abstract summary: This paper proposes a simple, yet robust privacy-preserving encoder called BDQ.
It is composed of three modules: Blur, Difference, and Quantization.
Experiments on three benchmark datasets show that the proposed encoder design can achieve a state-of-the-art trade-off.
- Score: 22.31448780032675
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The widespread use of smart computer vision systems in our personal spaces
has led to an increased consciousness about the privacy and security risks that
these systems pose. On the one hand, we want these systems to assist in our
daily lives by understanding their surroundings, but on the other hand, we want
them to do so without capturing any sensitive information. Towards this
direction, this paper proposes a simple, yet robust privacy-preserving encoder
called BDQ for the task of privacy-preserving human action recognition that is
composed of three modules: Blur, Difference, and Quantization. First, the input
scene is passed to the Blur module to smooth its edges. The Difference module
then applies a pixel-wise intensity subtraction between consecutive frames to
highlight motion features and suppress obvious high-level privacy attributes.
Finally, the Quantization module is applied to the motion-difference frames to
remove low-level privacy attributes. The BDQ
parameters are optimized in an end-to-end fashion via adversarial training, so
that the encoder learns to preserve action-recognition attributes while
suppressing privacy attributes. Our experiments on three benchmark datasets
show that the proposed encoder design achieves a state-of-the-art trade-off
compared with previous works. Furthermore, we show that this trade-off is on
par with that of DVS sensor-based event cameras. Code available at:
https://github.com/suakaw/BDQ_PrivacyAR.
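As a concrete illustration of the pipeline described above, here is a minimal PyTorch sketch of a Blur -> Difference -> Quantization encoder. The class name, Gaussian kernel size, sigma, and number of quantization levels are illustrative assumptions; in the paper these parameters are learned end to end rather than fixed by hand.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(kernel_size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """Build a normalized 2-D Gaussian kernel."""
    ax = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k2d = torch.outer(g, g)
    return k2d / k2d.sum()

class BDQEncoder(nn.Module):
    """Illustrative Blur -> Difference -> Quantization pipeline
    (a sketch, not the authors' exact configuration)."""

    def __init__(self, channels: int = 3, kernel_size: int = 5,
                 sigma: float = 1.0, levels: int = 8):
        super().__init__()
        weight = gaussian_kernel(kernel_size, sigma).expand(channels, 1, -1, -1)
        self.register_buffer("weight", weight.clone())  # depthwise blur kernel
        self.channels = channels
        self.pad = kernel_size // 2
        self.levels = levels

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (B, T, C, H, W), intensities in [0, 1]
        b, t, c, h, w = video.shape
        frames = video.reshape(b * t, c, h, w)

        # Blur: depthwise Gaussian smoothing removes fine spatial detail.
        blurred = F.conv2d(frames, self.weight, padding=self.pad,
                           groups=self.channels).reshape(b, t, c, h, w)

        # Difference: consecutive-frame subtraction keeps motion cues and
        # suppresses static appearance such as faces and backgrounds.
        diff = blurred[:, 1:] - blurred[:, :-1]          # values in [-1, 1]

        # Quantization: coarsen intensities to strip low-level residual cues;
        # the straight-through trick keeps rounding differentiable.
        scaled = (diff + 1) / 2 * (self.levels - 1)
        rounded = scaled + (torch.round(scaled) - scaled).detach()
        return rounded / (self.levels - 1) * 2 - 1       # back to [-1, 1]

# Usage: encode a random 8-frame RGB clip into 7 motion-difference frames.
clip = torch.rand(2, 8, 3, 112, 112)
encoded = BDQEncoder()(clip)        # shape: (2, 7, 3, 112, 112)
```

The adversarial optimization mentioned in the abstract can be pictured as a min-max game: the encoder is rewarded when a downstream action classifier succeeds and penalized when a privacy classifier succeeds. The loss below is a hypothetical rendering of that idea; `lambda_priv` is an assumed trade-off weight, not a value reported in the paper.

```python
import torch.nn.functional as F

def encoder_loss(action_logits, action_labels,
                 privacy_logits, privacy_labels, lambda_priv: float = 1.0):
    # Minimize action-recognition error, maximize privacy-classifier error.
    return (F.cross_entropy(action_logits, action_labels)
            - lambda_priv * F.cross_entropy(privacy_logits, privacy_labels))
```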
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy, which allows for controlling the sensitive regions where differential privacy (DP) is applied.
Our method operates selectively on the data, allowing non-sensitive spatio-temporal regions to be defined without DP application, or combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Robust Representation Learning for Privacy-Preserving Machine Learning: A Multi-Objective Autoencoder Approach [0.9831489366502302]
We propose a robust representation learning framework for privacy-preserving machine learning (ppML).
Our method centers on training autoencoders in a multi-objective manner and then concatenating the latent and learned features from the encoding part as the encoded form of our data.
With our proposed framework, we can share our data and use third-party tools without the threat of revealing its original form.
arXiv Detail & Related papers (2023-09-08T16:41:25Z)
- STPrivacy: Spatio-Temporal Tubelet Sparsification and Anonymization for Privacy-preserving Action Recognition [28.002605566359676]
We present a novel PPAR paradigm that performs privacy preservation from both spatial and temporal perspectives, and propose an STPrivacy framework.
For the first time, our STPrivacy applies vision Transformers to PPAR and regards a video as a sequence of spatio-temporal tubelets.
Because there are no large-scale benchmarks, we annotate five privacy attributes for two of the most popular action recognition datasets.
arXiv Detail & Related papers (2023-01-08T14:07:54Z)
- PrivHAR: Recognizing Human Actions From Privacy-preserving Lens [58.23806385216332]
We propose an optimization framework to provide robust visual privacy protection along the human action recognition pipeline.
Our framework parameterizes the camera lens to successfully degrade the quality of the videos to inhibit privacy attributes and protect against adversarial attacks.
arXiv Detail & Related papers (2022-06-08T13:43:29Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technology standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- SPAct: Self-supervised Privacy Preservation for Action Recognition [73.79886509500409]
Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments in self-supervised learning (SSL) have unleashed the untapped potential of the unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels.
arXiv Detail & Related papers (2022-03-29T02:56:40Z)
- Tempered Sigmoid Activations for Deep Learning with Differential Privacy [33.574715000662316]
We show that the choice of activation function is central to bounding the sensitivity of privacy-preserving deep learning; a minimal sketch of the tempered-sigmoid family appears after this list.
We achieve new state-of-the-art accuracy on MNIST, FashionMNIST, and CIFAR10 without any modification of the learning procedure fundamentals.
arXiv Detail & Related papers (2020-07-28T13:19:45Z)
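The tempered-sigmoid family from the entry above can be written as phi(x) = s * sigmoid(T * x) - o, with tanh recovered at s = 2, T = 2, o = 1. Below is a minimal sketch of that formula; the defaults shown are the tanh special case, and the assertion is only a sanity check.

```python
import torch

def tempered_sigmoid(x: torch.Tensor, s: float = 2.0, T: float = 2.0,
                     o: float = 1.0) -> torch.Tensor:
    """Bounded activation family s * sigmoid(T * x) - o; keeping
    activations bounded helps control gradient sensitivity under DP-SGD."""
    return s * torch.sigmoid(T * x) - o

x = torch.linspace(-3.0, 3.0, 7)
assert torch.allclose(tempered_sigmoid(x), torch.tanh(x), atol=1e-6)
```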
This list is automatically generated from the titles and abstracts of the papers on this site.