STPrivacy: Spatio-Temporal Tubelet Sparsification and Anonymization for
Privacy-preserving Action Recognition
- URL: http://arxiv.org/abs/2301.03046v1
- Date: Sun, 8 Jan 2023 14:07:54 GMT
- Title: STPrivacy: Spatio-Temporal Tubelet Sparsification and Anonymization for
Privacy-preserving Action Recognition
- Authors: Ming Li, Jun Liu, Hehe Fan, Jia-Wei Liu, Jiahe Li, Mike Zheng Shou,
Jussi Keppo
- Abstract summary: We present a novel PPAR paradigm, i.e., performing privacy preservation from both spatial and temporal perspectives, and propose a STPrivacy framework.
For the first time, our STPrivacy applies vision Transformers to PPAR and regards a video as a sequence of spatio-temporal tubelets.
Because there are no large-scale benchmarks, we annotate five privacy attributes for two of the most popular action recognition datasets.
- Score: 28.002605566359676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, privacy-preserving action recognition (PPAR) has become an
appealing video understanding problem. Nevertheless, existing works focus on
frame-level (spatial) privacy preservation, ignoring the privacy leakage
from a whole video and destroying the temporal continuity of actions. In this
paper, we present a novel PPAR paradigm, i.e., performing privacy preservation
from both spatial and temporal perspectives, and propose a STPrivacy framework.
For the first time, our STPrivacy applies vision Transformers to PPAR and
regards a video as a sequence of spatio-temporal tubelets, showing outstanding
advantages over previous convolutional methods. Specifically, our STPrivacy
adaptively treats privacy-containing tubelets in two different ways. The
tubelets irrelevant to actions are directly abandoned, i.e., sparsification,
and not published for subsequent tasks. In contrast, those highly involved in
actions are anonymized, i.e., anonymization, to remove private information.
These two transformation mechanisms are complementary and simultaneously
optimized in our unified framework. Because there are no large-scale benchmarks,
we annotate five privacy attributes for two of the most popular action
recognition datasets, i.e., HMDB51 and UCF101, and conduct extensive
experiments on them. Moreover, to verify the generalization ability of our
STPrivacy, we further introduce a privacy-preserving facial expression
recognition task and conduct experiments on a large-scale video facial
attributes dataset, i.e., CelebV-HQ. The thorough comparisons and visualization
analysis demonstrate our significant superiority over existing works. The
appendix contains more details and visualizations.
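The sparsification-versus-anonymization split described in the abstract is easy to picture in code. The following Python/PyTorch snippet is a minimal hypothetical sketch, not the authors' released implementation: the name TubeletPrivacyHead, the Gumbel-softmax gating, and the MLP anonymizer are assumptions made for illustration, whereas the actual framework optimizes these decisions jointly with action recognition and privacy-removal objectives.

```python
# Hypothetical sketch of tubelet sparsification + anonymization (not the
# authors' code). The video is embedded as non-overlapping spatio-temporal
# tubelets; a gating head decides, per tubelet, to drop, anonymize, or keep.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TubeletPrivacyHead(nn.Module):  # class name is an assumption
    def __init__(self, dim=192, tubelet=(2, 16, 16)):
        super().__init__()
        # Tubelet embedding, as in video vision Transformers.
        self.embed = nn.Conv3d(3, dim, kernel_size=tubelet, stride=tubelet)
        self.gate = nn.Linear(dim, 3)      # logits for [drop, anonymize, keep]
        self.anonymizer = nn.Sequential(   # stand-in for a learned anonymizer
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, video):              # video: (B, 3, T, H, W)
        tokens = self.embed(video).flatten(2).transpose(1, 2)   # (B, N, dim)
        # Hard but differentiable per-tubelet decisions via Gumbel-softmax.
        decision = F.gumbel_softmax(self.gate(tokens), tau=1.0, hard=True)
        drop, anon, keep = decision.unbind(-1)                  # each (B, N)
        out = (keep.unsqueeze(-1) * tokens                      # published as-is
               + anon.unsqueeze(-1) * self.anonymizer(tokens))  # scrubbed
        return out, decision  # dropped tubelets contribute all-zero tokens

video = torch.randn(2, 3, 16, 224, 224)        # toy batch of 16-frame clips
tokens, decision = TubeletPrivacyHead()(video)
print(tokens.shape, decision[0].sum(0))        # (2, 1568, 192), per-choice counts
```

In this toy version, dropped tubelets simply become zero tokens, i.e., they are never published downstream, while anonymized tubelets pass through a learned transformation meant to strip private cues yet keep action-relevant content.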
Related papers
- Differential Privacy Overview and Fundamental Techniques [63.0409690498569]
This chapter is meant to be part of the book "Differential Privacy in Artificial Intelligence: From Theory to Practice".
It starts by illustrating various attempts to protect data privacy, emphasizing where and why they failed.
It then defines the key actors, tasks, and scopes that make up the domain of privacy-preserving data analysis.
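For a concrete taste of the fundamental techniques such a chapter covers, below is a minimal sketch of the classic Laplace mechanism, which answers a numeric query with epsilon-differential privacy by adding noise scaled to the query's L1 sensitivity. The counting query and parameter values are illustrative assumptions, not examples taken from the chapter.

```python
# Minimal sketch of the Laplace mechanism for epsilon-DP (illustrative only).
# A counting query has L1 sensitivity 1, so adding Laplace(1/epsilon) noise
# to its true answer satisfies epsilon-differential privacy.
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng):
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
ages = np.array([34, 29, 41, 58, 23, 37])       # toy private dataset
count_over_30 = int((ages > 30).sum())          # true answer: 4
noisy = laplace_mechanism(count_over_30, sensitivity=1.0, epsilon=0.5, rng=rng)
print(noisy)  # smaller epsilon gives stronger privacy and noisier answers
```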
arXiv Detail & Related papers (2024-11-07T13:52:11Z)
- PrivacyRestore: Privacy-Preserving Inference in Large Language Models via Privacy Removal and Restoration [18.11846784025521]
PrivacyRestore is a plug-and-play method to protect the privacy of user inputs during inference.
We create three datasets, covering medical and legal domains, to evaluate the effectiveness of PrivacyRestore.
arXiv Detail & Related papers (2024-06-03T14:57:39Z)
- Preserving Node-level Privacy in Graph Neural Networks [8.823710998526705]
We propose a solution that addresses the issue of node-level privacy in Graph Neural Networks (GNNs).
Our protocol consists of two main components: 1) a sampling routine called HeterPoisson, which employs a specialized node sampling strategy and a series of tailored operations to generate a batch of sub-graphs with desired properties, and 2) a randomization routine that utilizes symmetric Laplace noise instead of the commonly used Gaussian noise.
Our protocol enables GNN learning with good performance, as demonstrated by experiments on five real-world datasets.
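The sampling half of such a protocol can be pictured with plain Poisson subsampling, shown below as a simplified stand-in for the paper's HeterPoisson routine (whose specialized heterogeneous strategy is not reproduced here); the toy graph and inclusion probability are assumptions for illustration. The randomization half would then perturb clipped per-subgraph gradients with symmetric Laplace noise, analogous to the Laplace mechanism sketched earlier.

```python
# Simplified Poisson subsampling over graph nodes (an illustrative stand-in
# for HeterPoisson, not the paper's routine). Each node joins the batch
# independently with probability q, a property many DP analyses rely on.
import numpy as np

def poisson_sample_nodes(num_nodes, q, rng):
    mask = rng.random(num_nodes) < q          # independent Bernoulli(q) draws
    return np.flatnonzero(mask)

def induced_subgraph(edges, batch):
    keep = set(batch.tolist())                # keep edges between sampled nodes
    return [(u, v) for u, v in edges if u in keep and v in keep]

rng = np.random.default_rng(7)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]  # toy 4-node graph
batch = poisson_sample_nodes(num_nodes=4, q=0.5, rng=rng)
print(batch, induced_subgraph(edges, batch))
```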
arXiv Detail & Related papers (2023-11-12T16:21:29Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Fairly Private: Investigating The Fairness of Visual Privacy Preservation Algorithms [1.5293427903448025]
This paper investigates the fairness of commonly used visual privacy preservation algorithms.
Experiments on the PubFig dataset clearly show that the privacy protection provided is unequal across groups.
arXiv Detail & Related papers (2023-01-12T13:40:38Z)
- PrivHAR: Recognizing Human Actions From Privacy-preserving Lens [58.23806385216332]
We propose an optimizing framework to provide robust visual privacy protection along the human action recognition pipeline.
Our framework parameterizes the camera lens to successfully degrade the quality of the videos to inhibit privacy attributes and protect against adversarial attacks.
arXiv Detail & Related papers (2022-06-08T13:43:29Z)
- SPAct: Self-supervised Privacy Preservation for Action Recognition [73.79886509500409]
Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments of self-supervised learning (SSL) have unleashed the untapped potential of the unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels.
arXiv Detail & Related papers (2022-03-29T02:56:40Z)
- Privacy-Preserving Image Features via Adversarial Affine Subspace Embeddings [72.68801373979943]
Many computer vision systems require users to upload image features to the cloud for processing and storage.
We propose a new privacy-preserving feature representation.
Compared to the original features, our approach makes it significantly more difficult for an adversary to recover private information.
arXiv Detail & Related papers (2020-06-11T17:29:48Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2$\times$ (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)