Anonymization for Skeleton Action Recognition
- URL: http://arxiv.org/abs/2111.15129v1
- Date: Tue, 30 Nov 2021 05:13:20 GMT
- Title: Anonymization for Skeleton Action Recognition
- Authors: Myeonghyeon Kim, Zhenyue Qin, Yang Liu, Dongwoo Kim
- Abstract summary: We propose two variants of anonymization algorithms to protect the potential privacy leakage from the skeleton dataset.
Experimental results show that the anonymized dataset can reduce the risk of privacy leakage while having marginal effects on the action recognition performance.
- Score: 6.772319578308409
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Skeleton-based action recognition attracts practitioners and researchers
due to the lightweight, compact nature of its datasets. Compared with
RGB-video-based action recognition, skeleton-based action recognition is a
safer way to protect the privacy of subjects while having competitive
recognition performance. However, due to the improvements of skeleton
estimation algorithms as well as motion- and depth-sensors, more details of
motion characteristics can be preserved in the skeleton dataset, leading to a
potential privacy leakage from the dataset. To investigate the potential
privacy leakage from the skeleton datasets, we first train a classifier to
categorize sensitive private information from a trajectory of joints.
Experiments show that a model trained to classify gender predicts it with 88%
accuracy and re-identifies a person with 82% accuracy. We propose two variants of
anonymization algorithms to protect the potential privacy leakage from the
skeleton dataset. Experimental results show that the anonymized dataset can
reduce the risk of privacy leakage while having marginal effects on the action
recognition performance.
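The abstract does not detail the two anonymization variants. As a hedged illustration of the general idea only (suppressing identity cues such as absolute position and body scale while keeping motion dynamics), a minimal hand-crafted sketch might normalize each skeleton frame; the function name, root-joint convention, and normalization choice below are illustrative assumptions, not the authors' algorithms:

```python
from typing import List, Tuple

Joint = Tuple[float, float, float]

def anonymize_frame(joints: List[Joint], root_idx: int = 0) -> List[Joint]:
    """Remove coarse anthropometric cues from one skeleton frame.

    Centers the skeleton at a root joint and rescales it to unit size
    (max distance from the root), so absolute position and body scale,
    both weak identity cues, are discarded while relative motion is
    preserved. Illustrative only: the paper's actual anonymization
    algorithms are not specified in the abstract.
    """
    rx, ry, rz = joints[root_idx]
    centered = [(x - rx, y - ry, z - rz) for x, y, z in joints]
    scale = max((x * x + y * y + z * z) ** 0.5 for x, y, z in centered)
    if scale == 0.0:
        return centered
    return [(x / scale, y / scale, z / scale) for x, y, z in centered]
```

Applied per frame over a joint trajectory, this strips global translation and subject height; a learned anonymizer would additionally target subtler cues such as gait style.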
Related papers
- Effect of Data Degradation on Motion Re-Identification [16.062009131216467]
We study the effect of signal degradation on identifiability, specifically through added noise, reduced framerate, reduced precision, and reduced dimensionality of the data.
Our experiment shows that state-of-the-art identification attacks still achieve near-perfect accuracy for each of these degradations.
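The four degradations studied can be sketched as simple transforms on a motion sequence; the function names and parameters below are illustrative assumptions, with each frame taken to be a flat list of coordinates:

```python
import random
from typing import List

Frame = List[float]

def add_noise(seq: List[Frame], sigma: float) -> List[Frame]:
    """Additive Gaussian noise on every coordinate."""
    return [[v + random.gauss(0.0, sigma) for v in f] for f in seq]

def reduce_framerate(seq: List[Frame], keep_every: int) -> List[Frame]:
    """Temporal subsampling: keep every k-th frame."""
    return seq[::keep_every]

def reduce_precision(seq: List[Frame], decimals: int) -> List[Frame]:
    """Quantize coordinates to a fixed number of decimals."""
    return [[round(v, decimals) for v in f] for f in seq]

def reduce_dimensionality(seq: List[Frame], keep_dims: int) -> List[Frame]:
    """Drop trailing coordinates (e.g. depth) from each frame."""
    return [f[:keep_dims] for f in seq]
```

The paper's finding is that identification attacks stay near-perfect under each such degradation, which is why stronger, targeted anonymization is needed.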
arXiv Detail & Related papers (2024-07-25T20:23:32Z)
- Distillation-guided Representation Learning for Unconstrained Gait Recognition [50.0533243584942]
We propose a framework, termed GAit DEtection and Recognition (GADER), for human authentication in challenging outdoor scenarios.
GADER builds discriminative features through a novel gait recognition method, where only frames containing gait information are used.
We evaluate our method against multiple state-of-the-art (SoTA) gait baselines and demonstrate consistent improvements on indoor and outdoor datasets.
arXiv Detail & Related papers (2023-07-27T01:53:57Z)
- dugMatting: Decomposed-Uncertainty-Guided Matting [83.71273621169404]
We propose a decomposed-uncertainty-guided matting algorithm, which explores the explicitly decomposed uncertainties to efficiently and effectively improve the results.
The proposed matting framework relieves the requirement for users to determine the interaction areas by using simple and efficient labeling.
arXiv Detail & Related papers (2023-06-02T11:19:50Z)
- Skeleton-Based Mutually Assisted Interacted Object Localization and Human Action Recognition [111.87412719773889]
We propose a joint learning framework for "interacted object localization" and "human action recognition" based on skeleton data.
Our method achieves the best or competitive performance with the state-of-the-art methods for human action recognition.
arXiv Detail & Related papers (2021-10-28T10:09:34Z)
- Partial sensitivity analysis in differential privacy [58.730520380312676]
We investigate the impact of each input feature on the individual's privacy loss.
We experimentally evaluate our approach on queries over private databases.
We also explore our findings in the context of neural network training on synthetic data.
arXiv Detail & Related papers (2021-09-22T08:29:16Z)
- Privacy-Preserving Federated Learning on Partitioned Attributes [6.661716208346423]
Federated learning empowers collaborative training without exposing local data or models.
We introduce an adversarial learning based procedure which tunes a local model to release privacy-preserving intermediate representations.
To alleviate the accuracy decline, we propose a defense method based on the forward-backward splitting algorithm.
arXiv Detail & Related papers (2021-04-29T14:49:14Z)
- Privacy-preserving Object Detection [52.77024349608834]
We show that for object detection on COCO, both anonymizing the dataset by blurring faces, as well as swapping faces in a balanced manner along the gender and skin tone dimension, can retain object detection performances while preserving privacy and partially balancing bias.
arXiv Detail & Related papers (2021-03-11T10:34:54Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
In decentralized learning, the local exchange of estimates allows inference of the agents' private data.
Previous approaches rely on perturbations chosen independently at every agent, which results in a significant performance loss.
We propose an alternative scheme that constructs perturbations according to a particular nullspace condition, allowing them to be invisible in the aggregate.
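The nullspace idea can be illustrated with its simplest case: per-agent perturbations that sum to zero across the network, so they cancel in the global average while masking each individual contribution. This is a hedged simplification of the paper's graph-homomorphic construction, not its actual scheme:

```python
import random
from typing import List

def zero_sum_perturbations(n_agents: int, sigma: float) -> List[float]:
    """Draw perturbations whose sum is exactly zero.

    Each agent's shared value is masked, but the perturbations lie in
    the nullspace of the averaging operation, so the aggregate is
    unchanged (unlike independent noise, which degrades it).
    """
    noise = [random.gauss(0.0, sigma) for _ in range(n_agents)]
    mean = sum(noise) / n_agents
    return [e - mean for e in noise]

def private_average(estimates: List[float], sigma: float = 1.0) -> float:
    """Average of perturbed estimates equals the true average."""
    perturb = zero_sum_perturbations(len(estimates), sigma)
    masked = [x + e for x, e in zip(estimates, perturb)]
    return sum(masked) / len(masked)
```

Generating correlated perturbations like this in a truly decentralized way, without a trusted coordinator, is exactly what the graph-homomorphic construction addresses.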
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- How important are faces for person re-identification? [14.718372669984364]
We apply a face detection and blurring algorithm to create anonymized versions of several popular person re-identification datasets.
We evaluate the effect of this anonymization on re-identification performance using standard metrics.
arXiv Detail & Related papers (2020-10-13T11:47:16Z)
- Attribute Privacy: Framework and Mechanisms [26.233612860653025]
We initiate the study of attribute privacy, where a data owner is concerned about revealing sensitive properties of a whole dataset during analysis.
We propose definitions to capture attribute privacy in two relevant cases where global attributes may need to be protected.
We provide two efficient mechanisms and one inefficient mechanism that satisfy attribute privacy for these settings.
arXiv Detail & Related papers (2020-09-08T22:38:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.