A Review on Visual Privacy Preservation Techniques for Active and
Assisted Living
- URL: http://arxiv.org/abs/2112.09422v1
- Date: Fri, 17 Dec 2021 10:37:30 GMT
- Title: A Review on Visual Privacy Preservation Techniques for Active and
Assisted Living
- Authors: Siddharth Ravi, Pau Climent-Pérez, Francisco Florez-Revuelta
- Abstract summary: A novel taxonomy with which state-of-the-art visual privacy protection methods can be classified is introduced.
Perceptual obfuscation methods, a category in the taxonomy, are highlighted.
Obfuscation against machine learning models is also explored.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper reviews the state of the art in visual privacy protection
techniques, with particular attention paid to techniques applicable to the
field of active and assisted living (AAL). A novel taxonomy with which
state-of-the-art visual privacy protection methods can be classified is
introduced. Perceptual obfuscation methods, a category in the taxonomy, are
highlighted. These visual privacy preservation techniques are particularly
relevant to scenarios involving video-based AAL monitoring. Obfuscation
against machine learning models is also explored. A
high-level classification scheme of the different levels of privacy by design
is connected to the proposed taxonomy of visual privacy preservation
techniques. Finally, we note open questions that exist in the field and
introduce the reader to some exciting avenues for future research in the area
of visual privacy.
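As an illustration of the perceptual obfuscation category the abstract highlights (this sketch is not from the paper itself), a minimal pixelation filter destroys identifying detail in a frame while keeping the coarse structure a monitoring task might still need:

```python
import numpy as np

def pixelate(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Naive perceptual obfuscation: replace each block x block tile
    with its mean intensity, removing fine identifying detail while
    preserving coarse scene structure (e.g., for fall detection)."""
    h, w = img.shape[:2]
    out = img.astype(float).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]
            tile[...] = tile.mean(axis=(0, 1))  # flatten the tile to one value
    return out.astype(img.dtype)

# Hypothetical grayscale camera frame for demonstration
frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
obfuscated = pixelate(frame, block=8)
```

Real AAL systems would apply such filters selectively (e.g., only to detected faces or bodies) rather than to the whole frame.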
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive spatio-temporal regions without DP application, or combining differential privacy with other privacy techniques within data samples.
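The core idea of region-selective differential privacy can be sketched as follows (a minimal illustration under assumed parameters, not the paper's actual implementation): Laplace noise calibrated to a sensitivity/epsilon budget is added only inside a sensitive mask, leaving non-sensitive pixels untouched.

```python
import numpy as np

def masked_laplace(img: np.ndarray, mask: np.ndarray,
                   epsilon: float = 1.0, sensitivity: float = 255.0) -> np.ndarray:
    """Sketch of masked DP: add Laplace noise with scale
    sensitivity/epsilon only where mask is True; pixels outside
    the sensitive region are returned unchanged."""
    rng = np.random.default_rng(0)
    noise = rng.laplace(0.0, sensitivity / epsilon, size=img.shape)
    out = img.astype(float)
    out[mask] += noise[mask]          # noise applied to sensitive region only
    return np.clip(out, 0, 255)       # keep values in valid pixel range

# Toy image with the top half marked as sensitive
img = np.full((4, 4), 100.0)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :] = True
noisy = masked_laplace(img, mask, epsilon=0.5)
```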
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Enhancing User-Centric Privacy Protection: An Interactive Framework through Diffusion Models and Machine Unlearning [54.30994558765057]
The study pioneers a comprehensive privacy protection framework that safeguards image data privacy concurrently during data sharing and model publication.
We propose an interactive image privacy protection framework that utilizes generative machine learning models to modify image information at the attribute level.
Within this framework, we instantiate two modules: a differential privacy diffusion model for protecting attribute information in images and a feature unlearning algorithm for efficient updates of the trained model on the revised image dataset.
arXiv Detail & Related papers (2024-09-05T07:55:55Z)
- PrivacyProber: Assessment and Detection of Soft-Biometric Privacy-Enhancing Techniques [1.790445868185437]
We study the robustness of several state-of-the-art soft-biometric privacy-enhancing techniques to attribute recovery attempts.
We propose PrivacyProber, a high-level framework for restoring soft-biometric information from privacy-enhanced facial images.
arXiv Detail & Related papers (2022-11-16T12:20:18Z)
- Hiding Visual Information via Obfuscating Adversarial Perturbations [47.315523613407244]
We propose an adversarial visual information hiding method to protect the visual privacy of data.
Specifically, the method generates obfuscating adversarial perturbations to obscure the visual information of the data.
Experimental results on the recognition and classification tasks demonstrate that the proposed method can effectively hide visual information.
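A gradient-sign (FGSM-style) perturbation is the classic building block behind this kind of obfuscation; the toy sketch below, using an assumed logistic-regression "recognizer" rather than the paper's actual model, shows how a small signed step in the loss gradient degrades the model's output on an input:

```python
import numpy as np

def fgsm_perturb(x: np.ndarray, w: np.ndarray, b: float,
                 y: float, eps: float = 0.1) -> np.ndarray:
    """Toy FGSM-style perturbation against a logistic-regression model:
    step in the sign of the loss gradient w.r.t. the input, which
    pushes the model's score for the true label y downward."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid confidence for label 1
    grad_x = (p - y) * w           # gradient of binary cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(1)
w = rng.normal(size=8)             # hypothetical recognizer weights
x = rng.normal(size=8)             # hypothetical feature vector
x_adv = fgsm_perturb(x, w, b=0.0, y=1.0, eps=0.3)
```

Note that the paper's method crafts perturbations to obscure visual content specifically; this sketch only demonstrates the shared adversarial-gradient mechanism.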
arXiv Detail & Related papers (2022-09-30T08:23:26Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate the face privacy protection from a technology standpoint based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- SPAct: Self-supervised Privacy Preservation for Action Recognition [73.79886509500409]
Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments of self-supervised learning (SSL) have unleashed the untapped potential of the unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels.
arXiv Detail & Related papers (2022-03-29T02:56:40Z)
- Toward Privacy and Utility Preserving Image Representation [26.768476643200664]
We study the novel problem of creating privacy-preserving image representations with respect to a given utility task.
We propose a principled framework called the Adversarial Image Anonymizer (AIA)
AIA first creates an image representation using a generative model, then enhances the learned image representations using adversarial learning to preserve privacy and utility for a given task.
arXiv Detail & Related papers (2020-09-30T01:25:00Z)
- Privacy-Preserving Image Features via Adversarial Affine Subspace Embeddings [72.68801373979943]
Many computer vision systems require users to upload image features to the cloud for processing and storage.
We propose a new privacy-preserving feature representation.
Compared to the original features, our approach makes it significantly more difficult for an adversary to recover private information.
arXiv Detail & Related papers (2020-06-11T17:29:48Z)
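The geometric intuition behind affine subspace embeddings can be sketched as follows (an illustrative construction under assumed parameters, not the authors' actual scheme): instead of uploading a point descriptor, the client publishes an affine subspace that contains the true descriptor, so the server only learns it up to a k-dimensional ambiguity.

```python
import numpy as np

def lift_to_affine_subspace(f: np.ndarray, k: int = 2, seed: int = 0):
    """Sketch: represent descriptor f by an affine subspace
    (anchor + span of B) that provably contains f, hiding the
    exact point among a k-dimensional family of candidates."""
    rng = np.random.default_rng(seed)
    B = rng.normal(size=(f.size, k))   # random spanning directions
    c = rng.normal(size=k)             # random offset within the subspace
    anchor = f - B @ c                 # shift anchor so that f = anchor + B @ c
    return anchor, B

f = np.arange(6.0)                     # hypothetical image feature descriptor
anchor, B = lift_to_affine_subspace(f)
```

The published `(anchor, B)` pair supports subspace-to-subspace matching while leaving the adversary uncertain about which point in the subspace is the real feature.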
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.