Human-interpretable and deep features for image privacy classification
- URL: http://arxiv.org/abs/2310.19582v2
- Date: Tue, 31 Oct 2023 10:44:15 GMT
- Title: Human-interpretable and deep features for image privacy classification
- Authors: Darya Baranouskaya and Andrea Cavallaro
- Abstract summary: We discuss suitable features for image privacy classification and propose eight privacy-specific and human-interpretable features.
These features increase the performance of deep learning models and, on their own, improve the image representation for privacy classification compared with much higher-dimensional deep features.
- Score: 32.253391125106674
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy is a complex, subjective and contextual concept that is difficult to
define. Therefore, the annotation of images to train privacy classifiers is a
challenging task. In this paper, we analyse privacy classification datasets and
the properties of controversial images that are annotated with contrasting
privacy labels by different assessors. We discuss suitable features for image
privacy classification and propose eight privacy-specific and
human-interpretable features. These features increase the performance of deep
learning models and, on their own, improve the image representation for privacy
classification compared with much higher-dimensional deep features.
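To make the fusion the abstract describes concrete, here is a minimal sketch: a handful of hand-crafted, human-interpretable privacy features concatenated with a much higher-dimensional deep feature vector before classification. The feature names, dimensions, and data below are illustrative assumptions, not the paper's.

```python
# Sketch of late fusion: interpretable privacy features + deep features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_images = 200

# Hypothetical interpretable features, one column each (e.g. number of
# people, presence of faces, scene "privateness", skin exposure, ...).
interpretable = rng.random((n_images, 8))

# Stand-in for a deep backbone embedding (e.g. a 2048-D ResNet feature).
deep = rng.standard_normal((n_images, 2048))

labels = rng.integers(0, 2, size=n_images)  # 0 = public, 1 = private

# Late fusion by concatenation, then a linear classifier on top.
fused = np.concatenate([interpretable, deep], axis=1)
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print("train accuracy:", clf.score(fused, labels))
```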
Related papers
- Differential Privacy Overview and Fundamental Techniques [63.0409690498569]
This chapter is meant to be part of the book "Differential Privacy in Artificial Intelligence: From Theory to Practice".
It starts by illustrating various attempts to protect data privacy, emphasizing where and why they failed.
It then defines the key actors, tasks, and scopes that make up the domain of privacy-preserving data analysis.
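As a toy illustration of one fundamental technique such an overview covers (not the chapter's own example), the Laplace mechanism releases a noisy count: for a counting query with sensitivity 1, adding Laplace(1/ε) noise yields ε-differential privacy.

```python
# Laplace mechanism for a counting query (sensitivity 1).
import numpy as np

def laplace_count(data, predicate, epsilon, rng):
    """Release a noisy count of records satisfying `predicate`."""
    true_count = sum(1 for x in data if predicate(x))
    sensitivity = 1.0  # one record changes the count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

rng = np.random.default_rng(0)
ages = [23, 35, 41, 29, 62, 18, 54]
print(laplace_count(ages, lambda a: a >= 40, epsilon=0.5, rng=rng))
```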
arXiv Detail & Related papers (2024-11-07T13:52:11Z) - Image-guided topic modeling for interpretable privacy classification [27.301741710016223]
We propose to predict image privacy based on a set of natural language content descriptors.
These content descriptors are associated with privacy scores that reflect how people perceive image content.
We use the ITM-generated descriptors to learn a privacy predictor, Priv×ITM, whose decisions are interpretable by design.
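A hedged sketch of the general idea (not the authors' Priv×ITM pipeline): an image is represented by how strongly it expresses a few natural language content descriptors, each carrying a privacy score, and the prediction is an interpretable weighted combination. All descriptors and scores below are made up for illustration.

```python
# Interpretable privacy prediction from descriptor activations.
descriptor_privacy_scores = {
    "people at home": 0.9,
    "identity documents": 0.95,
    "outdoor landscape": 0.1,
    "food on a table": 0.2,
}

def predict_privacy(descriptor_activations):
    """Weighted average of per-descriptor privacy scores; each
    descriptor's contribution is directly readable from the sum."""
    total = sum(descriptor_activations.values())
    if total == 0:
        return 0.0
    return sum(
        act * descriptor_privacy_scores[d]
        for d, act in descriptor_activations.items()
    ) / total

# An image that mostly shows people at home, with some food.
activations = {"people at home": 0.7, "food on a table": 0.3,
               "outdoor landscape": 0.0, "identity documents": 0.0}
print(predict_privacy(activations))  # -> 0.69: leans private
```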
arXiv Detail & Related papers (2024-09-27T12:02:28Z) - Explaining models relating objects and privacy [33.78605193864911]
We evaluate privacy models that use objects extracted from an image to determine why the image is predicted as private.
We show that the presence of the person category and its cardinality are the main factors for the privacy decision.
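That finding suggests a very simple interpretable baseline: predict "private" from the presence and count of detected persons. The threshold below is a hypothetical choice for illustration, not taken from the paper.

```python
# Person-presence/cardinality baseline for image privacy.
def person_cardinality_baseline(detected_objects, person_threshold=1):
    """detected_objects: list of category names from an object detector."""
    n_persons = sum(1 for c in detected_objects if c == "person")
    return "private" if n_persons >= person_threshold else "public"

print(person_cardinality_baseline(["person", "person", "sofa"]))  # private
print(person_cardinality_baseline(["car", "tree"]))               # public
```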
arXiv Detail & Related papers (2024-05-02T18:06:48Z) - SHAN: Object-Level Privacy Detection via Inference on Scene Heterogeneous Graph [5.050631286347773]
Privacy object detection aims to accurately locate private objects in images.
Existing methods suffer from serious deficiencies in accuracy, generalization, and interpretability.
We propose SHAN, a Scene Heterogeneous graph Attention Network, a model that constructs a scene heterogeneous graph from an image.
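A hedged sketch of what "constructing a scene heterogeneous graph from an image" could look like in practice: nodes of different types (scene, objects) connected by typed edges. This is a plain data-structure illustration, not the SHAN architecture itself.

```python
# Build a toy scene heterogeneous graph from detector outputs.
import networkx as nx

def build_scene_graph(scene_label, detections):
    """detections: list of (object_category, confidence) pairs."""
    g = nx.MultiDiGraph()
    g.add_node("scene", ntype="scene", label=scene_label)
    for i, (category, conf) in enumerate(detections):
        obj = f"obj_{i}"
        g.add_node(obj, ntype="object", category=category, confidence=conf)
        g.add_edge("scene", obj, etype="contains")  # scene -> object edge
    # Object-object co-occurrence edges (a second edge type).
    objs = [n for n, d in g.nodes(data=True) if d["ntype"] == "object"]
    for a in objs:
        for b in objs:
            if a != b:
                g.add_edge(a, b, etype="co_occurs")
    return g

g = build_scene_graph("bedroom", [("person", 0.97), ("bed", 0.88)])
print(g)  # MultiDiGraph with 3 nodes and 4 edges
```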
arXiv Detail & Related papers (2024-03-14T08:32:14Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM-format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - Fairly Private: Investigating The Fairness of Visual Privacy Preservation Algorithms [1.5293427903448025]
This paper investigates the fairness of commonly used visual privacy preservation algorithms.
Experiments on the PubFig dataset clearly show that the privacy protection provided is unequal across groups.
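One minimal way to quantify such unequal protection (a generic sketch, not the paper's protocol): run a recognition attack on obfuscated images and compare re-identification rates per group. The group labels and records below are synthetic.

```python
# Per-group re-identification rate as a fairness probe.
from collections import defaultdict

def per_group_reid_rate(records):
    """records: list of (group, was_reidentified) pairs."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, reid in records:
        totals[group] += 1
        hits[group] += int(reid)
    return {g: hits[g] / totals[g] for g in totals}

records = [("A", True), ("A", False), ("A", True),
           ("B", False), ("B", False), ("B", True)]
print(per_group_reid_rate(records))  # {'A': ~0.67, 'B': ~0.33}: unequal
```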
arXiv Detail & Related papers (2023-01-12T13:40:38Z) - How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject gradient norm in differentially private (DP) neural network training and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion a subject's privacy loss to their input attributes.
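PLIS builds on the per-subject gradient norms that DP-SGD clips. A hedged numpy sketch, with synthetic data and a plain logistic-regression loss, of how much each subject "pushes" the model:

```python
# Per-subject gradient norms for a logistic-regression loss.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))  # 5 subjects, 3 input attributes
y = rng.integers(0, 2, size=5)
w = rng.standard_normal(3)

p = 1.0 / (1.0 + np.exp(-X @ w))         # predicted probabilities
per_sample_grads = (p - y)[:, None] * X  # d loss_i / d w, one row per subject
norms = np.linalg.norm(per_sample_grads, axis=1)
print("per-subject gradient norms:", norms.round(3))
# Larger norms mean a larger contribution to the clipped update and, in the
# spirit of PLIS, a larger share of privacy loss for that subject's inputs.
```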
arXiv Detail & Related papers (2022-11-18T11:39:03Z) - Content-based Graph Privacy Advisor [38.733077459065704]
We present an image privacy classifier that uses scene information and object cardinality as cues for the prediction of image privacy.
Our Graph Privacy Advisor (GPA) model simplifies a state-of-the-art graph model and improves its performance.
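A hedged sketch of the two cues GPA relies on (not the graph model itself): a compact image representation built from a scene tag plus object cardinality. The scene and object vocabularies here are illustrative.

```python
# Scene-tag one-hot + object-cardinality feature vector.
import numpy as np

SCENES = ["bedroom", "office", "street"]
OBJECTS = ["person", "bed", "laptop", "car"]

def gpa_style_features(scene, object_counts):
    scene_onehot = np.array([float(scene == s) for s in SCENES])
    cardinality = np.array([float(object_counts.get(o, 0)) for o in OBJECTS])
    return np.concatenate([scene_onehot, cardinality])

print(gpa_style_features("bedroom", {"person": 2, "bed": 1}))
# -> [1. 0. 0. 2. 1. 0. 0.]
```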
arXiv Detail & Related papers (2022-10-20T11:12:42Z) - Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
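A toy illustration of the per-attribute view (a generic sketch, not the paper's algorithms): each attribute of a record receives Laplace noise scaled to its own sensitivity and its own privacy budget, so more sensitive attributes can be protected more strongly.

```python
# Per-attribute Laplace noise with attribute-specific budgets.
import numpy as np

rng = np.random.default_rng(0)
record = {"age": 34.0, "income": 52000.0}
sensitivities = {"age": 1.0, "income": 1000.0}
epsilons = {"age": 1.0, "income": 0.1}  # income protected more strongly

noisy = {
    k: v + rng.laplace(scale=sensitivities[k] / epsilons[k])
    for k, v in record.items()
}
print(noisy)
```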
arXiv Detail & Related papers (2022-09-08T22:43:50Z) - Privacy-Preserving Image Features via Adversarial Affine Subspace Embeddings [72.68801373979943]
Many computer vision systems require users to upload image features to the cloud for processing and storage.
We propose a new privacy-preserving feature representation.
Compared to the original features, our approach makes it significantly more difficult for an adversary to recover private information.
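A hedged sketch of the lifting idea (with random directions, whereas the paper selects adversarial ones): each feature point is replaced by an affine line through it, hiding the true point among infinitely many candidates, while matching can still compare line-to-line distances.

```python
# Lift feature points to affine lines and match lines instead of points.
import numpy as np

rng = np.random.default_rng(0)

def lift(feature):
    """Return (anchor_point, unit_direction) representing an affine line."""
    direction = rng.standard_normal(feature.shape)
    direction /= np.linalg.norm(direction)
    offset = rng.uniform(-1.0, 1.0)
    return feature + offset * direction, direction  # shift the anchor too

def line_distance(line_a, line_b):
    """Distance between closest points of two affine lines (least squares)."""
    (p, u), (q, v) = line_a, line_b
    # Minimize ||(p + s*u) - (q + t*v)||, i.e. solve s*u - t*v ~= q - p.
    A = np.stack([u, -v], axis=1)
    s, t = np.linalg.lstsq(A, q - p, rcond=None)[0]
    return np.linalg.norm((p + s * u) - (q + t * v))

f1 = rng.standard_normal(128)
f2 = f1 + 0.05 * rng.standard_normal(128)  # a close match
print(line_distance(lift(f1), lift(f2)))   # small for matching features
```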
arXiv Detail & Related papers (2020-06-11T17:29:48Z) - InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2× (or up to 0.85 bits) over the non-obfuscated counterparts.
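The "bits" in that result are naturally read as the entropy of an attribute classifier's posterior. A minimal sketch of that measurement, with made-up probabilities:

```python
# Entropy (in bits) of a classifier's posterior before/after obfuscation.
import math

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

before = [0.95, 0.05]        # classifier confident about the attribute
after = [0.55, 0.45]         # after obfuscation: close to a coin flip
print(entropy_bits(before))  # ~0.29 bits
print(entropy_bits(after))   # ~0.99 bits -> uncertainty increased
```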
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.