Explaining models relating objects and privacy
- URL: http://arxiv.org/abs/2405.01646v1
- Date: Thu, 2 May 2024 18:06:48 GMT
- Title: Explaining models relating objects and privacy
- Authors: Alessio Xompero, Myriam Bontonou, Jean-Michel Arbona, Emmanouil Benetos, Andrea Cavallaro
- Abstract summary: We evaluate privacy models that use objects extracted from an image to determine why the image is predicted as private.
We show that the presence of the person category and its cardinality are the main factors in the privacy decision.
- Score: 33.78605193864911
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Accurately predicting whether an image is private before sharing it online is difficult due to the vast variety of content and the subjective nature of privacy itself. In this paper, we evaluate privacy models that use objects extracted from an image to determine why the image is predicted as private. To explain the decisions of these models, we use feature attribution to identify and quantify which objects (and which of their features) are more relevant to privacy classification with respect to a reference input (i.e., no objects localised in an image) predicted as public. We show that the presence of the person category and its cardinality are the main factors in the privacy decision. Therefore, these models mostly fail to identify private images depicting documents with sensitive data, vehicle ownership, and internet activity, or public images with people (e.g., an outdoor concert or people walking in a public space next to a famous landmark). As baselines for future benchmarks, we also devise two strategies based on person presence and cardinality that achieve classification performance comparable to that of the privacy models.
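As a rough illustration of the attribution setup described in the abstract, the sketch below applies Integrated Gradients to a toy privacy classifier over per-category object counts, using an all-zeros count vector (no objects localised) as the reference input. The 80-category vocabulary, the tiny MLP, and the PyTorch/Captum tooling are assumptions made for the example, not the models evaluated in the paper.

```python
# Minimal sketch (assumed setup, not the paper's exact models): attribute a
# "private" prediction to per-category object counts, relative to a reference
# input with no objects localised (an all-zeros count vector).
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

NUM_CATEGORIES = 80  # assumed object vocabulary size (e.g. COCO); index 0 = "person"

# Toy stand-in for a privacy classifier over object counts: 2 logits (public, private).
model = nn.Sequential(
    nn.Linear(NUM_CATEGORIES, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)
model.eval()

# One image described by its object counts, e.g. 3 persons and 1 chair.
counts = torch.zeros(1, NUM_CATEGORIES)
counts[0, 0] = 3.0   # person cardinality
counts[0, 56] = 1.0  # chair (index chosen arbitrarily for the example)

baseline = torch.zeros_like(counts)  # reference: no objects localised, predicted public

ig = IntegratedGradients(model)
attributions = ig.attribute(counts, baselines=baseline, target=1)  # target=1: "private" logit

# Rank categories by their contribution to the "private" decision.
top = torch.topk(attributions.abs().squeeze(0), k=5)
print(top.indices.tolist(), top.values.tolist())
```

The person-presence and person-cardinality baselines mentioned in the abstract then amount to one-line rules on the same count vector, e.g. predicting private whenever the person count is non-zero or exceeds a chosen threshold.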
Related papers
- Differential Privacy Overview and Fundamental Techniques [63.0409690498569]
This chapter is meant to be part of the book "Differential Privacy in Artificial Intelligence: From Theory to Practice".
It starts by illustrating various attempts to protect data privacy, emphasizing where and why they failed.
It then defines the key actors, tasks, and scopes that make up the domain of privacy-preserving data analysis.
arXiv Detail & Related papers (2024-11-07T13:52:11Z)
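For context on the guarantee this chapter builds around, the standard (ε, δ)-differential-privacy condition (a textbook statement, not a quote from the chapter) is:

```latex
% (epsilon, delta)-differential privacy: for any datasets D, D' differing in
% one individual's record, and any measurable set S of outputs of mechanism M,
\Pr\left[ M(D) \in S \right] \;\le\; e^{\varepsilon} \, \Pr\left[ M(D') \in S \right] + \delta
```

Smaller ε and δ mean the output distribution of M changes less when a single record changes, which is the formal sense in which an analysis is privacy-preserving.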
- Enhancing User-Centric Privacy Protection: An Interactive Framework through Diffusion Models and Machine Unlearning [54.30994558765057]
The study pioneers a comprehensive privacy protection framework that safeguards image data privacy concurrently during data sharing and model publication.
We propose an interactive image privacy protection framework that utilizes generative machine learning models to modify image information at the attribute level.
Within this framework, we instantiate two modules: a differential privacy diffusion model for protecting attribute information in images and a feature unlearning algorithm for efficient updates of the trained model on the revised image dataset.
arXiv Detail & Related papers (2024-09-05T07:55:55Z)
- Private Attribute Inference from Images with Vision-Language Models [2.9373912230684565]
Vision-language models (VLMs) are capable of understanding both images and text.
We evaluate 7 state-of-the-art VLMs, finding that they can infer various personal attributes at up to 77.6% accuracy.
We observe that accuracy scales with the general capabilities of the models, implying that future models can be misused as stronger inferential adversaries.
arXiv Detail & Related papers (2024-04-16T14:42:49Z)
- SHAN: Object-Level Privacy Detection via Inference on Scene Heterogeneous Graph [5.050631286347773]
Privacy object detection aims to accurately locate private objects in images.
Existing methods suffer from serious deficiencies in accuracy, generalization, and interpretability.
We propose SHAN (Scene Heterogeneous graph Attention Network), a model that constructs a scene heterogeneous graph from an image.
arXiv Detail & Related papers (2024-03-14T08:32:14Z)
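The SHAN entry above is built around constructing a scene heterogeneous graph from an image; the snippet below is a loose sketch of that idea, in which the node types, edge types, attribute names, and the use of networkx are illustrative assumptions rather than the model's actual implementation.

```python
# Loose sketch of a scene heterogeneous graph built from object detections.
# Node types, edge types, and attributes are illustrative assumptions.
import networkx as nx

detections = [  # hypothetical detector output: (category, confidence, bbox)
    ("person", 0.97, (12, 30, 110, 220)),
    ("person", 0.91, (150, 28, 240, 215)),
    ("laptop", 0.88, (90, 160, 170, 210)),
]

g = nx.MultiDiGraph()
g.add_node("scene", node_type="scene", label="office")  # scene-level node

for i, (cat, conf, bbox) in enumerate(detections):
    obj_id = f"obj_{i}"
    g.add_node(obj_id, node_type="object", category=cat, confidence=conf, bbox=bbox)
    g.add_edge("scene", obj_id, edge_type="contains")  # scene -> object
    # object -> object edges for co-occurrence within the same image
    for j in range(i):
        g.add_edge(f"obj_{j}", obj_id, edge_type="co_occurs")

print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```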
- Only My Model On My Data: A Privacy Preserving Approach Protecting one Model and Deceiving Unauthorized Black-Box Models [11.59117790048892]
This study tackles an unexplored practical privacy preservation use case by generating human-perceivable images that maintain accurate inference by an authorized model.
Our results show that the generated images can successfully maintain the accuracy of a protected model and degrade the average accuracy of the unauthorized black-box models to 11.97%, 6.63%, and 55.51% on ImageNet, Celeba-HQ, and AffectNet datasets, respectively.
arXiv Detail & Related papers (2024-02-14T17:11:52Z)
- Human-interpretable and deep features for image privacy classification [32.253391125106674]
We discuss suitable features for image privacy classification and propose eight privacy-specific and human-interpretable features.
These features increase the performance of deep learning models and, on their own, improve the image representation for privacy classification compared with much higher dimensional deep features.
arXiv Detail & Related papers (2023-10-30T14:39:43Z)
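A minimal sketch of the feature-combination idea in the entry above: concatenate a few interpretable, privacy-specific features with a (here randomly generated, stand-in) deep feature vector and fit a simple classifier. The feature names, dimensions, and scikit-learn pipeline are assumptions for illustration only.

```python
# Sketch: combine hand-crafted, interpretable features with deep features
# for a binary private/public classifier. All values are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Hypothetical interpretable features per image, e.g.
# [num_people, has_face, has_document_text, skin_exposure, is_indoor, ...]
interpretable = rng.random((n, 8))

# Stand-in for a high-dimensional deep feature (e.g. a CNN embedding).
deep = rng.normal(size=(n, 512))

X = np.concatenate([interpretable, deep], axis=1)
y = (interpretable[:, 0] > 0.5).astype(int)  # synthetic labels for the demo

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```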
- Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z)
- Content-based Graph Privacy Advisor [38.733077459065704]
We present an image privacy classifier that uses scene information and object cardinality as cues for the prediction of image privacy.
Our Graph Privacy Advisor (GPA) model simplifies a state-of-the-art graph model and improves its performance.
arXiv Detail & Related papers (2022-10-20T11:12:42Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
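Read against the (ε, δ) condition sketched earlier, the per-attribute guarantee described in this entry can be paraphrased (loosely, not in the paper's exact formalism) as requiring the bound only for datasets that differ in a single attribute of one record:

```latex
% Per-attribute sketch: D and D' differ only in the j-th attribute of one
% individual's record; the per-attribute parameter epsilon_j can be much
% smaller than the parameter needed when the entire record may change.
\Pr\left[ M(D) \in S \right] \;\le\; e^{\varepsilon_j} \, \Pr\left[ M(D') \in S \right]
```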
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2× (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)