SocialGuard: An Adversarial Example Based Privacy-Preserving Technique
for Social Images
- URL: http://arxiv.org/abs/2011.13560v1
- Date: Fri, 27 Nov 2020 05:12:47 GMT
- Authors: Mingfu Xue, Shichang Sun, Zhiyu Wu, Can He, Jian Wang, Weiqiang Liu
- Abstract summary: We propose a novel adversarial example based privacy-preserving technique for social images against object-detector-based privacy stealing.
We use two metrics, privacy-preserving success rate and privacy leakage rate, to evaluate the effectiveness of the proposed method.
The privacy-preserving success rates of the proposed method on the MS-COCO and PASCAL VOC 2007 datasets are as high as 96.1% and 99.3%, respectively.
- Score: 6.321399006735314
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The popularity of various social platforms has prompted more people to share
their routine photos online. However, undesirable privacy leakages occur due to
such online photo sharing behaviors. Advanced deep neural network (DNN) based
object detectors can easily steal users' personal information exposed in shared
photos. In this paper, we propose a novel adversarial example based
privacy-preserving technique for social images against object-detector-based
privacy stealing. Specifically, we develop an Object Disappearance Algorithm to
craft two kinds of adversarial social images. One hides all objects in a
social image from being detected by an object detector, and the other causes
customized sensitive objects to be misclassified by the object detector. The
Object Disappearance Algorithm constructs a perturbation on a clean social
image. After the perturbation is injected, the social image easily fools the
object detector, while its visual quality is not degraded.
We use two metrics, privacy-preserving success rate and privacy leakage rate,
to evaluate the effectiveness of the proposed method. Experimental results show
that the proposed method can effectively protect the privacy of social images.
The privacy-preserving success rates of the proposed method on the MS-COCO and
PASCAL VOC 2007 datasets are as high as 96.1% and 99.3%, respectively, and the
privacy leakage rates on these two datasets are as low as 0.57% and 0.07%,
respectively. In addition, compared with existing image processing methods (low
brightness, noise, blur, mosaic and JPEG compression), the proposed method can
achieve much better performance in privacy protection and image visual quality
maintenance.
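The abstract describes the Object Disappearance Algorithm only at a high level, so the following is a minimal, hypothetical sketch of the underlying idea: iterative sign-gradient descent on a detector's objectness score, with the perturbation clipped to a small L-infinity budget so visual quality is preserved. The toy `objectness` scorer (a logistic model over pixels) and all parameter values are illustrative assumptions, not the paper's detector or algorithm.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def objectness(image, w, b):
    """Toy differentiable stand-in for a detector's confidence that an
    object is present (hypothetical; the paper attacks real DNN detectors)."""
    return sigmoid(sum(x * wi for x, wi in zip(image, w)) + b)

def craft_disappearance_perturbation(image, w, b, eps=0.15, alpha=0.01, steps=20):
    """Sign-gradient descent on the objectness score. The perturbation is
    clipped to an L-infinity ball of radius eps around the clean image so
    the adversarial image stays visually close to the original."""
    adv = list(image)
    for _ in range(steps):
        s = objectness(adv, w, b)
        for i, wi in enumerate(w):
            # d objectness / d pixel_i for the logistic scorer
            grad_i = s * (1.0 - s) * wi
            stepped = adv[i] - alpha * (1.0 if grad_i > 0 else -1.0)
            lo = max(image[i] - eps, 0.0)   # perturbation budget, lower side
            hi = min(image[i] + eps, 1.0)   # perturbation budget, upper side
            adv[i] = min(max(stepped, lo), hi)
    return adv

# Illustrative run: the clean image is "detected" (score > 0.5); after the
# bounded perturbation the object disappears (score < 0.5).
clean = [0.5] * 8
w, b = [1.0] * 8, -3.0
adv = craft_disappearance_perturbation(clean, w, b)
```

In this toy run the score falls from about 0.73 to about 0.45 while no pixel moves by more than 0.15; a real attack would backpropagate through the detector itself rather than a logistic scorer.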
Related papers
- Privacy-preserving Optics for Enhancing Protection in Face De-identification
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- SHAN: Object-Level Privacy Detection via Inference on Scene Heterogeneous Graph
Privacy object detection aims to accurately locate private objects in images.
Existing methods suffer from serious deficiencies in accuracy, generalization, and interpretability.
We propose SHAN, a Scene Heterogeneous graph Attention Network, a model that constructs a scene heterogeneous graph from an image.
arXiv Detail & Related papers (2024-03-14T08:32:14Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Privacy-Preserving Face Recognition with Learnable Privacy Budgets in Frequency Domain
This paper proposes a privacy-preserving face recognition method using differential privacy in the frequency domain.
Our method performs very well with several classical face recognition test sets.
arXiv Detail & Related papers (2022-07-15T07:15:36Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection
We investigate face privacy protection from a technical standpoint based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- Privacy Enhancement for Cloud-Based Few-Shot Learning
We study privacy enhancement for few-shot learning in an untrusted environment, e.g., the cloud.
We propose a method that learns privacy-preserved representations through a joint loss.
The empirical results show how the privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
arXiv Detail & Related papers (2022-05-10T18:48:13Z)
- Differentially Private Imaging via Latent Space Manipulation
We present a novel approach for image obfuscation by manipulating latent spaces of an unconditionally trained generative model.
This is the first approach to image privacy that satisfies $\varepsilon$-differential privacy for the person.
arXiv Detail & Related papers (2021-03-08T17:32:08Z)
- FoggySight: A Scheme for Facial Lookup Privacy
We propose and evaluate a solution that applies lessons learned from the adversarial examples literature to modify facial photos in a privacy-preserving manner before they are uploaded to social media.
FoggySight's core feature is a community protection strategy where users acting as protectors of privacy for others upload decoy photos generated by adversarial machine learning algorithms.
We explore different settings for this scheme and find that it does enable protection of facial privacy -- including against a facial recognition service with unknown internals.
arXiv Detail & Related papers (2020-12-15T19:57:18Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find that our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by $6.2\times$ (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
- Towards Face Encryption by Generating Adversarial Identity Masks
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.