Modeling Deep Learning Based Privacy Attacks on Physical Mail
- URL: http://arxiv.org/abs/2012.11803v2
- Date: Thu, 25 Mar 2021 21:02:54 GMT
- Title: Modeling Deep Learning Based Privacy Attacks on Physical Mail
- Authors: Bingyao Huang and Ruyi Lian and Dimitris Samaras and Haibin Ling
- Abstract summary: Mail privacy protection aims to prevent unauthorized access to hidden content within an envelope.
We show that with a well-designed deep learning model, the hidden content may be largely recovered without opening the envelope.
- Score: 89.3344470606211
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mail privacy protection aims to prevent unauthorized access to hidden content
within an envelope, since normal paper envelopes are not as safe as we think. In
this paper, for the first time, we show that with a well-designed deep learning
model, the hidden content may be largely recovered without opening the
envelope. We start by modeling deep learning-based privacy attacks on physical
mail content as learning the mapping from the camera-captured envelope front
face image to the hidden content. We then explicitly model this mapping as a
combination of perspective transformation, image dehazing, and denoising using a
deep convolutional neural network, named Neural-STE (See-Through-Envelope). We
show experimentally that hidden content details, such as texture and image
structure, can be clearly recovered. Finally, our formulation and model allow
us to design envelopes that can counter deep learning-based privacy attacks on
physical mail.
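As a rough illustration of the formulation above, the sketch below decomposes the attack into a learned geometric warp followed by a restoration CNN. This is a hypothetical PyTorch sketch, not the authors' Neural-STE: the affine warp (a stand-in for a full perspective transformation), layer sizes, and the single residual block are all illustrative assumptions.

```python
# Hypothetical sketch of the formulation above: map the camera-captured
# envelope front to the hidden content via a learned perspective warp
# followed by a dehazing/denoising CNN. NOT the authors' Neural-STE.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerspectiveWarp(nn.Module):
    """Predicts a 2x3 affine transform (simplified stand-in for a full
    homography) from the input and resamples the image accordingly."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(3, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 6),
        )
        # initialize the predicted transform to the identity
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

class DehazeDenoise(nn.Module):
    """Residual CNN standing in for the dehazing + denoising stages."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        return torch.clamp(x + self.body(x), 0.0, 1.0)  # residual cleanup

class SeeThroughSketch(nn.Module):
    """Warp the envelope image toward the hidden page's frame, then restore."""
    def __init__(self):
        super().__init__()
        self.warp = PerspectiveWarp()
        self.restore = DehazeDenoise()

    def forward(self, envelope_img):
        return self.restore(self.warp(envelope_img))

model = SeeThroughSketch()
recovered = model(torch.rand(1, 3, 256, 256))  # dummy envelope photo
print(recovered.shape)  # torch.Size([1, 3, 256, 256])
```

In the paper's terms, the warp aligns the envelope image with the hidden page before restoration; the real Neural-STE parameterizes these stages differently and is presumably trained on captured envelope/content pairs, as the mapping-learning framing above implies.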
Related papers
- PriPHiT: Privacy-Preserving Hierarchical Training of Deep Neural Networks [44.0097014096626]
We propose a method to perform the training phase of a deep learning model on both an edge device and a cloud server.
The proposed privacy-preserving method uses adversarial early exits to suppress the sensitive content at the edge and transmits the task-relevant information to the cloud.
arXiv Detail & Related papers (2024-08-09T14:33:34Z)
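The adversarial early-exit idea in the PriPHiT entry above can be caricatured as follows: the edge encoder is optimized to keep task-relevant information while an adversarial head is prevented from reading a sensitive attribute. This is a minimal hypothetical sketch; module sizes, the loss weighting, and the alternating updates are assumptions, not the paper's method.

```python
# Hypothetical sketch of adversarial suppression at the edge, assuming a toy
# encoder, a 10-class task, and a binary sensitive attribute; NOT PriPHiT's
# actual architecture or early-exit placement.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())  # edge side
task_head = nn.Linear(8, 10)   # early exit: task-relevant signal sent onward
adv_head = nn.Linear(8, 2)     # simulated adversary probing a sensitive attribute
ce = nn.CrossEntropyLoss()

opt_edge = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adv_head.parameters(), lr=1e-3)

x = torch.rand(16, 3, 32, 32)            # dummy batch on the edge device
y_task = torch.randint(0, 10, (16,))     # task labels
y_priv = torch.randint(0, 2, (16,))      # sensitive-attribute labels

# Step 1: the adversary learns to read the sensitive attribute from features.
opt_adv.zero_grad()
ce(adv_head(encoder(x).detach()), y_priv).backward()
opt_adv.step()

# Step 2: the edge learns the task while making the adversary's job harder,
# so the features transmitted to the cloud carry less sensitive content.
feat = encoder(x)
loss = ce(task_head(feat), y_task) - 0.5 * ce(adv_head(feat), y_priv)
opt_edge.zero_grad()
loss.backward()
opt_edge.step()
```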
- LDP-Feat: Image Features with Local Differential Privacy [10.306943706927006]
We propose two novel inversion attacks to show that it is possible to recover the original image features from embeddings.
We propose the first method to privatize image features via local differential privacy, which, unlike prior approaches, provides a guaranteed bound for privacy leakage.
arXiv Detail & Related papers (2023-08-22T06:28:55Z)
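The guaranteed leakage bound in the LDP-Feat entry above rests on local differential privacy. The sketch below shows the textbook Laplace mechanism applied to a feature vector; LDP-Feat's actual construction differs, and the clipping bound and epsilon here are assumptions.

```python
# Minimal sketch of epsilon-LDP for a feature vector via the Laplace
# mechanism; illustrative only, not LDP-Feat's construction.
import numpy as np

def privatize(feature: np.ndarray, epsilon: float, bound: float = 1.0) -> np.ndarray:
    """Clip each coordinate to [-bound, bound], then add Laplace noise.

    After clipping, any two feature vectors differ by at most
    2 * bound * dim in l1 norm, so Laplace noise with scale
    sensitivity / epsilon yields epsilon-LDP for the whole vector.
    """
    dim = feature.size
    clipped = np.clip(feature, -bound, bound)
    scale = 2.0 * bound * dim / epsilon
    return clipped + np.random.laplace(0.0, scale, size=feature.shape)

desc = np.random.randn(128).astype(np.float32)   # e.g. a SIFT-like descriptor
noisy = privatize(desc, epsilon=8.0)
# distortion grows as epsilon shrinks: stronger privacy, weaker utility
print(float(np.linalg.norm(noisy - np.clip(desc, -1, 1))))
```

The privacy/utility trade-off the summary alludes to is visible directly in the noise scale: it is inversely proportional to epsilon.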
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In this framework, an Invertible Neural Network (INN) processes the input image along with its pre-obfuscated form and generates a privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
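The reversibility in the PRO-Face S entry above comes from using an invertible network. A toy additive coupling layer (RealNVP-style) shows why such models are exactly invertible; this is not the paper's secure-flow architecture.

```python
# Toy additive coupling layer: exactly invertible by construction, so an
# obfuscated output can be mapped back to its input. Illustrative only.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim // 2))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        return torch.cat([x1, x2 + self.net(x1)], dim=-1)   # obfuscate

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        return torch.cat([y1, y2 - self.net(y1)], dim=-1)   # recover exactly

layer = AdditiveCoupling(dim=8)
x = torch.randn(2, 8)
assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-6)
```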
- Generative Model-Based Attack on Learnable Image Encryption for Privacy-Preserving Deep Learning [14.505867475659276]
We propose a novel generative model-based attack on learnable image encryption methods proposed for privacy-preserving deep learning.
We use two state-of-the-art generative models: a StyleGAN-based model and a latent diffusion-based one.
Results show that images reconstructed by the proposed method have perceptual similarities to plain images.
arXiv Detail & Related papers (2023-03-09T05:00:17Z)
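The generative attack summarized above can be illustrated, in heavily simplified form, as fitting a reconstruction model on (encrypted, plain) pairs. The toy pixel-shuffle "encryption" and linear decoder below are assumptions for illustration; the paper itself leverages StyleGAN and latent-diffusion priors rather than this paired regression.

```python
# Deliberately simplified reconstruction attack: learn to undo a toy
# learnable "encryption" from paired data. Illustrative assumption only.
import torch
import torch.nn as nn

idx = torch.randperm(32 * 32)            # toy cipher: fixed pixel shuffle
def encrypt(img):                        # img: (B, 3, 32, 32)
    return img.flatten(2)[:, :, idx].view_as(img)

decoder = nn.Linear(32 * 32, 32 * 32)    # attacker's reconstruction model
opt = torch.optim.Adam(decoder.parameters(), lr=1e-2)

for _ in range(300):                     # paired reconstruction attack
    plain = torch.rand(64, 3, 32, 32)    # stand-in for leaked image pairs
    rec = decoder(encrypt(plain).flatten(2)).view_as(plain)
    loss = (rec - plain).abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(loss.item())   # steadily decreases: the toy cipher leaks structure
```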
- On the Design of Privacy-Aware Cameras: a Study on Deep Neural Networks [0.7646713951724011]
This paper studies the effect of camera distortions on data protection.
We build a privacy-aware camera whose images do not allow personal information, such as license plate numbers, to be extracted.
At the same time, we ensure that useful non-sensitive data can still be extracted from the distorted images.
arXiv Detail & Related papers (2022-08-24T08:45:31Z)
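The privacy-aware camera entry above relies on optical distortion destroying fine detail while keeping coarse structure usable. A sketch simulating this with a strong Gaussian blur; the kernel size and sigma are arbitrary assumptions, not the paper's optics.

```python
# Simulate a privacy-preserving optical distortion with defocus blur:
# fine detail (where plate characters live) is attenuated, coarse
# low-frequency structure survives. Illustrative stand-in only.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

frame = torch.rand(3, 240, 320)                 # dummy camera frame
protected = TF.gaussian_blur(frame, kernel_size=21, sigma=6.0)

# High-frequency detail is strongly changed by the blur:
print((frame - protected).abs().mean())
# Coarse structure (local averages) is nearly unchanged:
coarse = lambda x: F.avg_pool2d(x.unsqueeze(0), kernel_size=16)
print((coarse(frame) - coarse(protected)).abs().mean())
```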
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technology standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- Privacy-Preserving Image Features via Adversarial Affine Subspace Embeddings [72.68801373979943]
Many computer vision systems require users to upload image features to the cloud for processing and storage.
We propose a new privacy-preserving feature representation.
Compared to the original features, our approach makes it significantly more difficult for an adversary to recover private information.
arXiv Detail & Related papers (2020-06-11T17:29:48Z)
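The affine-subspace embedding above can be sketched as follows: instead of a raw descriptor, store an affine subspace that passes through it and through decoy points, so recovering the true feature is ambiguous while point-to-subspace distances still support matching. Dimensions and the decoy choice are illustrative assumptions.

```python
# Minimal sketch of hiding a descriptor inside an affine subspace that
# also contains decoys; illustrative only, not the paper's construction.
import numpy as np

def embed(desc: np.ndarray, decoys: np.ndarray):
    """Return (origin, orthonormal basis) of the affine subspace through
    `desc` and each row of `decoys`."""
    dirs = decoys - desc                 # directions spanning the subspace
    basis, _ = np.linalg.qr(dirs.T)      # orthonormal basis, shape (D, k)
    # NOTE: a real system would re-parameterize so the stored origin is
    # not the true descriptor itself.
    return desc, basis

def point_to_subspace_dist(q, origin, basis) -> float:
    r = q - origin
    return float(np.linalg.norm(r - basis @ (basis.T @ r)))  # drop in-plane part

rng = np.random.default_rng(0)
desc = rng.standard_normal(128)
decoys = rng.standard_normal((2, 128))   # e.g. descriptors of other content
origin, basis = embed(desc, decoys)

print(point_to_subspace_dist(desc, origin, basis))       # ~0: true feature fits
print(point_to_subspace_dist(decoys[0], origin, basis))  # ~0: so do decoys
print(point_to_subspace_dist(rng.standard_normal(128), origin, basis))  # large
```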
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images while increasing uncertainty by 6.2× (up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
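The iterative mask generation in the TIP-IM entry above belongs to the family of projected-gradient adversarial attacks. The sketch below crafts an l_inf-bounded mask that pushes a surrogate face embedding away from the true identity; the surrogate network, the untargeted objective, and the hyperparameters are assumptions, not TIP-IM's actual (targeted) optimization.

```python
# Hedged PGD-style sketch of an adversarial identity mask; illustrative
# only, not TIP-IM's targeted method.
import torch
import torch.nn as nn
import torch.nn.functional as F

embedder = nn.Sequential(                       # stand-in FR feature extractor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64))

face = torch.rand(1, 3, 112, 112)
with torch.no_grad():
    true_emb = F.normalize(embedder(face), dim=1)

eps, alpha = 8 / 255, 1 / 255                   # l_inf budget and step size
mask = torch.zeros_like(face, requires_grad=True)
for _ in range(10):                             # PGD-style iterations
    emb = F.normalize(embedder(face + mask), dim=1)
    loss = F.cosine_similarity(emb, true_emb).mean()   # similarity to self
    loss.backward()
    with torch.no_grad():
        mask -= alpha * mask.grad.sign()        # push the identity away
        mask.clamp_(-eps, eps)                  # keep the mask near-invisible
        mask.grad.zero_()

protected = (face + mask).clamp(0, 1)           # image with identity mask
```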
This list is automatically generated from the titles and abstracts of the papers in this site.