Privacy-Preserving Image Sharing via Sparsifying Layers on Convolutional
Groups
- URL: http://arxiv.org/abs/2002.01469v1
- Date: Tue, 4 Feb 2020 18:54:52 GMT
- Title: Privacy-Preserving Image Sharing via Sparsifying Layers on Convolutional
Groups
- Authors: Sohrab Ferdowsi, Behrooz Razeghi, Taras Holotyak, Flavio P. Calmon,
Slava Voloshynovskiy
- Abstract summary: We propose a practical framework to address the problem of privacy-aware image sharing in large-scale setups.
We encode images such that, on the one hand, their representations can be stored in the public domain without the huge cost of privacy protection.
On the other hand, authorized users are provided with very compact keys that can easily be kept secure.
These keys can be used to disambiguate and faithfully reconstruct the corresponding access-granted images.
- Score: 11.955557264002204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a practical framework to address the problem of privacy-aware
image sharing in large-scale setups. We argue that, while compactness is always
desired at scale, this need becomes more severe when the privacy-sensitive
content must additionally be protected. We therefore encode images such that,
on the one hand, their representations can be stored in the public domain
without the huge cost of privacy protection, yet remain ambiguated and hence
leak no discernible content from the images, unless a combinatorially expensive
guessing mechanism is available to the attacker. On the other hand, authorized
users are provided with very compact keys that can easily be kept secure. These
keys can be used to disambiguate and faithfully reconstruct the corresponding
access-granted images. We achieve this with a convolutional autoencoder of our
design, in which feature maps are passed independently through sparsifying
transformations, yielding multiple compact codes, each responsible for
reconstructing different attributes of the image. The framework is tested on a
large-scale database of images, with a public implementation available.
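To make the encode/disambiguate split concrete, the following is a minimal NumPy sketch of the core idea as described in the abstract: a feature map is sparsified so that only sign-ambiguated magnitudes are stored publicly, while a compact key (the support positions and signs) enables faithful reconstruction. This is an illustrative toy, not the authors' actual architecture; the convolutional encoder/decoder, training, and all function names here are assumptions for exposition.

```python
import numpy as np

def sparsify(feature_map, k):
    """Keep the k largest-magnitude coefficients of a feature map.

    Returns a public code (sign-ambiguated magnitudes, safe to store in the
    open) and a compact secret key (support indices and signs) that an
    authorized user needs for faithful reconstruction.
    """
    flat = feature_map.ravel()
    support = np.argsort(np.abs(flat))[-k:]      # indices of top-k coefficients
    key = {"support": support, "signs": np.sign(flat[support])}
    public_code = np.abs(flat[support])          # signs stripped: ambiguated
    return public_code, key

def reconstruct(public_code, key, shape):
    """Authorized reconstruction: reinsert signs and positions from the key."""
    flat = np.zeros(int(np.prod(shape)))
    flat[key["support"]] = key["signs"] * public_code
    return flat.reshape(shape)

rng = np.random.default_rng(0)
fmap = rng.normal(size=(8, 8))                   # stand-in for one feature map
code, key = sparsify(fmap, k=16)
rec = reconstruct(code, key, fmap.shape)
```

Without the key, an attacker holding only `code` must guess the sign pattern and support, which is the combinatorially expensive search the abstract alludes to (already 2^16 sign combinations in this toy example).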
Related papers
- Enhancing Privacy-Utility Trade-offs to Mitigate Memorization in Diffusion Models [62.979954692036685]
We introduce PRSS, which refines the classifier-free guidance approach in diffusion models by integrating prompt re-anchoring and semantic prompt search.
Our approach consistently improves the privacy-utility trade-off, establishing a new state-of-the-art.
arXiv Detail & Related papers (2025-04-25T02:51:23Z)
- GuardDoor: Safeguarding Against Malicious Diffusion Editing via Protective Backdoors [8.261182037130407]
GuardDoor is a novel and robust protection mechanism that fosters collaboration between image owners and model providers.
Our method demonstrates enhanced robustness against image preprocessing operations and is scalable for large-scale deployment.
arXiv Detail & Related papers (2025-03-05T22:21:44Z)
- Catch You Everything Everywhere: Guarding Textual Inversion via Concept Watermarking [67.60174799881597]
We propose the novel concept watermarking, where watermark information is embedded into the target concept and then extracted from generated images based on the watermarked concept.
In practice, the concept owner can upload his concept with different watermarks (i.e., serial numbers) to the platform, and the platform assigns different serial numbers to different users for subsequent tracing and forensics.
arXiv Detail & Related papers (2023-09-12T03:33:13Z)
- LDP-Feat: Image Features with Local Differential Privacy [10.306943706927006]
We propose two novel inversion attacks to show that it is possible to recover the original image features from embeddings.
We propose the first method to privatize image features via local differential privacy, which, unlike prior approaches, provides a guaranteed bound for privacy leakage.
arXiv Detail & Related papers (2023-08-22T06:28:55Z)
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form, and to generate a privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
- Human-imperceptible, Machine-recognizable Images [76.01951148048603]
A major conflict is exposed for software engineers: between developing better AI systems and keeping their distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm in which images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm can ensure the encrypted images have become human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z)
- ConfounderGAN: Protecting Image Data Privacy with Causal Confounder [85.6757153033139]
We propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners.
Experiments are conducted in six image classification datasets, consisting of three natural object datasets and three medical datasets.
arXiv Detail & Related papers (2022-12-04T08:49:14Z)
- Privacy-Preserving Image Classification Using ConvMixer with Adaptive Permutation Matrix [13.890279045382623]
We propose a privacy-preserving image classification method using encrypted images under the use of the ConvMixer structure.
Large images cannot be applied to the conventional method with an adaptation network.
We propose a novel method that allows block-wise scrambled images to be applied to ConvMixer for both training and testing without an adaptation network.
arXiv Detail & Related papers (2022-08-04T09:55:31Z)
- Privacy Preserving Image Registration [4.709526996577762]
We formulate the problem of image registration under a privacy preserving regime, where images are assumed to be confidential and cannot be disclosed in clear.
We extend classical registration paradigms to account for advanced cryptographic tools, such as secure multi-party computation and homomorphic encryption.
Our results show that privacy preserving image registration is feasible and can be adopted in sensitive medical imaging applications.
arXiv Detail & Related papers (2022-05-17T14:00:58Z)
- Privacy-Preserving Image Features via Adversarial Affine Subspace Embeddings [72.68801373979943]
Many computer vision systems require users to upload image features to the cloud for processing and storage.
We propose a new privacy-preserving feature representation.
Compared to the original features, our approach makes it significantly more difficult for an adversary to recover private information.
arXiv Detail & Related papers (2020-06-11T17:29:48Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2$\times$ (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.