On the Design of Privacy-Aware Cameras: a Study on Deep Neural Networks
- URL: http://arxiv.org/abs/2208.11372v1
- Date: Wed, 24 Aug 2022 08:45:31 GMT
- Title: On the Design of Privacy-Aware Cameras: a Study on Deep Neural Networks
- Authors: Marcela Carvalho, Oussama Ennaffi, Sylvain Chateau, Samy Ait Bachir
- Abstract summary: This paper studies the effect of camera distortions on data protection.
We build a privacy-aware camera that cannot extract personal information such as license plate numbers.
At the same time, we ensure that useful non-sensitive data can still be extracted from distorted images.
- Score: 0.7646713951724011
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In spite of the legal advances in personal data protection, the issue of
private data being misused by unauthorized entities is still of utmost
importance. To prevent this, Privacy by Design is often proposed as a solution
for data protection. In this paper, the effect of camera distortions is studied
using Deep Learning techniques commonly used to extract sensitive data. To do
so, we simulate out-of-focus images corresponding to a realistic conventional
camera with fixed focal length, aperture, and focus, as well as grayscale
images coming from a monochrome camera. We then prove, through an experimental
study, that we can build a privacy-aware camera that cannot extract personal
information such as license plate numbers. At the same time, we ensure that
useful non-sensitive data can still be extracted from distorted images. Code is
available at https://github.com/upciti/privacy-by-design-semseg .
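The distortion pipeline described in the abstract (defocus blur from a fixed-focus camera combined with monochrome capture) can be sketched as below. The disk-shaped point-spread function and the blur radius are illustrative assumptions, not the paper's exact optical model:

```python
import numpy as np
from scipy.ndimage import convolve

def disk_psf(radius):
    """Uniform disk point-spread function approximating defocus blur."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    psf = (x**2 + y**2 <= radius**2).astype(float)
    return psf / psf.sum()

def simulate_privacy_camera(rgb, blur_radius=5):
    """Simulate a privacy-aware capture: monochrome sensor + out-of-focus blur."""
    # Grayscale conversion with ITU-R BT.601 luma weights (monochrome camera).
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    # Out-of-focus blur: convolve with a disk PSF of the chosen radius.
    return convolve(gray, disk_psf(blur_radius), mode="reflect")

img = np.random.rand(64, 64, 3)          # stand-in for a captured RGB frame
distorted = simulate_privacy_camera(img)
```

Fine detail such as license plate characters is destroyed by the blur, while coarse structure useful for tasks like semantic segmentation survives.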
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data, allowing non-sensitive spatio-temporal regions to be defined without DP application, or combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- You Can Use But Cannot Recognize: Preserving Visual Privacy in Deep Neural Networks [29.03438707988713]
Existing privacy protection techniques are unable to efficiently protect such data.
We propose a novel privacy-preserving framework VisualMixer.
VisualMixer shuffles pixels within the selected regions, both in the spatial domain and in the chromatic channel space, without injecting noise.
Experiments on real-world datasets demonstrate that VisualMixer can effectively preserve the visual privacy with negligible accuracy loss.
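A minimal sketch of region-wise pixel shuffling in the spirit of VisualMixer follows; the square region and the shuffling policy here are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def shuffle_region(img, top, left, size, rng=None):
    """Randomly permute the pixels of one square region, spatially and
    across channels, without adding noise (values are only reordered)."""
    rng = rng or np.random.default_rng(0)
    region = img[top:top + size, left:left + size].copy()
    h, w, c = region.shape
    flat = region.reshape(h * w, c)
    # Spatial shuffle: permute pixel positions within the region.
    flat = flat[rng.permutation(h * w)]
    # Chromatic shuffle: independently permute channel values per pixel.
    for i in range(h * w):
        flat[i] = flat[i][rng.permutation(c)]
    out = img.copy()
    out[top:top + size, left:left + size] = flat.reshape(h, w, c)
    return out

img = np.random.rand(32, 32, 3)
mixed = shuffle_region(img, top=8, left=8, size=16)
```

Because pixels are only reordered, the region's value statistics are preserved, which is what lets downstream models remain usable while the content becomes visually unrecognizable.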
arXiv Detail & Related papers (2024-04-05T13:49:27Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Human-Imperceptible Identification with Learnable Lensless Imaging [12.571999330435801]
We propose a learnable lensless imaging framework that protects visual privacy while maintaining recognition accuracy.
To make captured images imperceptible to humans, we designed several loss functions based on total variation, invertibility, and the restricted isometry property.
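Of the loss terms mentioned, total variation is the simplest to illustrate: it sums the absolute differences between adjacent pixels, so flat, featureless captures score low. This is a generic anisotropic formulation, not necessarily the paper's exact loss:

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation of a 2-D image: summed absolute
    differences between vertically and horizontally adjacent pixels."""
    dv = np.abs(np.diff(img, axis=0)).sum()
    dh = np.abs(np.diff(img, axis=1)).sum()
    return dv + dh

flat = np.full((8, 8), 0.5)       # constant image: zero variation
noisy = np.random.rand(8, 8)      # random image: high variation
```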
arXiv Detail & Related papers (2023-02-04T22:58:46Z)
- ConfounderGAN: Protecting Image Data Privacy with Causal Confounder [85.6757153033139]
We propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners.
Experiments are conducted in six image classification datasets, consisting of three natural object datasets and three medical datasets.
arXiv Detail & Related papers (2022-12-04T08:49:14Z)
- Selective manipulation of disentangled representations for privacy-aware facial image processing [5.612561387428165]
We propose an edge-based filtering stage that removes privacy-sensitive attributes before the sensor data are transmitted to the cloud.
We use state-of-the-art image manipulation techniques that leverage disentangled representations to achieve privacy filtering.
arXiv Detail & Related papers (2022-08-26T12:47:18Z)
- Privacy Enhancement for Cloud-Based Few-Shot Learning [4.1579007112499315]
We study the privacy enhancement for the few-shot learning in an untrusted environment, e.g., the cloud.
We propose a method that learns privacy-preserved representation through the joint loss.
The empirical results show how privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
arXiv Detail & Related papers (2022-05-10T18:48:13Z)
- Deep Learning Approach Protecting Privacy in Camera-Based Critical Applications [57.93313928219855]
We propose a deep learning approach towards protecting privacy in camera-based systems.
Our technique distinguishes between salient (visually prominent) and non-salient objects, based on the intuition that the latter are unlikely to be needed by the application.
arXiv Detail & Related papers (2021-10-04T19:16:27Z)
- Anti-Neuron Watermarking: Protecting Personal Data Against Unauthorized Neural Model Training [50.308254937851814]
Personal data (e.g. images) could be exploited inappropriately to train deep neural network models without authorization.
By embedding a watermarking signature into user images via a specialized linear color transformation, neural models trained on those images become imprinted with the signature.
This is the first work to protect users' personal data from unauthorized usage in neural network training.
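The signature embedding can be sketched as a per-pixel linear map over the RGB channels; the near-identity key matrix below is a hypothetical example, not the paper's parameterization:

```python
import numpy as np

def embed_signature(rgb, key_matrix):
    """Apply a user-specific linear color transformation to an image.
    Models trained on such images inherit a detectable imprint of the key."""
    h, w, _ = rgb.shape
    out = rgb.reshape(-1, 3) @ key_matrix.T
    return np.clip(out, 0.0, 1.0).reshape(h, w, 3)

# Hypothetical near-identity key: a subtle, invertible color shift.
key = np.eye(3) + 0.05 * np.array([[0, 1, 0],
                                   [0, 0, 1],
                                   [1, 0, 0]])
img = np.random.rand(16, 16, 3)
marked = embed_signature(img, key)
```

Keeping the key close to the identity makes the watermark visually subtle while remaining a consistent statistical bias that a trained model can absorb.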
arXiv Detail & Related papers (2021-09-18T22:10:37Z)
- Modeling Deep Learning Based Privacy Attacks on Physical Mail [89.3344470606211]
Mail privacy protection aims to prevent unauthorized access to hidden content within an envelope.
We show that, with a well-designed deep learning model, the hidden content can be largely recovered without opening the envelope.
arXiv Detail & Related papers (2020-12-22T02:54:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.