Privacy-Preserving Feature Coding for Machines
- URL: http://arxiv.org/abs/2210.00727v1
- Date: Mon, 3 Oct 2022 06:13:43 GMT
- Title: Privacy-Preserving Feature Coding for Machines
- Authors: Bardia Azizian and Ivan V. Baji\'c
- Abstract summary: Automated machine vision pipelines do not need the exact visual content to perform their tasks.
We present a novel method to create a privacy-preserving latent representation of an image.
- Score: 32.057586389777185
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated machine vision pipelines do not need the exact visual content to
perform their tasks. Therefore, there is a potential to remove private
information from the data without significantly affecting the machine vision
accuracy. We present a novel method to create a privacy-preserving latent
representation of an image that could be used by a downstream machine vision
model. This latent representation is constructed using adversarial training to
prevent accurate reconstruction of the input while preserving the task
accuracy. Specifically, we split a Deep Neural Network (DNN) model and insert
an autoencoder whose purpose is to both reduce the dimensionality as well as
remove information relevant to input reconstruction while minimizing the impact
on task accuracy. Our results show that input reconstruction ability can be
reduced by about 0.8 dB at equivalent task accuracy, with degradation
concentrated near the edges, which is important for privacy. At the same time,
30% bit savings are achieved compared to coding the features directly.
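The split-DNN idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the feature dimensions, the random weights, and the adversary's reconstruction loss are all placeholder assumptions.

```python
import numpy as np

# Hypothetical setup: a DNN is split, and an autoencoder is inserted at the
# split point to reduce feature dimensionality and strip reconstruction cues.
rng = np.random.default_rng(0)
features = rng.standard_normal((1, 64))      # features from the DNN front-end

# Autoencoder bottleneck (random weights, for illustration only)
W_enc = rng.standard_normal((64, 16)) * 0.1  # compress 64-d features to 16-d latent
W_dec = rng.standard_normal((16, 64)) * 0.1

latent = features @ W_enc                    # privacy-preserving latent code
recovered = latent @ W_dec                   # features handed to the back-end

# Adversarial training objective (sketch): keep task fidelity high while
# maximizing an adversary's input-reconstruction error.
task_loss = np.mean((recovered - features) ** 2)  # proxy for task fidelity
adv_recon_loss = 1.0                              # adversary's reconstruction MSE (placeholder)
lam = 0.5                                         # privacy/utility trade-off weight (assumed)
total_loss = task_loss - lam * adv_recon_loss
```

In the actual method, `adv_recon_loss` would come from a reconstruction network trained jointly against the encoder; here it is a fixed placeholder to show how the two terms trade off.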
Related papers
- You Can Use But Cannot Recognize: Preserving Visual Privacy in Deep Neural Networks [29.03438707988713]
Existing privacy protection techniques are unable to efficiently protect such data.
We propose a novel privacy-preserving framework VisualMixer.
VisualMixer shuffles pixels in the spatial domain and in the chromatic channel space within designated regions, without injecting any noise.
Experiments on real-world datasets demonstrate that VisualMixer can effectively preserve the visual privacy with negligible accuracy loss.
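As a rough illustration of the shuffling idea (a simplified sketch, not VisualMixer's actual algorithm), the following code permutes pixels within each region and permutes channels, without adding noise; the image size and region size are assumptions.

```python
import numpy as np

# Toy region-wise pixel shuffling: values are only rearranged, never perturbed.
rng = np.random.default_rng(42)
img = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)  # hypothetical image

def shuffle_region(block, rng):
    """Spatially permute the pixels of one block, then permute its channels."""
    h, w, c = block.shape
    flat = block.reshape(h * w, c)
    flat = flat[rng.permutation(h * w)]       # spatial shuffle within the region
    flat = flat[:, rng.permutation(c)]        # chromatic-channel shuffle
    return flat.reshape(h, w, c)

out = img.copy()
R = 4  # region (block) size, an assumption for this sketch
for y in range(0, img.shape[0], R):
    for x in range(0, img.shape[1], R):
        out[y:y + R, x:x + R] = shuffle_region(img[y:y + R, x:x + R], rng)
```

Because the operation is a pure permutation, the multiset of pixel values is preserved, which is one reason such schemes can avoid the accuracy loss that noise injection causes.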
arXiv Detail & Related papers (2024-04-05T13:49:27Z)
- Impact of Disentanglement on Pruning Neural Networks [16.077795265753917]
Disentangled latent representations produced by variational autoencoder (VAE) networks are a promising approach for achieving model compression.
We make use of the Beta-VAE framework combined with a standard criterion for pruning to investigate the impact of forcing the network to learn disentangled representations.
arXiv Detail & Related papers (2023-07-19T13:58:01Z)
- Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images while, crucially, better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z)
- MAPLE: Masked Pseudo-Labeling autoEncoder for Semi-supervised Point Cloud Action Recognition [160.49403075559158]
We propose a Masked Pseudo-Labeling autoEncoder (MAPLE) framework for point cloud action recognition.
In particular, we design a novel and efficient Decoupled spatial-temporal TransFormer (DestFormer) as the backbone of MAPLE.
MAPLE achieves superior results on three public benchmarks and outperforms the state-of-the-art method by 8.08% accuracy on the MSR-Action3D dataset.
arXiv Detail & Related papers (2022-09-01T12:32:40Z)
- Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
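One common way to penalize feature redundancy in a bottleneck, sketched below, is to suppress off-diagonal entries of the bottleneck covariance matrix so that units carry decorrelated information; the paper's exact penalty may differ, and the batch size and latent width here are assumptions.

```python
import numpy as np

# Hypothetical batch of bottleneck codes: 128 samples, 8 latent units.
rng = np.random.default_rng(1)
Z = rng.standard_normal((128, 8))

Zc = Z - Z.mean(axis=0)                      # center each latent unit
cov = (Zc.T @ Zc) / (Z.shape[0] - 1)         # 8x8 feature covariance matrix
off_diag = cov - np.diag(np.diag(cov))       # keep only cross-unit covariances
redundancy_penalty = np.sum(off_diag ** 2)   # term added to the autoencoder loss
```

Minimizing this term during training pushes the bottleneck units toward pairwise decorrelation, one concrete way to make an "explicit" redundancy penalty.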
arXiv Detail & Related papers (2022-02-09T18:48:02Z)
- Self-Compression in Bayesian Neural Networks [0.9176056742068814]
We propose a new insight into network compression through the Bayesian framework.
We show that Bayesian neural networks automatically discover redundancy in model parameters, thus enabling self-compression.
Our experimental results show that the network architecture can be successfully compressed by deleting parameters identified by the network itself.
arXiv Detail & Related papers (2021-11-10T21:19:40Z)
- Towards a Safety Case for Hardware Fault Tolerance in Convolutional Neural Networks Using Activation Range Supervision [1.7968112116887602]
Convolutional neural networks (CNNs) have become an established part of numerous safety-critical computer vision applications.
We build a prototypical safety case for CNNs by demonstrating that range supervision represents a highly reliable fault detector.
We explore novel, non-uniform range restriction methods that effectively suppress the probability of silent data corruptions and uncorrectable errors.
arXiv Detail & Related papers (2021-08-16T11:13:55Z)
- PLADE-Net: Towards Pixel-Level Accuracy for Self-Supervised Single-View Depth Estimation with Neural Positional Encoding and Distilled Matting Loss [49.66736599668501]
We propose a self-supervised single-view pixel-level accurate depth estimation network, called PLADE-Net.
Our method shows unprecedented accuracy levels, exceeding 95% in terms of the $\delta_1$ metric on the KITTI dataset.
arXiv Detail & Related papers (2021-03-12T15:54:46Z)
- Mixed-Privacy Forgetting in Deep Networks [114.3840147070712]
We show that the influence of a subset of the training samples can be removed from the weights of a network trained on large-scale image classification tasks.
Inspired by real-world applications of forgetting techniques, we introduce a novel notion of forgetting in mixed-privacy setting.
We show that our method allows forgetting without having to trade off the model accuracy.
arXiv Detail & Related papers (2020-12-24T19:34:56Z)
- Circumventing Outliers of AutoAugment with Knowledge Distillation [102.25991455094832]
AutoAugment has been a powerful algorithm that improves the accuracy of many vision tasks.
This paper delves deep into the working mechanism, and reveals that AutoAugment may remove part of discriminative information from the training image.
To relieve the inaccuracy of supervision, we make use of knowledge distillation, which uses the output of a teacher model to guide network training.
arXiv Detail & Related papers (2020-03-25T11:51:41Z)
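The teacher-guided training idea above can be illustrated with a standard distillation loss; the logits and temperature below are invented for illustration and are not taken from the paper.

```python
import numpy as np

def softmax(x, T=1.0):
    """Temperature-scaled softmax."""
    z = x / T
    z = z - z.max()                # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

teacher_logits = np.array([2.0, 1.0, 0.1])  # hypothetical teacher outputs
student_logits = np.array([1.5, 0.8, 0.3])  # hypothetical student outputs
T = 4.0                                     # distillation temperature (assumed)

p_t = softmax(teacher_logits, T)            # softened targets from the teacher
p_s = softmax(student_logits, T)

# KL(teacher || student), scaled by T^2 as in standard knowledge distillation;
# the student is trained to match the teacher's soft distribution rather than
# potentially misleading hard labels on heavily augmented images.
kd_loss = (T ** 2) * np.sum(p_t * (np.log(p_t) - np.log(p_s)))
```

The temperature softens both distributions so that the teacher's relative confidence across classes, not just its top prediction, supervises the student.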
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.