DISCO: Dynamic and Invariant Sensitive Channel Obfuscation for deep
neural networks
- URL: http://arxiv.org/abs/2012.11025v1
- Date: Sun, 20 Dec 2020 21:15:13 GMT
- Title: DISCO: Dynamic and Invariant Sensitive Channel Obfuscation for deep
neural networks
- Authors: Abhishek Singh, Ayush Chopra, Vivek Sharma, Ethan Garza, Emily Zhang,
Praneeth Vepakomma, Ramesh Raskar
- Abstract summary: We propose DISCO, which learns a dynamic and data-driven pruning filter to selectively obfuscate sensitive information in the feature space.
We also release an evaluation benchmark dataset of 1 million sensitive representations to encourage rigorous exploration of novel attack schemes.
- Score: 19.307753802569156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent deep learning models have shown remarkable performance in image
classification. While these deep learning systems are getting closer to
practical deployment, the common assumption made about data is that it does not
carry any sensitive information. This assumption may not hold in many
practical cases, especially in domains where an individual's personal
information is involved, such as healthcare and facial recognition systems. We
posit that selectively removing features in the latent space can protect
sensitive information and provide a better privacy-utility trade-off.
Consequently, we propose DISCO, which learns a dynamic and data-driven pruning
filter to selectively obfuscate sensitive information in the feature space. We
propose diverse attack schemes for sensitive inputs and attributes and
demonstrate the effectiveness of DISCO against state-of-the-art methods through
quantitative and qualitative evaluation. Finally, we also release an evaluation
benchmark dataset of 1 million sensitive representations to encourage rigorous
exploration of novel attack schemes.
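To make the mechanism concrete, here is a minimal PyTorch sketch of a dynamic, data-driven channel-pruning filter in the spirit of the abstract; the gating network, the top-k keep rule, and the straight-through relaxation are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ChannelObfuscator(nn.Module):
    """Sketch of a dynamic, data-driven channel pruning filter.

    A small gating network scores each channel of an intermediate
    activation; the lowest-scoring channels (assumed sensitive) are
    zeroed before the representation leaves the client.
    """

    def __init__(self, num_channels: int, keep_ratio: float = 0.5):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # (B, C, 1, 1) per-channel summary
            nn.Flatten(),              # (B, C)
            nn.Linear(num_channels, num_channels),
        )
        self.keep_ratio = keep_ratio

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) intermediate activation
        scores = self.scorer(feats)                       # (B, C)
        k = max(1, int(self.keep_ratio * feats.size(1)))
        keep = scores.topk(k, dim=1).indices
        mask = torch.zeros_like(scores).scatter_(1, keep, 1.0)
        # Straight-through trick: hard mask forward, soft gradient backward.
        mask = mask + scores.sigmoid() - scores.sigmoid().detach()
        return feats * mask[:, :, None, None]
```

In a split-inference setting, such a module would sit between a client-side encoder and a server-side head; training the scorer jointly against a reconstruction adversary is what would push the mask toward dropping sensitive channels while preserving task utility.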
Related papers
- MaSS: Multi-attribute Selective Suppression for Utility-preserving Data Transformation from an Information-theoretic Perspective [10.009178591853058]
We propose a formal information-theoretic definition of the utility-preserving privacy protection problem.
We design a learnable, data-driven transformation framework capable of suppressing sensitive attributes in target datasets.
Results demonstrate the effectiveness and generalizability of our method under various configurations.
arXiv Detail & Related papers (2024-05-23T18:35:46Z)
- Leveraging Internal Representations of Model for Magnetic Image Classification [0.13654846342364302]
This paper introduces a potentially groundbreaking paradigm for machine learning model training, specifically designed for scenarios with only a single magnetic image and its corresponding label image available.
We harness the capabilities of Deep Learning to generate concise yet informative samples, aiming to overcome data scarcity.
arXiv Detail & Related papers (2024-03-11T15:15:50Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks (a one-step-ahead predictor sketch appears after this list).
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- ALLSH: Active Learning Guided by Local Sensitivity and Hardness [98.61023158378407]
We propose to retrieve unlabeled samples with a local sensitivity and hardness-aware acquisition function.
Our method achieves consistent gains over the commonly used active learning strategies in various classification tasks.
arXiv Detail & Related papers (2022-05-10T15:39:11Z)
- Learning to Censor by Noisy Sampling [17.06138741660826]
This work aims to protect sensitive information when learning from point clouds.
We focus on preserving utility for perception tasks while mitigating attribute leakage attacks.
The key motivating insight is to leverage the localized saliency of perception tasks on point clouds to provide good privacy-utility trade-offs.
arXiv Detail & Related papers (2022-03-23T04:50:50Z)
- Reinforcement Learning on Encrypted Data [58.39270571778521]
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in the presence of non-deterministic encryption, but performance collapses in more complex environments.
arXiv Detail & Related papers (2021-09-16T21:59:37Z)
- On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets (a classical randomized-response baseline is sketched after this list).
arXiv Detail & Related papers (2021-02-11T15:09:06Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Capturing scattered discriminative information using a deep architecture in acoustic scene classification [49.86640645460706]
In this study, we investigate various methods to capture discriminative information and simultaneously mitigate the overfitting problem.
We adopt a max feature map method to replace conventional non-linear activations in a deep neural network (a minimal MFM sketch appears after this list).
Two data augmentation methods and two deep architecture modules are further explored to reduce overfitting and sustain the system's discriminative power.
arXiv Detail & Related papers (2020-07-09T08:32:06Z)
- Learning Cross-domain Generalizable Features by Representation Disentanglement [11.74643883335152]
Deep learning models exhibit limited generalizability across different domains.
We propose Mutual-Information-based Disentangled Neural Networks (MIDNet) to extract generalizable features that enable transferring knowledge to unseen categorical features in target domains.
We demonstrate our method on handwritten digits datasets and a fetal ultrasound dataset for image classification tasks.
arXiv Detail & Related papers (2020-02-29T17:53:16Z)
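As referenced above for "Deep networks for system identification": a minimal sketch of a one-step-ahead predictor, here a hypothetical NARX-style feedforward network that regresses the next output from the last few inputs and outputs. The lag orders and layer sizes are arbitrary illustrative choices, not taken from the survey.

```python
import torch
import torch.nn as nn

class NARXPredictor(nn.Module):
    """One-step-ahead predictor: y[t] ~ f(y[t-1..t-na], u[t-1..t-nb])."""

    def __init__(self, na: int = 4, nb: int = 4, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(na + nb, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, y_past: torch.Tensor, u_past: torch.Tensor) -> torch.Tensor:
        # y_past: (B, na) past outputs, u_past: (B, nb) past inputs
        return self.net(torch.cat([y_past, u_past], dim=1))  # (B, 1)
```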
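For "On Deep Learning with Label Differential Privacy", the classical randomized-response mechanism below is one standard baseline that satisfies epsilon-label-DP; the paper's actual algorithm may differ.

```python
import numpy as np

def randomized_response(label, num_classes, epsilon, rng=None):
    """Return a noisy label satisfying epsilon-label differential privacy.

    Keeps the true label with probability e^eps / (e^eps + K - 1);
    otherwise draws uniformly from the K - 1 remaining labels.
    """
    rng = rng or np.random.default_rng()
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + num_classes - 1)
    if rng.random() < p_keep:
        return label
    others = [c for c in range(num_classes) if c != label]
    return int(rng.choice(others))
```

Training then proceeds on the noisy labels, so the privacy guarantee covers the labels only, not the input features.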
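And for the acoustic-scene paper's max feature map (MFM) activation: MFM splits the channel dimension in half and takes an elementwise maximum, halving the channel count while acting as a learned feature selector. A minimal PyTorch version:

```python
import torch

def max_feature_map(x: torch.Tensor) -> torch.Tensor:
    # Split channels into two halves and keep the elementwise maximum,
    # e.g. (B, 2C, H, W) -> (B, C, H, W). Requires an even channel count.
    a, b = x.chunk(2, dim=1)
    return torch.max(a, b)
```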