On the Importance of Encrypting Deep Features
- URL: http://arxiv.org/abs/2108.07147v1
- Date: Mon, 16 Aug 2021 15:22:33 GMT
- Title: On the Importance of Encrypting Deep Features
- Authors: Xingyang Ni, Heikki Huttunen, Esa Rahtu
- Abstract summary: We analyze model inversion attacks with only two assumptions: feature vectors of user data are known, and a black-box API for inference is provided.
Experiments have been conducted on state-of-the-art models in person re-identification, and two attack scenarios (i.e., recognizing auxiliary attributes and reconstructing user data) are investigated.
Results show that an adversary could successfully infer sensitive information even under severe constraints.
- Score: 15.340540198612823
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we analyze model inversion attacks with only two assumptions:
feature vectors of user data are known, and a black-box API for inference is
provided. On the one hand, limitations of existing studies are addressed by
opting for a more practical setting. Experiments have been conducted on
state-of-the-art models in person re-identification, and two attack scenarios
(i.e., recognizing auxiliary attributes and reconstructing user data) are
investigated. Results show that an adversary could successfully infer sensitive
information even under severe constraints. On the other hand, it is advisable
to encrypt feature vectors, especially for a machine learning model in
production. As an alternative to traditional encryption methods such as AES, a
simple yet effective method termed ShuffleBits is presented. More specifically,
the binary sequence of each floating-point number gets shuffled. Deployed using
the one-time pad scheme, it serves as a plug-and-play module that is applicable
to any neural network, and the resulting model directly outputs deep features
in encrypted form. Source code is publicly available at
https://github.com/nixingyang/ShuffleBits.
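To make the mechanism concrete, below is a minimal illustrative sketch of the idea as stated in the abstract, not the authors' released code (see the repository above for that): each float32 feature is reinterpreted as a 32-bit word whose bit positions are permuted under a per-element key, and decryption applies the inverse permutation. All names (sample_keys, shuffle_bits, unshuffle_bits) are our own.

```python
import numpy as np

N_BITS = 32  # bits in a float32 word

def sample_keys(rng, n_features):
    # One independent bit permutation per vector element, used once,
    # in the spirit of the one-time-pad deployment the abstract mentions.
    return np.stack([rng.permutation(N_BITS) for _ in range(n_features)])

def shuffle_bits(features, keys):
    # Reinterpret each float32 as a 32-bit word and permute its bits.
    raw = np.ascontiguousarray(features, dtype=np.float32).view(np.uint32)
    out = np.zeros_like(raw)
    for i, perm in enumerate(keys):
        word = int(raw[i])
        shuffled = 0
        for src, dst in enumerate(perm):
            shuffled |= ((word >> src) & 1) << int(dst)
        out[i] = shuffled
    return out.view(np.float32)  # encrypted features, same shape and dtype

def unshuffle_bits(encrypted, keys):
    # Decryption applies the inverse permutation to every element.
    return shuffle_bits(encrypted, np.argsort(keys, axis=1))

rng = np.random.default_rng(0)                 # stands in for key material
x = rng.standard_normal(8).astype(np.float32)  # a toy deep-feature vector
keys = sample_keys(rng, x.size)
enc = shuffle_bits(x, keys)
assert np.array_equal(unshuffle_bits(enc, keys), x)
```

Since only the bit layout is permuted, decryption is exact; an encrypted value may land on a NaN or infinity bit pattern, so encrypted features should be treated as opaque bytes rather than as numbers. Note also that in the paper the network itself outputs features already in encrypted form, whereas this sketch encrypts them after the fact.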
Related papers
- FLUE: Federated Learning with Un-Encrypted model weights [0.0]
Federated learning enables devices to collaboratively train a shared model while keeping training data locally stored.
Recent research emphasizes using encrypted model parameters during training.
This paper introduces a novel federated learning algorithm, leveraging coded local gradients without encryption.
arXiv Detail & Related papers (2024-07-26T14:04:57Z)
- Fact Checking Beyond Training Set [64.88575826304024]
We show that the retriever-reader suffers from performance deterioration when it is trained on labeled data from one domain and used in another domain.
We propose an adversarial algorithm to make the retriever component robust against distribution shift.
We then construct eight fact checking scenarios from these datasets, and compare our model to a set of strong baseline models.
arXiv Detail & Related papers (2024-03-27T15:15:14Z)
- Memorization for Good: Encryption with Autoregressive Language Models [8.645826579841692]
We propose SELM, the first symmetric encryption algorithm built on autoregressive language models.
We show that autoregressive LMs can encode arbitrary data into a compact real-valued vector (i.e., encryption) and then losslessly decode the vector back to the original message (i.e., decryption) via random subspace optimization and greedy decoding.
arXiv Detail & Related papers (2023-05-15T05:42:34Z)
- PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels [59.66777287810985]
We introduce information-theoretic scores for privacy and utility, which quantify the average performance of an unfaithful user.
We then theoretically characterize primitives in building families of encoding schemes that motivate the use of random deep neural networks.
arXiv Detail & Related papers (2023-03-31T18:03:53Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate attacks on graph properties, obfuscated features that contain information from both vectors are communicated instead.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Instance Attack: An Explanation-based Vulnerability Analysis Framework Against DNNs for Malware Detection [0.0]
We propose the notion of the instance-based attack.
Our scheme is interpretable, operates in black-box settings, and its results can be validated with domain knowledge.
arXiv Detail & Related papers (2022-09-06T12:41:20Z)
- Syfer: Neural Obfuscation for Private Data Release [58.490998583666276]
We develop Syfer, a neural obfuscation method to protect against re-identification attacks.
Syfer composes trained layers with random neural networks to encode the original data.
It maintains the ability to predict diagnoses from the encoded data.
arXiv Detail & Related papers (2022-01-28T20:32:04Z)
- Is Private Learning Possible with Instance Encoding? [68.84324434746765]
We study whether a non-private learning algorithm can be made private by relying on an instance-encoding mechanism.
We formalize both the notion of instance encoding and its privacy by providing two attack models.
arXiv Detail & Related papers (2020-11-10T18:55:20Z)
- Learning One Class Representations for Face Presentation Attack Detection using Multi-channel Convolutional Neural Networks [7.665392786787577]
Presentation attack detection (PAD) methods often fail to generalize to unseen attacks.
We propose a new framework for PAD using a one-class classifier, where the representation used is learned with a Multi-Channel Convolutional Neural Network (MCCNN).
A novel loss function is introduced, which forces the network to learn a compact embedding for the bonafide class while staying far from the representations of attacks.
The proposed framework introduces a novel approach to learn a robust PAD system from bonafide and available (known) attack classes.
arXiv Detail & Related papers (2020-07-22T14:19:33Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation-learning power of CNNs and learns better features for the fPAD task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
- Cryptotree: fast and accurate predictions on encrypted structured data [0.0]
Homomorphic Encryption (HE) is acknowledged for its ability to allow computation on encrypted data, where both the input and output are encrypted.
We propose Cryptotree, a framework that enables the use of Random Forests (RF), a much more powerful learning procedure than linear regression.
arXiv Detail & Related papers (2020-06-15T11:48:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.