Bi-CryptoNets: Leveraging Different-Level Privacy for Encrypted
Inference
- URL: http://arxiv.org/abs/2402.01296v1
- Date: Fri, 2 Feb 2024 10:35:05 GMT
- Title: Bi-CryptoNets: Leveraging Different-Level Privacy for Encrypted
Inference
- Authors: Man-Jie Yuan, Zheng Zou, Wei Gao
- Abstract summary: We decompose the input data into sensitive and insensitive segments according to importance and privacy.
The sensitive segment includes important and private information, such as human faces, and is protected with strong homomorphic encryption.
The insensitive segment contains background content, to which we add perturbations.
- Score: 4.754973569457509
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Privacy-preserving neural networks have attracted increasing attention in
recent years, and various algorithms have been developed to balance accuracy,
computational complexity, and information security from the cryptographic view.
This work takes a different view, focusing on the input data and the structure
of neural networks. We decompose the input data (e.g., images) into sensitive
and insensitive segments according to importance and privacy. The sensitive
segment includes important and private information, such as human faces, and we
apply strong homomorphic encryption to keep it secure, whereas the insensitive
segment contains background content, to which we add perturbations. We propose
bi-CryptoNets, i.e., networks with a plaintext branch and a ciphertext branch
that handle the two segments respectively, where the ciphertext branch can
utilize information from the plaintext branch through unidirectional
connections. We adopt knowledge distillation for our bi-CryptoNets by
transferring representations from a well-trained teacher neural network.
Empirical studies show the effectiveness of bi-CryptoNets and a decrease in
inference latency.
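The two-branch structure described in the abstract can be sketched in a few lines of plain Python. This is a minimal illustration, not the paper's implementation: the layer sizes, weights, and two-layer depth are hypothetical, and in the real system the ciphertext branch would run under a homomorphic encryption scheme rather than on raw floats.

```python
import random

random.seed(0)

def linear(x, w):
    # Toy dense layer: y_j = sum_i x_i * w[i][j] (no bias, no activation).
    return [sum(xi * wij for xi, wij in zip(x, col)) for col in zip(*w)]

def rand_weights(n_in, n_out):
    return [[random.uniform(-1, 1) for _ in range(n_out)] for _ in range(n_in)]

# Hypothetical sizes: each segment is flattened to 4 features, two layers deep.
W_plain = [rand_weights(4, 4), rand_weights(4, 4)]   # plaintext branch
W_cipher = [rand_weights(4, 4), rand_weights(4, 4)]  # ciphertext branch

def bi_cryptonets_forward(sensitive, insensitive):
    """Run both branches; information flows only plaintext -> ciphertext."""
    p, c = insensitive, sensitive  # ciphertext branch holds the encrypted segment
    for wp, wc in zip(W_plain, W_cipher):
        p = linear(p, wp)
        c = linear(c, wc)
        # Unidirectional connection: plaintext features are added into the
        # ciphertext branch (plaintext-ciphertext addition is cheap under HE);
        # nothing flows back, so the plaintext branch never touches ciphertexts.
        c = [ci + pi for ci, pi in zip(c, p)]
    return c  # fused representation; decrypted client-side in the real system

out = bi_cryptonets_forward([0.5, -0.2, 0.1, 0.9], [0.3, 0.3, -0.7, 0.0])
print(len(out))  # 4 fused features
```

The direction of the connection is the key design point: plaintext-to-ciphertext additions are inexpensive under most HE schemes, while the reverse would either leak encrypted data or force the whole network into ciphertext arithmetic.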
Related papers
- Cryptanalysis via Machine Learning Based Information Theoretic Metrics [58.96805474751668]
We propose two novel applications of machine learning (ML) algorithms to perform cryptanalysis on any cryptosystem.
These algorithms can be readily applied in an audit setting to evaluate the robustness of a cryptosystem.
We show that our classification model correctly identifies the encryption schemes that are not IND-CPA secure, such as DES, RSA, and AES ECB, with high accuracy.
arXiv Detail & Related papers (2025-01-25T04:53:36Z)
- Secure Semantic Communication With Homomorphic Encryption [52.5344514499035]
This paper explores the feasibility of applying homomorphic encryption to SemCom.
We propose a task-oriented SemCom scheme secured through homomorphic encryption.
arXiv Detail & Related papers (2025-01-17T13:26:14Z)
- Neural Networks Meet Elliptic Curve Cryptography: A Novel Approach to Secure Communication [0.8399688944263844]
The proposed approach explores the application of asymmetric cryptography within a neural network framework.
It employs a set of five distinct cryptographic keys to examine the efficacy and robustness of communication security against eavesdropping attempts.
arXiv Detail & Related papers (2024-07-11T19:34:16Z)
- Human-imperceptible, Machine-recognizable Images [76.01951148048603]
A major conflict is exposed for software engineers between developing better AI systems and keeping their distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm in which images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm ensures encrypted images become human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z)
- Deep Neural Networks for Encrypted Inference with TFHE [0.0]
Fully homomorphic encryption (FHE) is an encryption method that allows computation to be performed on encrypted data without decryption.
TFHE preserves the privacy of the users of online services that handle sensitive data, such as health data, biometrics, credit scores and other personal information.
We show how to construct Deep Neural Networks (DNNs) that are compatible with the constraints of TFHE, an FHE scheme that allows arbitrary depth computation circuits.
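To make the "compute on encrypted data" property concrete, here is a classic additively homomorphic scheme (Paillier) implemented with the standard library only. This is not TFHE, and the parameters below are insecurely tiny, chosen purely so the arithmetic is easy to follow; it only illustrates the homomorphic property that FHE schemes generalize to arbitrary circuits.

```python
import math
import random

random.seed(1)

# Toy Paillier keypair with insecurely small primes (illustration only).
p, q = 17, 19
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard choice of generator
lam = math.lcm(p - 1, q - 1)   # private key: lambda
mu = pow(lam, -1, n)           # private key: mu (valid because g = n + 1)

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

c1, c2 = encrypt(12), encrypt(30)
# Homomorphic property: multiplying ciphertexts adds the plaintexts.
c_sum = (c1 * c2) % n2
print(decrypt(c_sum))  # 42
```

The server computing `c1 * c2` never learns 12, 30, or 42; TFHE extends this idea from additions to arbitrary-depth Boolean circuits, which is what makes encrypted DNN inference possible.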
arXiv Detail & Related papers (2023-02-13T09:53:31Z)
- Application of Data Encryption in Chinese Named Entity Recognition [11.084360853065736]
We propose an encryption learning framework to address the problems of data leakage and inconvenient disclosure of sensitive data.
We introduce multiple encryption algorithms to encrypt training data in the named entity recognition task for the first time.
Experimental results show that the encryption methods achieve satisfactory performance.
arXiv Detail & Related papers (2022-08-31T04:20:37Z)
- SoK: Privacy-preserving Deep Learning with Homomorphic Encryption [2.9069679115858755]
With homomorphic encryption (HE), computation can be performed on encrypted data without revealing its content.
We take an in-depth look at approaches that combine neural networks with HE for privacy preservation.
We find numerous challenges to HE based privacy-preserving deep learning such as computational overhead, usability, and limitations posed by the encryption schemes.
arXiv Detail & Related papers (2021-12-23T22:03:27Z)
- EDLaaS; Fully Homomorphic Encryption Over Neural Network Graphs [7.195443855063635]
We use the 4th-generation Cheon, Kim, Kim and Song (CKKS) FHE scheme over fixed points, as provided by the Microsoft Simple Encrypted Arithmetic Library (MS-SEAL).
We find that FHE is not a panacea for all privacy preserving machine learning (PPML) problems, and that certain limitations still remain, such as model training.
We focus on convolutional neural networks (CNNs), fashion-MNIST, and levelled FHE operations.
arXiv Detail & Related papers (2021-10-26T12:43:35Z)
- Reinforcement Learning on Encrypted Data [58.39270571778521]
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in presence of non-deterministic encryption, but performance collapses in more complex environments.
arXiv Detail & Related papers (2021-09-16T21:59:37Z)
- NeuraCrypt: Hiding Private Health Data via Random Neural Networks for Public Training [64.54200987493573]
We propose NeuraCrypt, a private encoding scheme based on random deep neural networks.
NeuraCrypt encodes raw patient data using a randomly constructed neural network known only to the data-owner.
We show that NeuraCrypt achieves competitive accuracy to non-private baselines on a variety of x-ray tasks.
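The core idea — encoding raw data through a randomly initialized network whose weights act as the owner's secret — can be sketched in stdlib Python. The single random layer, the sizes, and the seed-as-key framing below are simplifying assumptions for illustration; NeuraCrypt itself stacks several such layers with additional components.

```python
import random

def make_encoder(seed: int, n_in: int, n_out: int):
    """Build a fixed random encoder; the seed plays the role of the owner's key."""
    rng = random.Random(seed)
    w = [[rng.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)]

    def encode(x):
        # One random linear layer + ReLU; the real scheme stacks several layers.
        return [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in w]

    return encode

owner_encode = make_encoder(seed=1234, n_in=4, n_out=8)  # secret seed

patient_record = [0.2, 1.5, -0.3, 0.8]
encoded = owner_encode(patient_record)   # this encoding is what gets released
print(len(encoded))  # 8

# Without the seed, a guessed encoder produces an unrelated code that almost
# surely differs from the owner's:
guess_encode = make_encoder(seed=9999, n_in=4, n_out=8)
print(encoded == guess_encode(patient_record))
```

A model can still be trained on the encoded features, while recovering the raw record requires inverting a network known only to the data owner.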
arXiv Detail & Related papers (2021-06-04T13:42:21Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.