Fighting COVID-19 in the Dark: Methodology for Improved Inference Using
Homomorphically Encrypted DNN
- URL: http://arxiv.org/abs/2111.03362v1
- Date: Fri, 5 Nov 2021 10:04:15 GMT
- Title: Fighting COVID-19 in the Dark: Methodology for Improved Inference Using
Homomorphically Encrypted DNN
- Authors: Moran Baruch, Lev Greenberg and Guy Moshkowich
- Abstract summary: Homomorphic encryption (HE) has been used as a method to enable analytics while addressing privacy concerns.
There are several challenges related to the use of HE, including size limitations and the lack of support for some operation types.
We propose a structured methodology to replace ReLU with a quadratic activation.
- Score: 3.1959970303072396
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Privacy-preserving deep neural network (DNN) inference is a necessity in
different regulated industries such as healthcare, finance, and retail.
Recently, homomorphic encryption (HE) has been used as a method to enable
analytics while addressing privacy concerns. HE enables secure predictions over
encrypted data. However, there are several challenges related to the use of HE,
including DNN size limitations and the lack of support for some operation
types. Most notably, the commonly used ReLU activation is not supported under
some HE schemes. We propose a structured methodology to replace ReLU with a
quadratic polynomial activation. To address the resulting accuracy degradation, we
use a pre-trained model to train another, HE-friendly model, using techniques
such as "trainable activation" functions and knowledge distillation. We
demonstrate our methodology on the AlexNet architecture, using the chest X-Ray
and CT datasets for COVID-19 detection. Our experiments show that by using our
approach, the gap in F1 score and accuracy between the models trained with
ReLU and the HE-friendly model narrows to a mere 1.1-5.3 percent degradation.
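As a rough illustration of this methodology, the PyTorch sketch below is our own and is not taken from the paper; the class and function names, the coefficient initializations, and the distillation hyperparameters T and alpha are illustrative assumptions. It shows how ReLU layers of a pre-trained network might be swapped for a trainable quadratic polynomial activation, and how a knowledge-distillation loss can let the original ReLU model act as a teacher for the HE-friendly student.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TrainableQuadratic(nn.Module):
    """HE-friendly activation a*x^2 + b*x + c with learnable coefficients (illustrative)."""
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(0.01))  # small quadratic term: starts near-linear
        self.b = nn.Parameter(torch.tensor(1.0))
        self.c = nn.Parameter(torch.tensor(0.0))

    def forward(self, x):
        return self.a * x * x + self.b * x + self.c

def replace_relu(module: nn.Module) -> nn.Module:
    """Recursively swap every nn.ReLU for a trainable quadratic activation."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, TrainableQuadratic())
        else:
            replace_relu(child)
    return module

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft-target guidance from the ReLU teacher with the hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

In this sketch the teacher would be the original ReLU AlexNet and the student an HE-friendly copy produced by replace_relu; initializing the quadratic coefficient near zero keeps the new activation close to the identity early in fine-tuning. The exact activation form, training schedule, and hyperparameters used by the authors are those described in the paper itself.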
Related papers
- Augmented Neural Fine-Tuning for Efficient Backdoor Purification [16.74156528484354]
Recent studies have revealed the vulnerability of deep neural networks (DNNs) to various backdoor attacks.
We propose Neural mask Fine-Tuning (NFT) with an aim to optimally re-organize the neuron activities.
NFT relaxes the trigger synthesis process and eliminates the requirement of the adversarial search module.
arXiv Detail & Related papers (2024-07-14T02:36:54Z)
- Protecting Deep Learning Model Copyrights with Adversarial Example-Free Reuse Detection [5.72647692625489]
Reuse and replication of deep neural networks (DNNs) can lead to copyright infringement and economic loss to the model owner.
Existing white-box testing-based approaches cannot address the common heterogeneous reuse case where the model architecture is changed.
We propose NFARD, a Neuron Functionality Analysis-based Reuse Detector, which only requires normal test samples to detect reuse relations.
arXiv Detail & Related papers (2024-07-04T12:21:59Z)
- Shielding the Unseen: Privacy Protection through Poisoning NeRF with Spatial Deformation [59.302770084115814]
We introduce an innovative method of safeguarding user privacy against the generative capabilities of Neural Radiance Fields (NeRF) models.
Our novel poisoning attack method induces changes to observed views that are imperceptible to the human eye, yet potent enough to disrupt NeRF's ability to accurately reconstruct a 3D scene.
We extensively test our approach on two common NeRF benchmark datasets consisting of 29 real-world scenes with high-quality images.
arXiv Detail & Related papers (2023-10-04T19:35:56Z)
- Privacy Preserving Federated Learning with Convolutional Variational Bottlenecks [2.1301560294088318]
Recent work has proposed to prevent gradient leakage without loss of model utility by incorporating a PRivacy EnhanCing mODulE (PRECODE) based on variational modeling.
We show that variational modeling introduces stochasticity into the gradients of PRECODE and the subsequent layers in a neural network.
We formulate an attack that disables the privacy-preserving effect of PRECODE by purposefully omitting these stochastic gradients during attack optimization.
arXiv Detail & Related papers (2023-09-08T16:23:25Z)
- Training Large Scale Polynomial CNNs for E2E Inference over Homomorphic Encryption [33.35896071292604]
Training large-scale CNNs that during inference can be run under Homomorphic Encryption (HE) is challenging.
We provide a novel training method for large CNNs such as ResNet-152 and ConvNeXt models.
arXiv Detail & Related papers (2023-04-26T20:41:37Z)
- Towards Practical Control of Singular Values of Convolutional Layers [65.25070864775793]
Convolutional neural networks (CNNs) are easy to train, but their essential properties, such as generalization error and adversarial robustness, are hard to control.
Recent research demonstrated that singular values of convolutional layers significantly affect such elusive properties.
We offer a principled approach to alleviating constraints of the prior art at the expense of an insignificant reduction in layer expressivity.
arXiv Detail & Related papers (2022-11-24T19:09:44Z)
- Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness [172.61581010141978]
Certifiable robustness is a desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios.
We propose a novel solution to strategically manipulate neurons, by "grafting" appropriate levels of linearity.
arXiv Detail & Related papers (2022-06-15T22:42:29Z)
- Is Neuron Coverage Needed to Make Person Detection More Robust? [3.395452700023097]
In this work, we apply coverage-guided testing (CGT) to the task of person detection in crowded scenes.
The proposed pipeline uses YOLOv3 for person detection and includes finding bugs via sampling and mutation.
We have found no evidence that the investigated coverage metrics can be advantageously used to improve robustness.
arXiv Detail & Related papers (2022-04-21T11:23:33Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method by improving the dice score by 1% for the pancreas and 2% for spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.