Securing Neural Networks with Knapsack Optimization
- URL: http://arxiv.org/abs/2304.10442v2
- Date: Fri, 29 Dec 2023 11:34:06 GMT
- Title: Securing Neural Networks with Knapsack Optimization
- Authors: Yakir Gorski, Amir Jevnisek, Shai Avidan
- Abstract summary: In this paper, we focus on ResNets, which serve as the backbone for many Computer Vision tasks.
We aim to reduce their non-linear components, specifically, the number of ReLUs.
We devise an algorithm to choose the optimal set of patch sizes through a novel reduction of the problem to the Knapsack Problem.
- Score: 12.998637003026273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: MLaaS Service Providers (SPs) holding a Neural Network would like to keep the
Neural Network weights secret. On the other hand, users wish to utilize the
SPs' Neural Network for inference without revealing their data. Multi-Party
Computation (MPC) offers a solution to achieve this. Computations in MPC
involve communication, as the parties send data back and forth. Non-linear
operations are usually the main bottleneck requiring the bulk of communication
bandwidth. In this paper, we focus on ResNets, which serve as the backbone for
many Computer Vision tasks, and we aim to reduce their non-linear components,
specifically, the number of ReLUs. Our key insight is that spatially close
pixels exhibit correlated ReLU responses. Building on this insight, we replace
the per-pixel ReLU operation with a ReLU operation per patch. We term this
approach 'Block-ReLU'. Since different layers in a Neural Network correspond to
different feature hierarchies, it makes sense to allow patch-size flexibility
for the various layers of the Neural Network. We devise an algorithm to choose
the optimal set of patch sizes through a novel reduction of the problem to the
Knapsack Problem. We demonstrate our approach in the semi-honest secure 3-party
setting for four problems: Classifying ImageNet using ResNet50 backbone,
classifying CIFAR100 using ResNet18 backbone, Semantic Segmentation of ADE20K
using MobileNetV2 backbone, and Semantic Segmentation of Pascal VOC 2012 using
ResNet50 backbone. Our approach achieves competitive performance compared to a
handful of competitors. Our source code is publicly available:
https://github.com/yg320/secure_inference.
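To make the Block-ReLU idea concrete, here is a minimal PyTorch sketch. It is not the authors' released implementation (see their repository above); the module name `BlockReLU` and the use of the patch mean as the shared sign test are illustrative assumptions. The essential point it shows: a single non-linear sign decision is computed per b x b patch and broadcast to all of the patch's pixels, so the number of expensive non-linear comparisons drops by roughly a factor of b^2.

```python
import torch
import torch.nn.functional as F

class BlockReLU(torch.nn.Module):
    """ReLU with one sign decision per b x b patch instead of per pixel.

    The patch mean decides the shared on/off mask (an illustrative choice;
    the method only requires that one decision be shared across the patch).
    """

    def __init__(self, patch_size: int):
        super().__init__()
        self.b = patch_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b = self.b
        if b == 1:  # degenerates to an ordinary per-pixel ReLU
            return F.relu(x)
        # One representative value per patch: the patch mean.
        means = F.avg_pool2d(x, kernel_size=b, stride=b, ceil_mode=True)
        # The only non-linearity: one sign test per patch.
        mask = (means > 0).float()
        # Broadcast the per-patch mask back to pixel resolution.
        mask = F.interpolate(mask, scale_factor=b, mode="nearest")
        mask = mask[..., : x.shape[-2], : x.shape[-1]]  # crop ceil padding
        return x * mask


# A 1x1 patch reproduces plain ReLU; a 4x4 patch shares one decision
# across 16 pixels, cutting the number of sign tests by 16x.
x = torch.randn(1, 64, 56, 56)
assert torch.equal(BlockReLU(1)(x), F.relu(x))
y = BlockReLU(4)(x)
```

In an MPC setting the sign test is the operation served by the costly secure-comparison protocol, so it is the count of these tests, not the cheap multiplications by the mask, that the patch-sharing trick reduces.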
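Choosing a patch size per layer under a global budget on non-linear operations can be phrased as a multiple-choice knapsack: exactly one candidate patch size must be picked for each layer, where each candidate carries a cost (its number of sign decisions, hence communication) and a value (an estimate of the accuracy retained). The sketch below is a generic dynamic-programming solver for that problem with made-up costs and values; the paper's actual reduction, granularity (layers vs. channels), and accuracy proxy may differ.

```python
from typing import List, Tuple

def multiple_choice_knapsack(
    groups: List[List[Tuple[int, float]]], budget: int
) -> Tuple[float, List[int]]:
    """Choose exactly one (cost, value) option per group so that the total
    cost stays within `budget` and the total value is maximized.
    Returns (best total value, index of the chosen option per group)."""
    NEG = float("-inf")
    dp = [0.0] + [NEG] * budget   # dp[c]: best value at total cost c
    picks = []                    # picks[g][c]: option taken for group g at cost c
    for options in groups:
        new_dp = [NEG] * (budget + 1)
        pick = [-1] * (budget + 1)
        for c, base in enumerate(dp):
            if base == NEG:
                continue
            for i, (cost, value) in enumerate(options):
                nc = c + cost
                if nc <= budget and base + value > new_dp[nc]:
                    new_dp[nc] = base + value
                    pick[nc] = i
        dp = new_dp
        picks.append(pick)
    best_c = max(range(budget + 1), key=lambda c: dp[c])
    if dp[best_c] == NEG:
        raise ValueError("budget too small to pick one option per group")
    # Backtrack the chosen option of every group.
    chosen, c = [], best_c
    for g in range(len(groups) - 1, -1, -1):
        i = picks[g][c]
        chosen.append(i)
        c -= groups[g][i][0]
    return dp[best_c], chosen[::-1]


# Toy example: three layers, candidate patch sizes 1x1 / 2x2 / 4x4 with
# hypothetical (sign-test cost, accuracy-proxy value) pairs per layer.
layers = [
    [(16, 1.00), (4, 0.90), (1, 0.70)],   # layer 0: sensitive to coarse patches
    [(16, 1.00), (4, 0.97), (1, 0.85)],   # layer 1
    [(16, 1.00), (4, 0.99), (1, 0.95)],   # layer 2: barely affected
]
value, patch_choice = multiple_choice_knapsack(layers, budget=24)
print(value, patch_choice)  # spends the budget where coarse patches hurt most
```

With this toy data the solver keeps the full per-pixel ReLU in the sensitive first layer and assigns 2x2 patches to the others, which matches the intuition that different feature hierarchies tolerate different patch sizes.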
Related papers
- An Exact Mapping From ReLU Networks to Spiking Neural Networks [3.1701886344065255]
We propose an exact mapping from a network with Rectified Linear Units (ReLUs) to an SNN that fires exactly one spike per neuron.
More generally, our work shows that an arbitrary deep ReLU network can be replaced by an energy-efficient single-spike neural network without any loss of performance.
arXiv Detail & Related papers (2022-12-23T18:31:09Z)
- Deep Learning without Shortcuts: Shaping the Kernel with Tailored Rectifiers [83.74380713308605]
We develop a new type of transformation that is fully compatible with a variant of ReLU, the Leaky ReLU.
We show in experiments that our method, which introduces negligible extra computational cost, achieves validation accuracies with deep vanilla networks that are competitive with ResNets.
arXiv Detail & Related papers (2022-03-15T17:49:08Z)
- Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient [62.660451283548724]
ResiNet is a reinforcement learning framework to discover resilient network topologies against various disasters and attacks.
We show that ResiNet achieves a near-optimal resilience gain on multiple graphs while balancing utility, outperforming existing approaches by a large margin.
arXiv Detail & Related papers (2021-10-18T06:14:28Z)
- Lite-HRNet: A Lightweight High-Resolution Network [97.17242913089464]
We present an efficient high-resolution network, Lite-HRNet, for human pose estimation.
We find that heavily-used pointwise (1x1) convolutions in shuffle blocks become the computational bottleneck.
We introduce a lightweight unit, conditional channel weighting, to replace costly pointwise (1x1) convolutions in shuffle blocks.
arXiv Detail & Related papers (2021-04-13T17:59:31Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of routing every input through the same fixed path, DG-Net aggregates features dynamically at each node, giving the network greater representational capacity.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We replace the conventional ReLU with a Bounded ReLU and find that the accuracy decline is due to activation quantization.
Our integer networks match the performance of the corresponding floating-point networks while using only 1/4 of the memory and running 2x faster on modern GPUs.
arXiv Detail & Related papers (2020-06-21T08:23:03Z)
- ARIANN: Low-Interaction Privacy-Preserving Deep Learning via Function Secret Sharing [2.6228228854413356]
AriaNN is a low-interaction privacy-preserving framework for private neural network training and inference on sensitive data.
We design primitives for the building blocks of neural networks such as ReLU, MaxPool and BatchNorm.
We implement our framework as an extension to support n-party private federated learning.
arXiv Detail & Related papers (2020-06-08T13:40:27Z)
- DRU-net: An Efficient Deep Convolutional Neural Network for Medical Image Segmentation [2.3574651879602215]
Residual networks (ResNets) and densely connected networks (DenseNets) have significantly improved the training efficiency and performance of deep convolutional neural networks (DCNNs).
We propose an efficient network architecture by considering advantages of both networks.
arXiv Detail & Related papers (2020-04-28T12:16:24Z)
- Improved Residual Networks for Image and Video Recognition [98.10703825716142]
Residual networks (ResNets) represent a powerful type of convolutional neural network (CNN) architecture.
We show consistent improvements in accuracy and learning convergence over the baseline.
Our proposed approach allows us to train extremely deep networks, while the baseline shows severe optimization issues.
arXiv Detail & Related papers (2020-04-10T11:09:50Z)
- Knapsack Pruning with Inner Distillation [11.04321604965426]
We propose a novel pruning method that optimizes the final accuracy of the pruned network.
We prune the network channels while maintaining the high-level structure of the network.
Our method leads to state-of-the-art pruning results on ImageNet, CIFAR-10 and CIFAR-100 using ResNet backbones.
arXiv Detail & Related papers (2020-02-19T16:04:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.