Deploying Convolutional Networks on Untrusted Platforms Using 2D
Holographic Reduced Representations
- URL: http://arxiv.org/abs/2206.05893v1
- Date: Mon, 13 Jun 2022 03:31:39 GMT
- Title: Deploying Convolutional Networks on Untrusted Platforms Using 2D
Holographic Reduced Representations
- Authors: Mohammad Mahmudul Alam, Edward Raff, Tim Oates, James Holt
- Abstract summary: By leveraging Holographic Reduced Representations (HRR), we create a neural network with a pseudo-encryption style defense that empirically shows robustness to attack.
- Score: 33.26156710843837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the computational cost of running inference for a neural network, the
need to deploy the inferential steps on a third party's compute environment or
hardware is common. If the third party is not fully trusted, it is desirable to
obfuscate the nature of the inputs and outputs, so that the third party cannot
easily determine what specific task is being performed. Provably secure
protocols for leveraging an untrusted party exist but are too computationally
demanding to run in practice. We instead explore a different strategy of fast,
heuristic security that we call Connectionist Symbolic Pseudo Secrets. By
leveraging Holographic Reduced Representations (HRR), we create a neural
network with a pseudo-encryption style defense that empirically shows
robustness to attack, even under threat models that unrealistically favor the
adversary.
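As a hedged illustration of the binding operation behind this defense, the NumPy sketch below binds an input with a random secret via 2D circular convolution (computed with FFTs) and inverts it with the secret's conjugate spectrum. The unit-magnitude spectrum projection and all names are assumptions chosen for exposition, not the authors' released code or full network.

```python
# Hedged sketch of 2D HRR binding/unbinding with NumPy (illustration only,
# not the authors' code). Projecting the secret's 2D spectrum to unit
# magnitude makes binding by circular convolution exactly invertible.
import numpy as np

def unit_spectrum_secret(shape, rng):
    """Random secret whose 2D Fourier coefficients all have magnitude one."""
    spectrum = np.fft.fft2(rng.standard_normal(shape))
    return np.real(np.fft.ifft2(spectrum / np.abs(spectrum)))

def bind(x, s):
    """Obfuscate x by 2D circular convolution with the secret s."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(s)))

def unbind(b, s):
    """Recover x by convolving with the secret's conjugate spectrum."""
    return np.real(np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(s))))

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))        # stand-in for a 32x32 input image
s = unit_spectrum_secret((32, 32), rng)  # per-query pseudo secret
b = bind(x, s)                           # what the untrusted platform sees
print(np.allclose(unbind(b, s), x))      # True: only the secret holder recovers x
```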
Related papers
- Convex neural network synthesis for robustness in the 1-norm [0.0]
This paper proposes a method to generate an approximation of a neural network which is certifiably more robust.
An application to robustifying model predictive control is used to demonstrate the results.
arXiv Detail & Related papers (2024-05-29T12:17:09Z)
- Secure Deep Learning-based Distributed Intelligence on Pocket-sized Drones [75.80952211739185]
Palm-sized nano-drones are an appealing class of edge nodes, but their limited computational resources prevent running large deep-learning models onboard.
Adopting an edge-fog computational paradigm, we can offload part of the computation to the fog; however, this poses security concerns if the fog node, or the communication link, cannot be trusted.
We propose a novel distributed edge-fog execution scheme that validates fog computation by redundantly executing a random subnetwork aboard our nano-drone; a toy verification sketch appears after this list.
arXiv Detail & Related papers (2023-07-04T08:29:41Z)
- Graph Neural Networks for Decentralized Multi-Agent Perimeter Defense [111.9039128130633]
We develop an imitation learning framework that learns a mapping from defenders' local perceptions and their communication graph to their actions.
We run perimeter defense games in scenarios with different team sizes and configurations to demonstrate the performance of the learned network.
arXiv Detail & Related papers (2023-01-23T19:35:59Z)
- Secure Distributed Training at Scale [65.7538150168154]
Training in the presence of peers requires specialized distributed training algorithms with Byzantine tolerance.
We propose a novel protocol for secure (Byzantine-tolerant) decentralized training that emphasizes communication efficiency; a baseline aggregation sketch appears after this list.
arXiv Detail & Related papers (2021-06-21T17:00:42Z)
- An Analysis of Robustness of Non-Lipschitz Networks [35.64511156980701]
Small input perturbations can often produce large movements in the network's final-layer feature space.
In our model, the adversary may move data an arbitrary distance in feature space but only in random low-dimensional subspaces.
We provide theoretical guarantees for setting algorithm parameters to optimize over accuracy-abstention trade-offs using data-driven methods.
arXiv Detail & Related papers (2020-10-13T03:56:39Z)
- Robust Deep Learning Ensemble against Deception [11.962128272844158]
XEnsemble is a diversity ensemble verification methodology for enhancing the adversarial robustness of machine learning models.
We show that XEnsemble achieves a high defense success rate against adversarial examples and a high detection success rate against out-of-distribution data inputs.
arXiv Detail & Related papers (2020-09-14T17:20:01Z)
- MPC-enabled Privacy-Preserving Neural Network Training against Malicious Attack [44.50542274828587]
We propose an approach for constructing efficient $n$-party protocols for secure neural network training.
Our actively secure neural network training incurs affordable efficiency overheads of around 2X and 2.7X in LAN and WAN settings.
In addition, we propose a scheme that allows additive shares defined over an integer ring $\mathbb{Z}_N$ to be securely converted to additive shares over a finite field $\mathbb{Z}_Q$; a toy additive-sharing sketch appears after this list.
arXiv Detail & Related papers (2020-07-24T15:03:51Z)
- Industrial Scale Privacy Preserving Deep Neural Network [23.690146141150407]
We propose an industrial-scale privacy-preserving neural network learning paradigm, which is secure against semi-honest adversaries.
We conduct experiments on a real-world fraud detection dataset and a financial distress prediction dataset.
arXiv Detail & Related papers (2020-03-11T10:15:37Z)
- HYDRA: Pruning Adversarially Robust Neural Networks [58.061681100058316]
Deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size.
We propose to make pruning techniques aware of the robust training objective and let the training objective guide the search for which connections to prune.
We demonstrate that our approach, titled HYDRA, achieves compressed networks with state-of-the-art benign and robust accuracy simultaneously.
arXiv Detail & Related papers (2020-02-24T19:54:53Z)
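For the pocket-sized drones entry above, the sketch below illustrates the redundant-execution idea: the edge device re-runs one randomly chosen layer of the offloaded model and compares the fog's reported activations against its own. The toy model, tolerance, and tampering simulation are illustrative assumptions, not the paper's protocol.

```python
# Hedged sketch of validating offloaded ("fog") computation by redundantly
# re-executing one randomly chosen layer on the edge device. The toy
# 3-layer MLP, tolerance, and attack simulation are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
W = [rng.standard_normal((16, 16)) for _ in range(3)]  # toy MLP weights

def layer(h, w):
    return np.maximum(w @ h, 0.0)  # ReLU layer

def fog_forward(x, tamper=False):
    """Untrusted party runs the full network and reports every activation."""
    acts, h = [], x
    for w in W:
        h = layer(h, w)
        acts.append(h.copy())
    if tamper:
        acts[-1] += 1.0  # a dishonest fog node perturbs the final result
    return acts

def edge_validate(x, acts, rng, tol=1e-6):
    """Edge re-runs one random layer and compares with the fog's report."""
    k = rng.integers(len(W))
    h_in = x if k == 0 else acts[k - 1]
    return np.allclose(layer(h_in, W[k]), acts[k], atol=tol)

x = rng.standard_normal(16)
print(edge_validate(x, fog_forward(x), rng))               # True: honest fog
print(edge_validate(x, fog_forward(x, tamper=True), rng))  # catches tampering
                                                           # with prob. 1/3
```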
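For the secure distributed training entry, the following shows coordinate-wise median aggregation, a classic Byzantine-robust baseline. It is not the communication-efficient protocol that paper proposes, only an illustration of why honest gradients survive a minority of malicious peers.

```python
# Hedged sketch of coordinate-wise median aggregation, a standard
# Byzantine-robust baseline (not the paper's protocol): with a majority of
# honest peers, each coordinate's median stays near the honest values.
import numpy as np

def byzantine_robust_aggregate(gradients):
    """Aggregate peer gradients with a coordinate-wise median."""
    return np.median(np.stack(gradients), axis=0)

rng = np.random.default_rng(2)
honest = [rng.normal(1.0, 0.1, size=4) for _ in range(5)]  # 5 honest peers
malicious = [np.full(4, -1e6) for _ in range(2)]           # 2 Byzantine peers
agg = byzantine_robust_aggregate(honest + malicious)
print(agg)  # close to 1.0 in every coordinate despite the attackers
```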
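For the MPC-enabled training entry, here is a toy illustration of $n$-party additive secret sharing over the integer ring $\mathbb{Z}_N$: a value is split into random shares that sum to it modulo $N$, so any strict subset of parties learns nothing. The modulus and party count are illustrative assumptions, and the paper's secure $\mathbb{Z}_N \to \mathbb{Z}_Q$ share conversion is not reproduced here.

```python
# Toy sketch of n-party additive secret sharing over the ring Z_N: a secret
# is split into random shares summing to it mod N, so any strict subset of
# shares is uniformly random. The modulus and party count are illustrative;
# the paper's secure Z_N -> Z_Q share conversion is not reproduced here.
import secrets

N = 2**32  # illustrative ring modulus

def share(x, n_parties):
    """Split x into n additive shares over Z_N."""
    shares = [secrets.randbelow(N) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % N)
    return shares

def reconstruct(shares):
    """Only the sum of all shares reveals the secret."""
    return sum(shares) % N

s = share(1234567, n_parties=3)
print(s)               # three values, each individually uniformly random
print(reconstruct(s))  # 1234567
# Additive homomorphism: party-wise sums of shares reconstruct to x + y.
t = share(42, n_parties=3)
print(reconstruct([(a + b) % N for a, b in zip(s, t)]))  # 1234609
```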
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.