Polynomial Time Cryptanalytic Extraction of Neural Network Models
- URL: http://arxiv.org/abs/2310.08708v1
- Date: Thu, 12 Oct 2023 20:44:41 GMT
- Title: Polynomial Time Cryptanalytic Extraction of Neural Network Models
- Authors: Adi Shamir, Isaac Canales-Martinez, Anna Hambitzer, Jorge Chavez-Saab,
Francisco Rodriguez-Henriquez, and Nitin Satpute
- Abstract summary: The best current attack on ReLU-based deep neural networks was presented at Crypto 2020.
New techniques enable us to extract with arbitrarily high precision all the real-valued parameters of a ReLU-based neural network.
- Score: 3.3466632238361393
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Billions of dollars and countless GPU hours are currently spent on training
Deep Neural Networks (DNNs) for a variety of tasks. Thus, it is essential to
determine the difficulty of extracting all the parameters of such neural
networks when given access to their black-box implementations. Many versions of
this problem have been studied over the last 30 years, and the best current
attack on ReLU-based deep neural networks was presented at Crypto 2020 by
Carlini, Jagielski, and Mironov. It resembles a differential chosen plaintext
attack on a cryptosystem, which has a secret key embedded in its black-box
implementation and requires a polynomial number of queries but an exponential
amount of time (as a function of the number of neurons). In this paper, we
improve this attack by developing several new techniques that enable us to
extract with arbitrarily high precision all the real-valued parameters of a
ReLU-based DNN using a polynomial number of queries and a polynomial amount of
time. We demonstrate its practical efficiency by applying it to a full-sized
neural network for classifying the CIFAR10 dataset, which has 3072 inputs, 8
hidden layers with 256 neurons each, and over a million neuronal parameters. An
attack following the approach by Carlini et al. requires an exhaustive search
over 2^256 possibilities. Our attack replaces this with our new
techniques, which require only 30 minutes on a 256-core computer.
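The central query primitive in this family of extraction attacks is locating "critical points": inputs at which some hidden ReLU neuron switches between its active and inactive regime, so that the piecewise-linear output changes slope along a probing line. The following is a minimal Python sketch of such a search, assuming only black-box query access; the toy network, helper names, and tolerances are illustrative and are not taken from the paper's implementation.

```python
# Sketch: locating a ReLU "critical point" along a line through input space.
# Toy black-box model and step sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # toy hidden layer
w2, b2 = rng.normal(size=4), rng.normal()               # toy output layer

def f(x):
    """Black-box scalar model: one ReLU hidden layer, linear output."""
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def slope(x0, x1, t, eps=1e-6):
    """Finite-difference slope of f along the segment x0 -> x1 at parameter t."""
    p = lambda s: x0 + s * (x1 - x0)
    return (f(p(t + eps)) - f(p(t - eps))) / (2 * eps)

def find_critical_point(x0, x1, tol=1e-9):
    """Binary-search for a t in (0, 1) where the slope of f along the segment
    changes, i.e. where some neuron crosses its ReLU boundary. For simplicity
    this assumes at most one critical point on the segment; if the slope never
    changes, the search just converges to the right endpoint."""
    s0 = slope(x0, x1, 0.0)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if abs(slope(x0, x1, mid) - s0) < 1e-6:
            lo = mid        # still on the same linear piece as x0
        else:
            hi = mid        # slope changed: a critical point lies to the left
    return x0 + 0.5 * (lo + hi) * (x1 - x0)

x_star = find_critical_point(rng.normal(size=3), rng.normal(size=3))
print("approximate critical point:", x_star)
```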
Related papers
- Polynomial Time Cryptanalytic Extraction of Deep Neural Networks in the Hard-Label Setting [45.68094593114181]
Deep neural networks (DNNs) are valuable assets, yet their public accessibility raises security concerns.
This paper introduces new techniques that, for the first time, achieve cryptanalytic extraction of DNN parameters in the most challenging hard-label setting (a decision-boundary bisection sketch appears after this list).
arXiv Detail & Related papers (2024-10-08T07:27:55Z) - Hard-Label Cryptanalytic Extraction of Neural Network Models [10.568722566232127]
We propose the first attack that theoretically achieves functionally equivalent extraction under the hard-label setting.
The effectiveness of our attack is validated through practical experiments on a wide range of ReLU neural networks.
arXiv Detail & Related papers (2024-09-18T02:17:10Z) - Fully Spiking Actor Network with Intra-layer Connections for
Reinforcement Learning [51.386945803485084]
We focus on the task where the agent needs to learn multi-dimensional deterministic policies for control.
Most existing spike-based RL methods take the firing rate as the output of SNNs and convert it into a continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z) - Quantum Neuron Selection: Finding High Performing Subnetworks With
Quantum Algorithms [0.0]
Recently, it has been shown that large, randomly initialized neural networks contain subnetworks that perform as well as fully trained models.
This insight offers a promising avenue for training future neural networks by simply pruning weights from large, random models.
In this paper, we explore how quantum algorithms could be formulated and applied to this neuron selection problem.
arXiv Detail & Related papers (2023-02-12T19:19:48Z) - Neural Network Optimization for Reinforcement Learning Tasks Using
Sparse Computations [3.4328283704703866]
This article proposes a sparse computation-based method for optimizing neural networks for reinforcement learning tasks.
It significantly reduces the number of multiplications when running neural networks.
arXiv Detail & Related papers (2022-01-07T18:09:23Z) - Neural networks with linear threshold activations: structure and
algorithms [1.795561427808824]
We show that 2 hidden layers are necessary and sufficient to represent any function representable in the class.
We also give precise bounds on the sizes of the neural networks required to represent any function in the class.
We propose a new class of neural networks that we call shortcut linear threshold networks.
arXiv Detail & Related papers (2021-11-15T22:33:52Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Deep Polynomial Neural Networks [77.70761658507507]
$\Pi$-Nets are a new class of function approximators based on polynomial expansions.
$\Pi$-Nets produce state-of-the-art results in three challenging tasks, i.e., image generation, face verification, and 3D mesh representation learning.
arXiv Detail & Related papers (2020-06-20T16:23:32Z) - Cryptanalytic Extraction of Neural Network Models [56.738871473622865]
We introduce a differential attack that can efficiently steal the parameters of the remote model up to floating point precision.
Our attack relies on the fact that ReLU neural networks are piecewise linear functions (see the finite-difference sketch after this list).
We extract models that are 2^20 times more precise and require 100x fewer queries than prior work.
arXiv Detail & Related papers (2020-03-10T17:57:14Z) - Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
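The differential attack of Carlini et al. (entry above) exploits the fact that, away from neuron boundaries, a ReLU network is exactly affine, so a few queries per input coordinate recover the local gradient. Below is a minimal sketch of that piecewise-linearity fact, using a toy network and illustrative step sizes rather than the paper's code.

```python
# Sketch: local affine behaviour of a ReLU network, the property exploited by
# differential extraction attacks. Toy model; not the authors' implementation.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # toy hidden layer
w2, b2 = rng.normal(size=4), rng.normal()               # toy output layer

def f(x):
    """Black-box scalar ReLU network (stand-in for the remote model)."""
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_gradient(x, h=1e-5):
    """Central-difference gradient of f at x; exact up to floating-point error
    as long as no ReLU boundary is crossed within +-h of x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = rng.normal(size=3)
g = local_gradient(x)
c = f(x) - g @ x                         # affine offset of the current piece
x_near = x + 1e-3 * rng.normal(size=3)   # small step: stay on the same piece
print("f(x_near)    :", f(x_near))
print("affine model :", g @ x_near + c)  # agrees while on the same piece
```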
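For the hard-label papers above, the black box returns only a class label, so attacks start from a weaker primitive: bisecting between two differently labeled inputs to locate a point on the decision boundary. This is a minimal sketch under that assumption, with a toy classifier and illustrative tolerances rather than the papers' implementations.

```python
# Sketch: decision-boundary bisection with label-only (hard-label) access.
# Toy two-class ReLU classifier; parameters and tolerances are illustrative.
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def hard_label(x):
    """Black box: returns only the argmax class of the toy network."""
    return int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0.0) + b2))

def boundary_point(x0, x1, tol=1e-10):
    """Bisect the segment x0 -> x1 (endpoints must have different labels)
    until a point on the decision boundary is located to within tol."""
    assert hard_label(x0) != hard_label(x1), "need differently labeled endpoints"
    lab0 = hard_label(x0)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if hard_label(x0 + mid * (x1 - x0)) == lab0:
            lo = mid
        else:
            hi = mid
    return x0 + 0.5 * (lo + hi) * (x1 - x0)

# Find two differently labeled random points, then locate the boundary.
# (Rerun with a new seed if the toy model assigns one label almost everywhere.)
a, b = rng.normal(size=3), rng.normal(size=3)
while hard_label(a) == hard_label(b):
    b = rng.normal(size=3)
print("boundary point:", boundary_point(a, b))
```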
This list is automatically generated from the titles and abstracts of the papers on this site.