Polynomial Time Cryptanalytic Extraction of Deep Neural Networks in the Hard-Label Setting
- URL: http://arxiv.org/abs/2410.05750v1
- Date: Tue, 8 Oct 2024 07:27:55 GMT
- Title: Polynomial Time Cryptanalytic Extraction of Deep Neural Networks in the Hard-Label Setting
- Authors: Nicholas Carlini, Jorge Chávez-Saab, Anna Hambitzer, Francisco Rodríguez-Henríquez, Adi Shamir
- Abstract summary: Deep neural networks (DNNs) are valuable assets, yet their public accessibility raises security concerns.
This paper introduces new techniques that, for the first time, achieve cryptanalytic extraction of DNN parameters in the most challenging hard-label setting.
- Score: 45.68094593114181
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) are valuable assets, yet their public accessibility raises security concerns about parameter extraction by malicious actors. Recent work by Carlini et al. (Crypto'20) and Canales-Martínez et al. (Eurocrypt'24) has drawn parallels between this issue and block cipher key extraction via chosen plaintext attacks. Leveraging differential cryptanalysis, they demonstrated that all the weights and biases of black-box ReLU-based DNNs could be inferred using a polynomial number of queries and computational time. However, their attacks relied on the availability of the exact numeric values of the output logits, which allowed the calculation of their derivatives. To overcome this limitation, Chen et al. (Asiacrypt'24) tackled the more realistic hard-label scenario, where only the final classification label (e.g., "dog" or "car") is accessible to the attacker. They proposed an extraction method requiring a polynomial number of queries but an exponential execution time. In addition, their approach was applicable only to a restricted set of architectures, could deal only with binary classifiers, and was demonstrated only on tiny neural networks with up to four neurons split among up to two hidden layers. This paper introduces new techniques that, for the first time, achieve cryptanalytic extraction of DNN parameters in the most challenging hard-label setting, using both a polynomial number of queries and polynomial time. We validate our approach by extracting nearly one million parameters from a DNN trained on the CIFAR-10 dataset, comprising 832 neurons in four hidden layers. Our results reveal the surprising fact that all the weights of a ReLU-based DNN can be efficiently determined by analyzing only the geometric shape of its decision boundaries.
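To make the hard-label setting concrete, the sketch below shows the basic primitive such attacks are built on: with nothing but classification labels, a point on a decision boundary can be localized to high precision by binary search between two differently-labeled inputs, and repeating this along many directions traces out the boundary's geometric shape. The toy network, its dimensions, and the `classify` oracle are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the victim network; in a real attack only the
# hard-label `classify` oracle below would be observable.
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(2, 16)), rng.normal(size=2)

def classify(x: np.ndarray) -> int:
    """Hard-label oracle: returns only the argmax class, never logits."""
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
    return int(np.argmax(W2 @ h + b2))

def boundary_point(x0: np.ndarray, x1: np.ndarray, tol: float = 1e-9) -> np.ndarray:
    """Binary-search the segment [x0, x1] for a decision-boundary point.

    Requires classify(x0) != classify(x1); each halving step costs one
    query, so reaching tolerance tol takes about log2(1/tol) queries.
    """
    assert classify(x0) != classify(x1)
    lo, hi = x0, x1
    while np.linalg.norm(hi - lo) > tol:
        mid = (lo + hi) / 2.0
        if classify(mid) == classify(x0):
            lo = mid   # mid is still on x0's side of the boundary
        else:
            hi = mid
    return (lo + hi) / 2.0

# Sample two differently-labeled inputs, then localize the boundary.
x0, x1 = rng.normal(size=8), rng.normal(size=8)
while classify(x0) == classify(x1):
    x1 = rng.normal(size=8)
print("boundary point:", boundary_point(x0, x1))
```

Since each halving step costs a single query, a boundary point accurate to tolerance t needs only about log2(1/t) queries, which is why an attack built on this primitive can remain polynomial.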
Related papers
- Reduction from sparse LPN to LPN, Dual Attack 3.0 [1.9106435311144374]
A new algorithm called RLPN-decoding, which relies on a completely different approach, was introduced.
It significantly outperforms ISD and RLPN for code rates smaller than 0.42.
This algorithm can be viewed as the code-based cryptography cousin of recent dual attacks in lattice-based cryptography.
arXiv Detail & Related papers (2023-12-01T17:35:29Z)
- Mitigating Backdoors within Deep Neural Networks in Data-limited Configuration [1.1663475941322277]
A backdoored deep neural network shows normal behavior on clean data while behaving maliciously once a trigger is injected into a sample at test time.
In this paper, we formulate several characteristics of poisoned neurons and combine them into a backdoor suspiciousness score.
This score ranks network neurons according to their activation values, weights, and their relationships with other neurons in the same layer.
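As a rough illustration of ranking neurons by such characteristics (a toy heuristic of my own construction, not the paper's actual score), one could combine a neuron's clean-data activation statistics with its outgoing weight magnitude:

```python
import numpy as np

def suspiciousness_scores(activations: np.ndarray, w_out: np.ndarray) -> np.ndarray:
    """Toy per-neuron suspiciousness ranking for one hidden layer.

    activations: (n_samples, n_neurons) activations on clean data.
    w_out:       (n_neurons,) aggregate outgoing weight magnitude.
    Neurons that are mostly dormant on clean data yet carry large
    outgoing weights are ranked more suspicious; the intuition is
    that a backdoor neuron lies in wait for its trigger.
    """
    mean_act = activations.mean(axis=0)
    dormancy = 1.0 / (1.0 + mean_act)   # high when rarely active
    return dormancy * np.abs(w_out)

rng = np.random.default_rng(1)
acts = np.maximum(rng.normal(size=(1000, 32)), 0.0)
w = rng.normal(size=32)
ranking = np.argsort(-suspiciousness_scores(acts, w))
print("most suspicious neurons:", ranking[:5])
```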
arXiv Detail & Related papers (2023-11-13T15:54:27Z)
- Polynomial Time Cryptanalytic Extraction of Neural Network Models [3.3466632238361393]
The best current attack on ReLU-based deep neural networks was presented at Crypto 2020.
New techniques enable us to extract with arbitrarily high precision all the real-valued parameters of a ReLU-based neural network.
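A minimal sketch of the differential idea behind these logit-based attacks, under illustrative assumptions (a single-ReLU toy victim with hand-picked constants, not the paper's algorithm): the exact output is piecewise linear in the input, so the critical point where a ReLU flips can be localized by binary search on the local slope.

```python
import numpy as np

# Toy victim with a single ReLU; the exact real-valued output is
# observable, which is the assumption logit-based attacks require.
w = np.array([0.8, -1.2, 0.5, 2.0])
b = 0.3

def logit(x: np.ndarray) -> float:
    return 2.0 * max(float(w @ x) + b, 0.0) - 1.0

def find_kink(x, d, lo=0.0, hi=10.0, tol=1e-8, eps=1e-6):
    """Locate t where t -> logit(x + t*d) has a kink (a ReLU flips).

    The output is linear on either side of the critical point, so the
    local slope changes exactly once; binary search on that slope.
    """
    def slope(t):
        return (logit(x + (t + eps) * d) - logit(x + t * d)) / eps

    s_lo = slope(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if abs(slope(mid) - s_lo) < 1e-3:
            lo = mid   # still on the same linear piece as t = lo
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Start in the inactive region and walk along w toward the boundary.
x0 = -(1.0 + b) / float(w @ w) * w   # makes w @ x0 + b == -1
t_star = find_kink(x0, w)
print("critical point at t =", t_star, "expected", 1.0 / float(w @ w))
```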
arXiv Detail & Related papers (2023-10-12T20:44:41Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
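As a toy illustration of what is being counted (brute force over a discretized domain with a made-up network and property, not the paper's exact-counting approach):

```python
import itertools
import numpy as np

W = np.array([[1.0, -2.0], [0.5, 1.5]])
b = np.array([0.1, -0.2])

def net(x: np.ndarray) -> np.ndarray:
    """Tiny one-layer ReLU network standing in for the verified DNN."""
    return np.maximum(W @ x + b, 0.0)

def violates(y: np.ndarray) -> bool:
    """Safety property: output 0 must stay below 1.0."""
    return y[0] >= 1.0

# Exhaustively count violating inputs on a discretized grid; doing
# this exactly over a continuous domain is what makes #DNN-Verification hard.
grid = np.linspace(-1.0, 1.0, 201)
count = sum(violates(net(np.array(p))) for p in itertools.product(grid, grid))
total = len(grid) ** 2
print(f"unsafe inputs: {count}/{total} ({100 * count / total:.2f}%)")
```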
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
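For flavor, here is a minimal zonotope propagation through one affine layer; the ReLU step falls back to interval bounds, a coarser abstraction than the zonotope ReLU transformer verification tools actually use, and the weights are made up:

```python
import numpy as np

def affine_zonotope(c, G, W, b):
    """Exact image of the zonotope {c + G @ e : e in [-1,1]^m} under x -> Wx + b."""
    return W @ c + b, W @ G

def concretize(c, G):
    """Tight interval bounds of a zonotope: |G| row sums give the radius."""
    r = np.abs(G).sum(axis=1)
    return c - r, c + r

# Input box [-1,1]^2 expressed as a zonotope: center 0, identity generators.
c, G = np.zeros(2), np.eye(2)
W = np.array([[1.0, -1.0], [2.0, 0.5]])
b = np.array([0.1, 0.0])
c2, G2 = affine_zonotope(c, G, W, b)
lo, hi = concretize(c2, G2)
# ReLU handled here by clamping the interval bounds (coarser than a
# proper zonotope ReLU transformer, but sound).
lo_relu, hi_relu = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
print("pre-ReLU bounds:", lo, hi)
print("post-ReLU bounds:", lo_relu, hi_relu)
```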
arXiv Detail & Related papers (2022-10-14T19:31:39Z)
- Efficient Decompositional Rule Extraction for Deep Neural Networks [5.69361786082969]
ECLAIRE is a novel polynomial-time rule extraction algorithm capable of scaling to both large DNN architectures and large training datasets.
We show that ECLAIRE consistently extracts more accurate and comprehensible rule sets than the current state-of-the-art methods.
arXiv Detail & Related papers (2021-11-24T16:54:10Z)
- Efficiently Learning Any One Hidden Layer ReLU Network From Queries [27.428198343906352]
We give the first polynomial-time algorithm for learning arbitrary one-hidden-layer ReLU networks given black-box query access to the network.
Ours is the first algorithm with fully polynomial-time guarantees of efficiency, even for worst-case networks.
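As a toy version of query-based recovery (a single ReLU neuron rather than a full one-hidden-layer network, with an invented `query` oracle): wherever the neuron is active, the map is locally linear, so finite differences recover its weights directly.

```python
import numpy as np

rng = np.random.default_rng(3)
w_true, b_true = rng.normal(size=5), 0.7

def query(x: np.ndarray) -> float:
    """Black-box, real-valued query access to one ReLU neuron."""
    return max(float(w_true @ x) + b_true, 0.0)

def recover_neuron(dim, eps=1e-6, trials=100):
    """Estimate (w, b) from queries alone.

    At any x where the neuron is active the map is locally linear,
    so finite differences recover w, and then b follows.
    """
    for _ in range(trials):
        x = rng.normal(size=dim)
        if query(x) > 1.0:   # safely inside the active region
            e = np.eye(dim)
            w = np.array([(query(x + eps * e[i]) - query(x)) / eps
                          for i in range(dim)])
            b = query(x) - w @ x
            return w, b
    raise RuntimeError("no active point found")

w_hat, b_hat = recover_neuron(5)
print("max weight error:", np.max(np.abs(w_hat - w_true)))
```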
arXiv Detail & Related papers (2021-11-08T18:59:40Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
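A compact illustration of iterative mask discovery, using generic magnitude pruning rather than ESPN's exact procedure; the retraining between steps that a real pipeline would perform is elided:

```python
import numpy as np

def iterative_magnitude_mask(weights, target_sparsity=0.95, steps=5):
    """Iteratively grow a pruning mask toward the target sparsity.

    Each step removes the smallest-magnitude surviving weights; in a
    real pipeline the network would be retrained between steps.
    """
    mask = np.ones_like(weights, dtype=bool)
    for step in range(1, steps + 1):
        sparsity = target_sparsity * step / steps   # anneal sparsity
        k = int(weights.size * (1.0 - sparsity))    # weights to keep
        alive = np.abs(weights * mask).ravel()
        threshold = np.sort(alive)[::-1][k - 1] if k > 0 else np.inf
        mask &= np.abs(weights) >= threshold
        # ... retraining of the surviving weights would happen here ...
    return mask

rng = np.random.default_rng(4)
W = rng.normal(size=(64, 64))
mask = iterative_magnitude_mask(W)
print("final density:", mask.mean())
```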
arXiv Detail & Related papers (2020-06-28T23:09:27Z)