Can one hear the shape of a neural network?: Snooping the GPU via
Magnetic Side Channel
- URL: http://arxiv.org/abs/2109.07395v1
- Date: Wed, 15 Sep 2021 16:00:05 GMT
- Title: Can one hear the shape of a neural network?: Snooping the GPU via
Magnetic Side Channel
- Authors: Henrique Teles Maia, Chang Xiao, Dingzeyu Li, Eitan Grinspun, Changxi
Zheng
- Abstract summary: We explore the vulnerability of neural networks deployed as black boxes across accelerated hardware through electromagnetic side channels.
The attack acquires the magnetic signal for one query with unknown input values, but known input dimensions.
We demonstrate the potential accuracy of this side channel attack in recovering the details for a broad range of network architectures.
- Score: 42.75879156429477
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural network applications have become popular in both enterprise and
personal settings. Network solutions are tuned meticulously for each task, and
designs that can robustly resolve queries end up in high demand. As the
commercial value of accurate and performant machine learning models increases,
so too does the demand to protect neural architectures as confidential
investments. We explore the vulnerability of neural networks deployed as black
boxes across accelerated hardware through electromagnetic side channels. We
examine the magnetic flux emanating from a graphics processing unit's power
cable, as acquired by a cheap $3 induction sensor, and find that this signal
betrays the detailed topology and hyperparameters of a black-box neural network
model. The attack acquires the magnetic signal for one query with unknown input
values, but known input dimensions. The network reconstruction is possible due
to the modular layer sequence in which deep neural networks are evaluated. We
find that each layer component's evaluation produces an identifiable magnetic
signal signature, from which layer topology, width, function type, and sequence
order can be inferred using a suitably trained classifier and a joint
consistency optimization based on integer programming. We study the extent to
which network specifications can be recovered, and consider metrics for
comparing network similarity. We demonstrate the potential accuracy of this
side channel attack in recovering the details for a broad range of network
architectures, including random designs. We consider applications that may
exploit this novel side channel exposure, such as adversarial transfer attacks.
In response, we discuss countermeasures to protect against our method and other
similar snooping techniques.
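
The reconstruction described above has two stages: the acquired trace is split into per-layer segments whose magnetic signatures are classified individually, and the per-segment hypotheses are then reconciled by a joint consistency optimization. The paper does not include reference code; the following is a minimal sketch of the first stage, assuming the trace has already been split at layer boundaries and that reference signatures per layer type have been collected by profiling known layers on the same GPU. The feature choices, names, and numeric values are illustrative assumptions, not the authors' implementation, which uses a trained classifier rather than this nearest-centroid stand-in.

```python
# Hypothetical sketch (not the authors' released code): label per-layer
# segments of a magnetic side-channel trace by matching simple features
# against reference signatures. Segment boundaries, features, and the
# signature table below are illustrative assumptions.
import numpy as np

def segment_features(segment, sample_rate):
    """Duration, RMS amplitude, and dominant frequency of one layer's segment."""
    duration = len(segment) / sample_rate
    rms = np.sqrt(np.mean(segment ** 2))
    spectrum = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / sample_rate)
    return np.array([duration, rms, freqs[np.argmax(spectrum)]])

def classify_segments(segments, sample_rate, reference):
    """Nearest-centroid match of each segment against per-layer-type centroids.
    `reference` maps a layer-type name to a feature vector, assumed to have
    been gathered by profiling layers of known type on the same hardware."""
    names = list(reference)
    centroids = np.stack([reference[n] for n in names])
    scale = centroids.std(axis=0) + 1e-9  # put features on comparable scales
    labels = []
    for seg in segments:
        dists = np.linalg.norm(
            (centroids - segment_features(seg, sample_rate)) / scale, axis=1)
        labels.append(names[int(np.argmin(dists))])
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 48_000.0  # assumed sensor sampling rate
    # Toy centroids: [duration (s), RMS amplitude, dominant frequency (Hz)]
    reference = {
        "conv":  np.array([0.020, 0.8, 1200.0]),
        "dense": np.array([0.005, 0.5, 300.0]),
        "pool":  np.array([0.002, 0.2, 900.0]),
    }
    # Fake segments standing in for a trace already split at layer boundaries.
    segments = [rng.normal(0.0, 0.8, 960), rng.normal(0.0, 0.5, 240)]
    print(classify_segments(segments, fs, reference))
```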
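The second stage resolves remaining ambiguity jointly: one candidate hypothesis is chosen per segment so that adjacent layers' dimensions chain consistently, maximizing total classifier confidence. Below is a hedged sketch of one such binary integer program, using the open-source PuLP solver as a stand-in; the candidate lists, scores, and constraints are made-up examples, not the paper's exact formulation.

```python
# Hypothetical sketch of a "joint consistency" step: select one candidate
# per layer segment so that each layer's output dimension matches the next
# layer's input dimension, maximizing total confidence. Solved as a small
# binary integer program with PuLP (a stand-in solver choice).
import pulp

# Per segment: candidate (layer_type, in_dim, out_dim, score) tuples,
# e.g. the top hypotheses from the per-segment classifier (values made up).
candidates = [
    [("dense", 784, 256, 0.90), ("conv", 784, 512, 0.40)],
    [("dense", 256, 128, 0.70), ("dense", 512, 128, 0.60)],
    [("dense", 128, 10, 0.95)],
]

prob = pulp.LpProblem("layer_consistency", pulp.LpMaximize)
x = {(i, k): pulp.LpVariable(f"x_{i}_{k}", cat="Binary")
     for i, cands in enumerate(candidates) for k in range(len(cands))}

# Objective: total confidence of the selected hypotheses.
prob += pulp.lpSum(x[i, k] * cands[k][3]
                   for i, cands in enumerate(candidates)
                   for k in range(len(cands)))

# Exactly one hypothesis per segment.
for i, cands in enumerate(candidates):
    prob += pulp.lpSum(x[i, k] for k in range(len(cands))) == 1

# Consistency: a layer's output dim must equal the next layer's input dim.
for i in range(len(candidates) - 1):
    for k, (_, _, out_d, _) in enumerate(candidates[i]):
        for l, (_, in_d, _, _) in enumerate(candidates[i + 1]):
            if out_d != in_d:
                prob += x[i, k] + x[i + 1, l] <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [candidates[i][k] for (i, k), var in sorted(x.items())
          if var.value() > 0.5]
print(chosen)  # expected: the dimension-consistent chain 784 -> 256 -> 128 -> 10
```

Forbidding incompatible adjacent pairs with pairwise constraints keeps the program linear while still encoding the sequential layer structure that the abstract credits for making full-network recovery possible from a single query.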
Related papers
- Graph Metanetworks for Processing Diverse Neural Architectures [33.686728709734105]
Graph Metanetworks (GMNs) generalize to neural architectures where competing methods struggle.
We prove that GMNs are expressive and equivariant to parameter permutation symmetries that leave the input neural network functions unchanged.
arXiv Detail & Related papers (2023-12-07T18:21:52Z)
- Bandwidth-efficient distributed neural network architectures with application to body sensor networks [73.02174868813475]
This paper describes a conceptual methodology for designing distributed neural network architectures.
We show that the proposed framework enables up to a 20-fold reduction in bandwidth with minimal loss.
While the application focus of this paper is on wearable brain-computer interfaces, the proposed methodology can be applied in other sensor network-like applications as well.
arXiv Detail & Related papers (2022-10-14T12:35:32Z)
- Signal Detection in MIMO Systems with Hardware Imperfections: Message Passing on Neural Networks [101.59367762974371]
In this paper, we investigate signal detection in multiple-input-multiple-output (MIMO) communication systems with hardware impairments.
It is difficult to train a deep neural network (DNN) with limited pilot signals, hindering its practical applications.
We design an efficient message passing based Bayesian signal detector, leveraging the unitary approximate message passing (UAMP) algorithm.
arXiv Detail & Related papers (2022-10-08T04:32:58Z)
- Dynamic Network Reconfiguration for Entropy Maximization using Deep Reinforcement Learning [3.012947865628207]
A key problem in network theory is how to reconfigure a graph in order to optimize a quantifiable objective.
In this paper, we cast the problem of network rewiring for optimizing a specified structural property as a Markov Decision Process (MDP).
We then propose a general approach based on the Deep Q-Network (DQN) algorithm and graph neural networks (GNNs) that can efficiently learn strategies for rewiring networks.
arXiv Detail & Related papers (2022-05-26T18:44:22Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- An error-propagation spiking neural network compatible with neuromorphic processors [2.432141667343098]
We present a spike-based learning method that approximates back-propagation using local weight update mechanisms.
We introduce a network architecture that enables synaptic weight update mechanisms to back-propagate error signals.
This work represents a first step towards the design of ultra-low power mixed-signal neuromorphic processing systems.
arXiv Detail & Related papers (2021-04-12T07:21:08Z)
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
- Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection in Neural Networks [3.125321230840342]
Adversarial examples are inputs that have been carefully perturbed to fool classifier networks, while appearing unchanged to humans.
We propose a structured methodology of augmenting a deep neural network (DNN) with a detector subnetwork.
We show that our method improves state-of-the-art detector robustness against adversarial examples.
arXiv Detail & Related papers (2021-01-05T14:31:53Z)
- Neural-network-based parameter estimation for quantum detection [0.0]
In the context of quantum detection schemes, neural networks find a natural playground.
We demonstrate that adequately trained neural networks make it possible to characterize a target with minimal knowledge of the underlying physical model.
We exemplify the method with a development for $^{171}$Yb$^{+}$ atomic sensors.
arXiv Detail & Related papers (2020-12-14T16:26:05Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.