BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks
- URL: http://arxiv.org/abs/2402.00906v2
- Date: Tue, 7 May 2024 05:53:46 GMT
- Title: BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks
- Authors: Hamed Poursiami, Ihsen Alouani, Maryam Parsa
- Abstract summary: Conventional artificial neural networks (ANNs) have been found vulnerable to several attacks that can leak sensitive data.
Our study is motivated by the intuition that the non-differentiable aspect of spiking neural networks (SNNs) might result in inherent privacy-preserving properties.
We develop novel inversion attack strategies that are comprehensively designed to target SNNs.
- Score: 3.4673556247932225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the mainstream integration of machine learning into security-sensitive domains such as healthcare and finance, concerns about data privacy have intensified. Conventional artificial neural networks (ANNs) have been found vulnerable to several attacks that can leak sensitive data. Particularly, model inversion (MI) attacks enable the reconstruction of data samples that have been used to train the model. Neuromorphic architectures have emerged as a paradigm shift in neural computing, enabling asynchronous and energy-efficient computation. However, little to no existing work has investigated the privacy of neuromorphic architectures against model inversion. Our study is motivated by the intuition that the non-differentiable aspect of spiking neural networks (SNNs) might result in inherent privacy-preserving properties, especially against gradient-based attacks. To investigate this hypothesis, we propose a thorough exploration of SNNs' privacy-preserving capabilities. Specifically, we develop novel inversion attack strategies that are comprehensively designed to target SNNs, offering a comparative analysis with their conventional ANN counterparts. Our experiments, conducted on diverse event-based and static datasets, demonstrate the effectiveness of the proposed attack strategies and therefore question the assumption of inherent privacy preservation in neuromorphic architectures.
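To make the threat model concrete, below is a minimal, hypothetical sketch of a generic gradient-based model inversion attack of the kind the abstract refers to: starting from noise, an input is optimized until a trained classifier assigns it a chosen target class, yielding a class-representative reconstruction. This is not the paper's proposed attack strategy; the helper `invert_class`, the total-variation prior, and the stand-in classifier are illustrative assumptions. The closing comment notes why a spiking model with non-differentiable neurons would need a surrogate gradient for the same procedure.

```python
# Hypothetical sketch of a gradient-based model inversion (MI) attack.
# Model, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

def invert_class(model: nn.Module, target_class: int,
                 input_shape=(1, 1, 28, 28),
                 steps: int = 500, lr: float = 0.1, tv_weight: float = 1e-3):
    """Reconstruct an input that the model confidently labels as `target_class`."""
    model.eval()
    for p in model.parameters():          # only the input is optimized
        p.requires_grad_(False)
    x = torch.randn(input_shape, requires_grad=True)   # start from noise
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Maximize the target-class score (minimize its negative log-likelihood).
        class_loss = nn.functional.cross_entropy(
            logits, torch.tensor([target_class]))
        # Total-variation prior keeps the reconstruction smooth (a common MI regularizer).
        tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
             (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        loss = class_loss + tv_weight * tv
        # backward() requires an end-to-end differentiable model; for an SNN,
        # the non-differentiable spike function would have to be replaced by a
        # surrogate gradient for this step to yield usable gradients.
        loss.backward()
        optimizer.step()
        x.data.clamp_(0.0, 1.0)           # keep the reconstruction in image range
    return x.detach()

# Example usage with a stand-in differentiable classifier (assumption):
if __name__ == "__main__":
    classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                               nn.ReLU(), nn.Linear(128, 10))
    reconstruction = invert_class(classifier, target_class=3)
    print(reconstruction.shape)  # torch.Size([1, 1, 28, 28])
```

The non-differentiability intuition examined in the paper corresponds to the `loss.backward()` step: if gradients cannot flow through the network, this class of attack appears to break down unless the attacker substitutes a surrogate gradient or a gradient-free strategy.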
Related papers
- Watermarking Neuromorphic Brains: Intellectual Property Protection in Spiking Neural Networks [3.4673556247932225]
Spiking neural networks (SNNs) are gaining traction for deploying neuromorphic computing solutions.
Without adequate safeguards, proprietary SNN architectures are at risk of theft, replication, or misuse.
We pioneer an investigation into adapting two prominent watermarking approaches, namely, fingerprint-based and backdoor-based mechanisms.
arXiv Detail & Related papers (2024-05-07T06:42:30Z)
- A Neuromorphic Approach to Obstacle Avoidance in Robot Manipulation [16.696524554516294]
We develop a neuromorphic approach to obstacle avoidance on a camera-equipped manipulator.
Our approach adapts high-level trajectory plans with reactive maneuvers by processing emulated event data in a convolutional SNN.
Our results motivate incorporating SNN learning, utilizing neuromorphic processors, and further exploring the potential of neuromorphic methods.
arXiv Detail & Related papers (2024-04-08T20:42:10Z)
- Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We have studied GCNs with covariance matrices as graphs, in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture from GCNs, and we show that they exhibit transferability of performance across datasets whose covariance matrices converge to a limit object (see the illustrative sketch after this list).
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
- Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data [15.084703823643311]
Spiking neural networks (SNNs) offer enhanced energy efficiency and biologically plausible data processing capabilities.
This paper delves into backdoor attacks in SNNs using neuromorphic datasets and diverse triggers.
We present various attack strategies, achieving an attack success rate of up to 100% while maintaining a negligible impact on clean accuracy.
arXiv Detail & Related papers (2023-02-13T11:34:17Z)
- NASCTY: Neuroevolution to Attack Side-channel Leakages Yielding Convolutional Neural Networks [1.1602089225841632]
Side-channel analysis (SCA) can obtain information related to the secret key by exploiting leakages produced by the device.
Researchers recently found that neural networks (NNs) can execute a powerful profiling SCA, even on targets protected with countermeasures.
This paper explores the effectiveness of Neuroevolution to Attack Side-channel Traces Yielding Convolutional Neural Networks (NASCTY-CNNs).
arXiv Detail & Related papers (2023-01-25T19:31:04Z)
- How Does a Deep Learning Model Architecture Impact Its Privacy? A Comprehensive Study of Privacy Attacks on CNNs and Transformers [18.27174440444256]
Privacy concerns arise due to the potential leakage of sensitive information from the training data.
Recent research has revealed that deep learning models are vulnerable to various privacy attacks.
arXiv Detail & Related papers (2022-10-20T06:44:37Z)
- On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
However, there has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves into the intrinsic structures of SNNs, elucidating their influence on the networks' expressivity.
arXiv Detail & Related papers (2022-06-21T09:42:30Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective to design of future DeepSNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
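As referenced in the coVariance neural networks entry above, the following is a minimal, illustrative sketch of the core idea behind VNNs: a graph filter in which the sample covariance matrix plays the role of the graph shift operator of a GCN. The function name, data shapes, and filter taps are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (assumption) of a polynomial "coVariance filter"
# H(C) = sum_k h[k] * C^k applied to each data sample, where C is the
# sample covariance matrix acting as the graph shift operator.
import numpy as np

def covariance_filter(X: np.ndarray, h: np.ndarray) -> np.ndarray:
    """Apply a coVariance filter with taps h to every row of X.

    X: data matrix of shape (n_samples, n_features)
    h: filter taps of shape (K,)
    """
    C = np.cov(X, rowvar=False)   # sample covariance, (n_features, n_features)
    out = np.zeros_like(X)
    Z = X.copy()                  # corresponds to C^0 applied to each sample
    for k, tap in enumerate(h):
        if k > 0:
            Z = Z @ C             # successive applications of C (the "shift")
        out += tap * Z
    return out

# Example: a 2-tap filter on synthetic data; a VNN would stack such filters
# with pointwise nonlinearities, analogous to a GCN layer.
X = np.random.randn(100, 16)
y = covariance_filter(X, h=np.array([1.0, 0.5]))
print(y.shape)  # (100, 16)
```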
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.