On the Privacy-Preserving Properties of Spiking Neural Networks with Unique Surrogate Gradients and Quantization Levels
- URL: http://arxiv.org/abs/2502.18623v1
- Date: Tue, 25 Feb 2025 20:14:14 GMT
- Title: On the Privacy-Preserving Properties of Spiking Neural Networks with Unique Surrogate Gradients and Quantization Levels
- Authors: Ayana Moshruba, Shay Snyder, Hamed Poursiami, Maryam Parsa
- Abstract summary: Membership inference attacks (MIAs) exploit model responses to infer whether specific data points were used during training. Prior research suggests that spiking neural networks (SNNs) exhibit greater resilience to MIAs than artificial neural networks (ANNs). This resilience stems from their non-differentiable activations and inherent stochasticity, which obscure the correlation between model responses and individual training samples.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As machine learning models increasingly process sensitive data, understanding their vulnerability to privacy attacks is vital. Membership inference attacks (MIAs) exploit model responses to infer whether specific data points were used during training, posing a significant privacy risk. Prior research suggests that spiking neural networks (SNNs), which rely on event-driven computation and discrete spike-based encoding, exhibit greater resilience to MIAs than artificial neural networks (ANNs). This resilience stems from their non-differentiable activations and inherent stochasticity, which obscure the correlation between model responses and individual training samples. To enhance privacy in SNNs, we explore two techniques: quantization and surrogate gradients. Quantization, which reduces precision to limit information leakage, has improved privacy in ANNs. Given SNNs' sparse and irregular activations, quantization may further disrupt the activation patterns exploited by MIAs. We assess the vulnerability of SNNs and ANNs under weight and activation quantization across multiple datasets, using the attack model's receiver operating characteristic (ROC) curve area under the curve (AUC) metric, where lower values indicate stronger privacy, and evaluate the privacy-accuracy trade-off. Our findings show that quantization enhances privacy in both architectures with minimal performance loss, though full-precision SNNs remain more resilient than quantized ANNs. Additionally, we examine the impact of surrogate gradients on privacy in SNNs. Among five evaluated gradients, spike rate escape provides the best privacy-accuracy trade-off, while arctangent increases vulnerability to MIAs. These results reinforce SNNs' inherent privacy advantages and demonstrate that quantization and surrogate gradient selection significantly influence privacy-accuracy trade-offs in SNNs.
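The evaluation protocol described in the abstract, an attack model scored by ROC AUC where lower values indicate stronger privacy, can be illustrated with a minimal sketch. The confidence-threshold attack signal, the Dirichlet-sampled placeholder outputs, and the `max_confidence` helper below are illustrative assumptions, not the paper's actual attack model or data.

```python
# Minimal sketch (not the paper's code): a confidence-threshold membership
# inference attack evaluated with ROC AUC. Lower attack AUC implies the
# target model leaks less membership information.
import numpy as np
from sklearn.metrics import roc_auc_score

def max_confidence(model_outputs: np.ndarray) -> np.ndarray:
    """Attack signal: the target model's top softmax probability per sample."""
    return model_outputs.max(axis=1)

# Hypothetical inputs: softmax outputs of the target model on samples that
# were in the training set ("members") and samples that were not.
member_probs = np.random.dirichlet(np.ones(10) * 0.3, size=500)     # placeholder
nonmember_probs = np.random.dirichlet(np.ones(10) * 1.0, size=500)  # placeholder

scores = np.concatenate([max_confidence(member_probs),
                         max_confidence(nonmember_probs)])
labels = np.concatenate([np.ones(len(member_probs)),       # 1 = member
                         np.zeros(len(nonmember_probs))])  # 0 = non-member

attack_auc = roc_auc_score(labels, scores)
print(f"attack ROC AUC: {attack_auc:.3f}  (0.5 = no leakage, 1.0 = full leakage)")
```

An AUC near 0.5 means the attacker cannot separate members from non-members, which is the direction in which the abstract reports quantization moves both SNNs and ANNs.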
Related papers
- On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis [1.8029689470712593]
Spiking Neural Networks (SNNs) are increasingly explored for their energy efficiency and robustness in real-world applications.
In this work, we investigate the susceptibility of SNNs to Membership Inference Attacks (MIAs).
An MIA is a major privacy threat in which an adversary attempts to determine whether a given sample was part of the training dataset.
arXiv Detail & Related papers (2025-02-18T15:19:20Z)
- Are Neuromorphic Architectures Inherently Privacy-preserving? An Exploratory Study [3.4673556247932225]
Spiking Neural Networks (SNNs) are emerging as promising alternatives to Artificial Neural Networks (ANNs).
This paper examines whether SNNs inherently offer better privacy.
We analyze the impact of learning algorithms (surrogate gradient and evolutionary; see the surrogate-gradient sketch after this list), frameworks (snnTorch, TENNLab, LAVA), and parameters on SNN privacy.
arXiv Detail & Related papers (2024-11-10T22:18:53Z)
- Membership Privacy Evaluation in Deep Spiking Neural Networks [32.42695393291052]
Unlike conventional ANNs, whose neurons apply non-linear functions to produce floating-point outputs, Spiking Neural Networks (SNNs) transmit information as discrete spikes.
In this paper, we evaluate the membership privacy of SNNs by considering eight MIAs.
We show that SNNs are more vulnerable (up to 10% higher balanced attack accuracy) than ANNs when both are trained with neuromorphic datasets.
arXiv Detail & Related papers (2024-09-28T17:13:04Z)
- BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks [3.4673556247932225]
Conventional artificial neural networks (ANNs) have been found vulnerable to several attacks that can leak sensitive data.
Our study is motivated by the intuition that the non-differentiable aspect of spiking neural networks (SNNs) might result in inherent privacy-preserving properties.
We develop novel inversion attack strategies that are comprehensively designed to target SNNs.
arXiv Detail & Related papers (2024-02-01T03:16:40Z)
- Low Latency of object detection for spikng neural network [3.404826786562694]
Spiking Neural Networks are well-suited for edge AI applications due to their binary spike nature.
In this paper, we focus on generating highly accurate and low-latency SNNs specifically for object detection.
arXiv Detail & Related papers (2023-09-27T10:26:19Z)
- Threshold KNN-Shapley: A Linear-Time and Privacy-Friendly Approach to Data Valuation [57.36638157108914]
Data valuation aims to quantify the usefulness of individual data sources in training machine learning (ML) models.
However, data valuation faces significant yet frequently overlooked privacy challenges despite its importance.
This paper studies these challenges with a focus on KNN-Shapley, one of the most practical data valuation methods nowadays.
arXiv Detail & Related papers (2023-08-30T02:12:00Z)
- Unraveling Privacy Risks of Individual Fairness in Graph Neural Networks [66.0143583366533]
Graph neural networks (GNNs) have gained significant attention due to their expansive real-world applications.
To build trustworthy GNNs, two aspects - fairness and privacy - have emerged as critical considerations.
Previous studies have separately examined the fairness and privacy aspects of GNNs, revealing their trade-off with GNN performance.
Yet, the interplay between these two aspects remains unexplored.
arXiv Detail & Related papers (2023-01-30T14:52:23Z)
- On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
There has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves into the intrinsic structures of SNNs, elucidating their influence on expressivity.
arXiv Detail & Related papers (2022-06-21T09:42:30Z)
- DPSNN: A Differentially Private Spiking Neural Network with Temporal Enhanced Pooling [6.63071861272879]
Spiking neural networks (SNNs), a new generation of artificial neural networks, play a crucial role in many fields.
This paper combines differential privacy (DP) with SNNs and proposes a differentially private spiking neural network (DPSNN).
The SNN transmits information through discrete spike sequences, which, combined with the gradient noise introduced by DP, lets the network maintain strong privacy protection.
arXiv Detail & Related papers (2022-05-24T05:27:53Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
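As background for the surrogate gradients compared in the main abstract above (and the surrogate-gradient learning referenced in the neuromorphic-architectures entry), here is a minimal PyTorch sketch of an arctangent-style surrogate spike function. The backward formula and the sharpness constant `alpha` are illustrative assumptions and do not reproduce the authors' implementation or the spike rate escape variant.

```python
# Minimal sketch (not the authors' implementation): a spiking neuron's
# Heaviside firing function with an arctangent-style surrogate gradient,
# one of the surrogate choices compared in the main abstract above.
import math
import torch

class ATanSpike(torch.autograd.Function):
    """Forward: hard threshold (spike if membrane potential >= 0).
    Backward: smooth arctangent-derivative surrogate; `alpha` controls
    its width and is an illustrative choice, not a value from the paper."""

    alpha = 2.0  # assumed surrogate sharpness

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # derivative of (1/pi) * arctan(pi * alpha * u / 2) with respect to u
        surrogate = (ATanSpike.alpha / 2) / (1 + (math.pi / 2 * ATanSpike.alpha * u) ** 2)
        return grad_output * surrogate

# Usage: replace the non-differentiable threshold inside an SNN layer.
u = torch.randn(8, requires_grad=True)   # toy membrane potentials
spikes = ATanSpike.apply(u)
spikes.sum().backward()                   # gradients flow through the surrogate
print(spikes, u.grad)
```

The forward pass keeps the non-differentiable Heaviside spike, while the backward pass substitutes a smooth approximation; different surrogate shapes change how much membrane-potential information reaches the gradients, which is the lever the main abstract studies for its privacy-accuracy trade-off.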