Watermarking Neuromorphic Brains: Intellectual Property Protection in Spiking Neural Networks
- URL: http://arxiv.org/abs/2405.04049v1
- Date: Tue, 7 May 2024 06:42:30 GMT
- Title: Watermarking Neuromorphic Brains: Intellectual Property Protection in Spiking Neural Networks
- Authors: Hamed Poursiami, Ihsen Alouani, Maryam Parsa
- Abstract summary: As spiking neural networks (SNNs) gain traction in deploying neuromorphic computing solutions, protecting their intellectual property (IP) has become crucial.
Without adequate safeguards, proprietary SNN architectures are at risk of theft, replication, or misuse.
We pioneer an investigation into adapting two prominent watermarking approaches, namely fingerprint-based and backdoor-based mechanisms, to secure proprietary SNN architectures.
- Score: 3.4673556247932225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As spiking neural networks (SNNs) gain traction in deploying neuromorphic computing solutions, protecting their intellectual property (IP) has become crucial. Without adequate safeguards, proprietary SNN architectures are at risk of theft, replication, or misuse, which could lead to significant financial losses for the owners. While IP protection techniques have been extensively explored for artificial neural networks (ANNs), their applicability and effectiveness for the unique characteristics of SNNs remain largely unexplored. In this work, we pioneer an investigation into adapting two prominent watermarking approaches, namely, fingerprint-based and backdoor-based mechanisms to secure proprietary SNN architectures. We conduct thorough experiments to evaluate the impact on fidelity, resilience against overwrite threats, and resistance to compression attacks when applying these watermarking techniques to SNNs, drawing comparisons with their ANN counterparts. This study lays the groundwork for developing neuromorphic-aware IP protection strategies tailored to the distinctive dynamics of SNNs.
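To make the backdoor-based mechanism concrete, below is a minimal sketch of trigger-set watermarking for a surrogate-gradient SNN in plain PyTorch. The network size, LIF dynamics, trigger construction, and verification threshold are illustrative assumptions, not the paper's implementation; ownership is claimed when a suspect model reproduces the owner's secret trigger labels above the threshold.

```python
# Minimal sketch of backdoor-based watermarking for a surrogate-gradient SNN
# (plain PyTorch). Network size, LIF dynamics, trigger construction, and the
# verification threshold are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a rectangular (box-car) surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()

class LIFLayer(nn.Module):
    """Leaky integrate-and-fire layer unrolled over T time steps."""
    def __init__(self, in_f, out_f, tau=2.0):
        super().__init__()
        self.fc, self.tau = nn.Linear(in_f, out_f), tau

    def forward(self, x):                            # x: (T, B, in_f) spikes
        v, out = torch.zeros_like(self.fc(x[0])), []
        for t in range(x.shape[0]):
            v = v + (self.fc(x[t]) - v) / self.tau   # leaky integration
            s = SurrogateSpike.apply(v - 1.0)        # fire at threshold 1.0
            v = v * (1.0 - s)                        # hard reset after a spike
            out.append(s)
        return torch.stack(out)

def make_trigger_set(n, dim=784, n_classes=10, seed=0):
    """Owner-secret random key inputs with arbitrary target labels."""
    g = torch.Generator().manual_seed(seed)
    return (torch.rand(n, dim, generator=g),
            torch.randint(0, n_classes, (n,), generator=g))

def rate_encode(x, T=20):
    """Bernoulli rate coding: intensity in [0, 1] -> spike probability."""
    return (torch.rand(T, *x.shape) < x).float()

net = nn.Sequential(LIFLayer(784, 256), LIFLayer(256, 10))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
trig_x, trig_y = make_trigger_set(50)

def train_step(x, y):
    """Mix trigger samples into each batch so the watermark is co-trained."""
    xb, yb = torch.cat([x, trig_x[:8]]), torch.cat([y, trig_y[:8]])
    logits = net(rate_encode(xb)).mean(0)            # rate-decoded outputs
    loss = F.cross_entropy(logits, yb)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def verify_watermark(model, thr=0.9):
    """Ownership check: the model must reproduce the secret trigger labels."""
    with torch.no_grad():
        pred = model(rate_encode(trig_x)).mean(0).argmax(1)
    return (pred == trig_y).float().mean().item() >= thr
```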
Related papers
- Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks [3.9444202574850755]
Spiking Neural Networks (SNNs) are known for their low energy consumption and high robustness.
This paper explores the robustness performance of SNNs trained by supervised learning rules under backdoor attacks.
arXiv Detail & Related papers (2024-09-24T02:15:19Z)
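A hypothetical sketch of the poisoning step such a framework relies on: a fixed spike-burst trigger is pasted into a small fraction of training samples whose labels are flipped to the attacker's target class. Tensor shapes, the trigger pattern, and the poison rate are all assumptions.

```python
# Illustrative data-poisoning backdoor on spike-train inputs (not the
# paper's exact framework); trigger location and rate are assumptions.
import torch

def poison_batch(spikes, labels, target=0, rate=0.1, seed=0):
    """spikes: (T, B, N) binary trains; poisons about `rate` of the batch."""
    g = torch.Generator().manual_seed(seed)
    T, B, N = spikes.shape
    idx = torch.randperm(B, generator=g)[:max(1, int(rate * B))]
    poisoned, new_labels = spikes.clone(), labels.clone()
    poisoned[:4, idx, :16] = 1.0   # trigger: early burst on 16 input channels
    new_labels[idx] = target       # flip poisoned labels to the target class
    return poisoned, new_labels
```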
- Enhancing Adversarial Robustness in SNNs with Sparse Gradients [46.15229142258264]
Spiking Neural Networks (SNNs) have attracted great attention for their energy-efficient operations and biologically inspired structures.
Existing techniques, whether adapted from ANNs or specifically designed for SNNs, exhibit limitations in training SNNs or defending against strong attacks.
We propose a novel approach to enhance the robustness of SNNs through gradient sparsity regularization.
arXiv Detail & Related papers (2024-05-30T05:39:27Z)
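The gradient-sparsity idea above can be sketched as an L1 penalty on the input gradient added to the task loss. A minimal sketch, assuming a generic differentiable model (for an SNN, e.g. a rate-decoded surrogate-gradient network) and an assumed weighting `lambda_`:

```python
# Sketch of gradient sparsity regularization: penalize the L1 norm of the
# loss gradient w.r.t. the input; `lambda_` is an assumed hyperparameter.
import torch
import torch.nn.functional as F

def sparse_grad_loss(model, x, y, lambda_=1e-3):
    x = x.clone().requires_grad_(True)
    task_loss = F.cross_entropy(model(x), y)
    # create_graph=True makes the penalty differentiable w.r.t. the weights
    g, = torch.autograd.grad(task_loss, x, create_graph=True)
    return task_loss + lambda_ * g.abs().mean()
```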
- BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks [3.4673556247932225]
Conventional artificial neural networks (ANNs) have been found vulnerable to several attacks that can leak sensitive data.
Our study is motivated by the intuition that the non-differentiable aspect of spiking neural networks (SNNs) might result in inherent privacy-preserving properties.
We develop novel inversion attack strategies that are comprehensively designed to target SNNs.
arXiv Detail & Related papers (2024-02-01T03:16:40Z)
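For context, model inversion in its generic gradient-based form (not the BrainLeaks strategies themselves) optimizes an input from noise to maximize a target-class score; the input shape, step count, and L2 prior below are assumptions.

```python
# Generic gradient-ascent model inversion: reconstruct a class-representative
# input from a trained model; hyperparameters are illustrative.
import torch

def invert_class(model, target, shape=(1, 1, 28, 28), steps=500, lr=0.1):
    x = torch.rand(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        loss = -model(x)[0, target] + 1e-3 * x.pow(2).sum()  # score + L2 prior
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            x.clamp_(0, 1)              # keep the input in a valid pixel range
    return x.detach()
```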
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
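A minimal usage sketch in the spirit of the SpikingJelly entry above; the module names follow the library's activation_based package, but exact signatures should be checked against the official documentation.

```python
# Building and running a small SNN with SpikingJelly (activation_based API);
# treat exact signatures as assumptions and defer to the library docs.
import torch
import torch.nn as nn
from spikingjelly.activation_based import neuron, surrogate, functional

net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    neuron.LIFNode(tau=2.0, surrogate_function=surrogate.ATan()),
    nn.Linear(256, 10),
    neuron.LIFNode(tau=2.0, surrogate_function=surrogate.ATan()),
)

def forward_T(x, T=20):
    """Repeat a static input for T steps and decode by mean firing rate."""
    functional.reset_net(net)    # clear membrane state between batches
    return torch.stack([net(x) for _ in range(T)]).mean(0)

logits = forward_T(torch.rand(8, 1, 28, 28))   # (8, 10) rate-decoded outputs
```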
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
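Two standard coding schemes that a heterogeneous design might mix across layers, sketched for illustration (the paper's actual hybrid scheme may differ):

```python
# Rate coding vs. latency coding for intensities in [0, 1]; shapes assumed.
import torch

def rate_code(x, T=20):
    """Intensity -> Bernoulli spike probability at every time step."""
    return (torch.rand(T, *x.shape) < x).float()

def latency_code(x, T=20):
    """Intensity -> a single early spike: brighter inputs fire sooner."""
    t_fire = ((1 - x) * (T - 1)).long()       # (B, N) firing times
    spikes = torch.zeros(T, *x.shape)
    spikes.scatter_(0, t_fire.unsqueeze(0), 1.0)
    return spikes
```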
- Deep Intellectual Property Protection: A Survey [70.98782484559408]
Deep Neural Networks (DNNs) have made revolutionary progress in recent years, and are widely used in various fields.
The goal of this paper is to provide a comprehensive survey of two mainstream DNN IP protection methods: deep watermarking and deep fingerprinting.
arXiv Detail & Related papers (2023-04-28T03:34:43Z)
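Of the two surveyed families, watermarking embeds a mark into the model itself, while fingerprinting verifies ownership without modifying training. A toy fingerprint-style check, with the probe set and matching threshold as assumed design choices:

```python
# Fingerprint-style ownership check: how often does a suspect model agree
# with the owner's model on secret probe inputs? Threshold is an assumption.
import torch

@torch.no_grad()
def fingerprint_match(owner, suspect, probes, thr=0.95):
    agree = owner(probes).argmax(1) == suspect(probes).argmax(1)
    return agree.float().mean().item() >= thr
```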
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- DeepPeep: Exploiting Design Ramifications to Decipher the Architecture of Compact DNNs [2.3651168422805027]
"DeepPeep" is a two-stage attack methodology to reverse-engineer the architecture of building blocks in compact DNNs.
"Secure MobileNet-V1" provides a significant reduction in inference latency and improvement in predictive performance.
arXiv Detail & Related papers (2020-07-30T06:01:41Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
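For flavor, a bare-bones rate-based ANN-to-SNN weight transfer with threshold balancing; the paper's progressive tandem learning is considerably more involved, so this is only a contextual sketch with assumed names.

```python
# Naive ANN-to-SNN conversion for one linear layer: copy the weights and set
# the IF firing threshold from a high percentile of calibration activations.
import torch
import torch.nn as nn

def convert_linear(ann_fc, calib_x, q=0.999):
    acts = torch.relu(ann_fc(calib_x)).flatten()
    v_th = torch.quantile(acts, q).item()        # threshold balancing
    snn_fc = nn.Linear(ann_fc.in_features, ann_fc.out_features)
    snn_fc.load_state_dict(ann_fc.state_dict())  # reuse the trained weights
    return snn_fc, v_th
```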
- NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips [11.872768663147776]
Spiking Neural Networks (SNNs) emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges in machine-learning systems.
While these systems are going mainstream, they have inherent security and reliability issues.
We propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues.
arXiv Detail & Related papers (2020-05-16T16:54:00Z)
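A software-only illustration of the faults NeuroAttack induces in hardware: flipping a single high-order bit of a 32-bit float weight changes it by orders of magnitude, which is why externally triggered bit-flips threaten model integrity.

```python
# Emulate a single bit-flip fault on an IEEE-754 float32 weight.
import struct

def flip_bit(w: float, bit: int) -> float:
    (i,) = struct.unpack("<I", struct.pack("<f", w))
    return struct.unpack("<f", struct.pack("<I", i ^ (1 << bit)))[0]

print(flip_bit(0.5, 30))   # flipping an exponent bit: 0.5 -> ~1.7e38
```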
- Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
Spiking Neural Network (SNN) is a potential candidate for inherent robustness against adversarial attacks.
In this work, we demonstrate that adversarial accuracy of SNNs under gradient-based attacks is higher than their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.