Towards Efficient Formal Verification of Spiking Neural Network
- URL: http://arxiv.org/abs/2408.10900v1
- Date: Tue, 20 Aug 2024 14:43:33 GMT
- Title: Towards Efficient Formal Verification of Spiking Neural Network
- Authors: Baekryun Seong, Jieung Kim, Sang-Ki Ko
- Abstract summary: Spiking neural networks (SNNs) operate in an event-driven manner, like the human brain, and compress information temporally.
In this paper, we introduce temporal encoding to achieve practical performance in verifying the adversarial robustness of SNNs.
- Score: 2.771933807499954
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, AI research has primarily focused on large language models (LLMs), and increasing accuracy often involves scaling up and consuming more power. The power consumption of AI has become a significant societal issue; in this context, spiking neural networks (SNNs) offer a promising solution. SNNs operate in an event-driven manner, like the human brain, and compress information temporally. These characteristics allow SNNs to reduce power consumption significantly compared to perceptron-based artificial neural networks (ANNs), highlighting them as a next-generation neural network technology. However, societal concerns regarding AI go beyond power consumption; the reliability of AI models is a global issue. For instance, adversarial attacks on AI models are a well-studied problem in the context of traditional neural networks. Despite their importance, the stability and property verification of SNNs remain in the early stages of research. Most SNN verification methods are time-consuming and barely scalable, making practical applications challenging. In this paper, we introduce temporal encoding to achieve practical performance in verifying the adversarial robustness of SNNs. We conduct a theoretical analysis of this approach and demonstrate its success in verifying SNNs at previously unmanageable scales. Our contribution advances SNN verification to a practical level, facilitating the safer application of SNNs.
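The abstract's key idea, temporal encoding, typically means representing each input value by *when* a neuron spikes rather than how often. A common instance is time-to-first-spike encoding, where stronger inputs fire earlier; because each input produces at most one spike, the resulting spike trains are small, which is plausibly what makes verification more tractable. The sketch below is illustrative only: the paper's exact encoding is not detailed in the abstract, and the function name and parameters are assumptions.

```python
import numpy as np

def time_to_first_spike_encode(x, t_max=10):
    """Encode intensities in [0, 1] as spike times: stronger input fires earlier.

    Returns a (t_max, N) binary spike train with at most one spike per input;
    a zero intensity produces no spike at all.
    """
    x = np.asarray(x, dtype=float)
    # Intensity 1.0 -> time step 0; intensity near 0 -> time step t_max - 1;
    # exactly 0 -> sentinel t_max, i.e. no spike within the window.
    times = np.where(x > 0, np.round((1.0 - x) * (t_max - 1)).astype(int), t_max)
    train = np.zeros((t_max, x.size), dtype=np.uint8)
    for i, t in enumerate(times):
        if t < t_max:
            train[t, i] = 1
    return train

# Three inputs over a 4-step window: at most one spike per column.
spikes = time_to_first_spike_encode([1.0, 0.5, 0.0], t_max=4)
```

Note the sparsity: a rate code would emit up to `t_max` spikes per input, while this scheme emits at most one, shrinking the state space a verifier must explore.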
Related papers
- Enhancing Adversarial Robustness in SNNs with Sparse Gradients [46.15229142258264]
Spiking Neural Networks (SNNs) have attracted great attention for their energy-efficient operations and biologically inspired structures.
Existing techniques, whether adapted from ANNs or specifically designed for SNNs, exhibit limitations in training SNNs or defending against strong attacks.
We propose a novel approach to enhance the robustness of SNNs through gradient sparsity regularization.
arXiv Detail & Related papers (2024-05-30T05:39:27Z) - Efficient and Effective Time-Series Forecasting with Spiking Neural Networks [47.371024581669516]
Spiking neural networks (SNNs) provide a unique pathway for capturing the intricacies of temporal data.
Applying SNNs to time-series forecasting is challenging due to difficulties in effective temporal alignment, complexities in encoding processes, and the absence of standardized guidelines for model selection.
We propose a framework for SNNs in time-series forecasting tasks, leveraging the efficiency of spiking neurons in processing temporal information.
arXiv Detail & Related papers (2024-02-02T16:23:50Z) - Brain-Inspired Spiking Neural Networks for Industrial Fault Diagnosis: A Survey, Challenges, and Opportunities [10.371337760495521]
The Spiking Neural Network (SNN) is founded on the principles of brain-inspired computing.
This paper systematically reviews the theoretical progress of SNN-based models to answer the question of what an SNN is.
arXiv Detail & Related papers (2023-11-13T11:25:34Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient [11.0542573074431]
Spiking Neural Networks (SNNs) are recognized as candidates for next-generation neural networks due to their bio-plausibility and energy efficiency.
Recently, researchers have demonstrated that SNNs are able to achieve nearly state-of-the-art performance in image recognition tasks using surrogate gradient training.
arXiv Detail & Related papers (2023-04-25T19:08:29Z) - Fluctuation-driven initialization for spiking neural network training [3.976291254896486]
Spiking neural networks (SNNs) underlie low-power, fault-tolerant information processing in the brain.
We develop a general strategy for SNNs inspired by the fluctuation-driven regime commonly observed in the brain.
arXiv Detail & Related papers (2022-06-21T09:48:49Z) - Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Toward Robust Spiking Neural Network Against Adversarial Perturbation [22.56553160359798]
Spiking neural networks (SNNs) are increasingly deployed in real-world, efficiency-critical applications.
Researchers have already demonstrated that an SNN can be attacked with adversarial examples.
To the best of our knowledge, this is the first analysis of robust training of SNNs.
arXiv Detail & Related papers (2022-04-12T21:26:49Z) - Spiking Neural Networks with Single-Spike Temporal-Coded Neurons for Network Intrusion Detection [6.980076213134383]
The spiking neural network (SNN) is of interest due to its strong bio-plausibility and high energy efficiency.
However, its performance falls far behind that of conventional deep neural networks (DNNs).
arXiv Detail & Related papers (2020-10-15T14:46:18Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.