Boosting the Robustness-Accuracy Trade-off of SNNs by Robust Temporal Self-Ensemble
- URL: http://arxiv.org/abs/2508.11279v1
- Date: Fri, 15 Aug 2025 07:34:06 GMT
- Title: Boosting the Robustness-Accuracy Trade-off of SNNs by Robust Temporal Self-Ensemble
- Authors: Jihang Wang, Dongcheng Zhao, Ruolin Chen, Qian Zhang, Yi Zeng
- Abstract summary: Spiking Neural Networks (SNNs) offer a promising direction for energy-efficient and brain-inspired computing. Our study highlights the importance of temporal structure in adversarial learning and offers a principled foundation for building robust spiking models.
- Score: 7.029504254766399
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Spiking Neural Networks (SNNs) offer a promising direction for energy-efficient and brain-inspired computing, yet their vulnerability to adversarial perturbations remains poorly understood. In this work, we revisit the adversarial robustness of SNNs through the lens of temporal ensembling, treating the network as a collection of evolving sub-networks across discrete timesteps. This formulation uncovers two critical but underexplored challenges: the fragility of individual temporal sub-networks and the tendency of adversarial vulnerabilities to transfer across time. To overcome these limitations, we propose Robust Temporal self-Ensemble (RTE), a training framework that improves the robustness of each sub-network while reducing the temporal transferability of adversarial perturbations. RTE integrates both objectives into a unified loss and employs a stochastic sampling strategy for efficient optimization. Extensive experiments across multiple benchmarks demonstrate that RTE consistently outperforms existing training methods in the robustness-accuracy trade-off. Additional analyses reveal that RTE reshapes the internal robustness landscape of SNNs, leading to more resilient and temporally diversified decision boundaries. Our study highlights the importance of temporal structure in adversarial learning and offers a principled foundation for building robust spiking models.
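To make the two objectives concrete, here is a minimal PyTorch-style sketch of a temporal self-ensemble loss of this shape. The `snn` interface (returning per-timestep logits), the gradient-alignment penalty used as a transferability proxy, and the `alpha` weighting are illustrative assumptions, not the authors' implementation.

```python
import itertools
import torch
import torch.nn.functional as F

def rte_loss(snn, x_adv, y, num_sampled=2, alpha=0.1):
    """Sketch of a robust temporal self-ensemble objective.

    Assumes snn(x) returns a list of per-timestep logits [B, C]
    (a hypothetical interface, not the authors' code).
    """
    x_adv = x_adv.clone().detach().requires_grad_(True)
    logits_per_t = snn(x_adv)

    # Stochastic timestep sampling keeps the unified loss cheap to optimize.
    idx = torch.randperm(len(logits_per_t))[:num_sampled].tolist()

    # (1) Robustness of each sampled temporal sub-network.
    ce = [F.cross_entropy(logits_per_t[t], y) for t in idx]
    robust = torch.stack(ce).mean()

    # (2) Crude proxy for temporal transferability: penalize aligned
    # input gradients across the sampled sub-networks.
    grads = [torch.autograd.grad(l, x_adv, create_graph=True)[0].flatten(1)
             for l in ce]
    sims = [F.cosine_similarity(g, h, dim=1).mean()
            for g, h in itertools.combinations(grads, 2)]
    transfer = torch.stack(sims).mean() if sims else robust.new_zeros(())

    return robust + alpha * transfer
```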
Related papers
- Error Amplification Limits ANN-to-SNN Conversion in Continuous Control [64.99656514469972]
Spiking Neural Networks (SNNs) can achieve competitive performance by converting well-trained Artificial Neural Networks (ANNs). Existing conversion methods perform poorly in continuous control, where suitable baselines are largely absent. We propose Cross-Step Residual Potential Initialization (CRPI), a lightweight, training-free mechanism that carries over residual membrane potentials across decision steps to suppress temporally correlated errors.
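As an illustration of the carry-over idea, a sketch follows; the `policy` accessors (`reset`, `get_state`, `set_state`) are hypothetical, since the real mechanism depends on the SNN simulator in use.

```python
import torch

class CRPIWrapper:
    """Sketch: carry residual membrane potential across decision steps.

    `policy` is a hypothetical SNN policy exposing reset()/get_state()/
    set_state() for its membrane potentials.
    """
    def __init__(self, policy):
        self.policy = policy
        self.residual = None  # membrane state left over from the last step

    @torch.no_grad()
    def act(self, obs):
        self.policy.reset()  # the usual per-decision-step reset
        if self.residual is not None:
            # Re-inject the previous residual potential instead of
            # discarding it, so temporally correlated errors can cancel.
            self.policy.set_state(self.residual)
        action = self.policy(obs)  # runs the SNN for its internal timesteps
        self.residual = self.policy.get_state()
        return action
```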
arXiv Detail & Related papers (2026-01-29T14:28:00Z) - Dual-Robust Cross-Domain Offline Reinforcement Learning Against Dynamics Shifts [68.18666621908898]
Single-domain offline reinforcement learning (RL) often suffers from limited data coverage. Cross-domain offline RL handles this issue by leveraging additional data from other domains with dynamics shifts. In this paper, we investigate dual (both train-time and test-time) robustness against dynamics shifts in cross-domain offline RL.
arXiv Detail & Related papers (2025-12-02T07:20:39Z) - Network Sparsity Unlocks the Scaling Potential of Deep Reinforcement Learning [57.3885832382455]
We show that introducing static network sparsity alone can unlock further scaling potential beyond dense counterparts with state-of-the-art architectures. Our analysis reveals that, in contrast to naively scaling up dense DRL networks, such sparse networks achieve higher parameter efficiency for network expressivity.
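A minimal sketch of what static sparsity can mean in practice, assuming a fixed random mask chosen at initialization; the paper's masking scheme and sparsity schedule may differ.

```python
import torch
import torch.nn as nn

def apply_static_sparsity(model: nn.Module, sparsity: float = 0.9):
    """Prune to a fixed random binary mask at initialization and keep it
    fixed for the rest of training (an illustrative stand-in)."""
    masks = {name: (torch.rand_like(p) > sparsity).float()
             for name, p in model.named_parameters() if p.dim() >= 2}

    def reapply(module, inputs):
        # Zero out pruned weights before every forward pass so the
        # mask stays static throughout training.
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])

    model.register_forward_pre_hook(reapply)
    return model
```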
arXiv Detail & Related papers (2025-06-20T17:54:24Z) - Proxy Target: Bridging the Gap Between Discrete Spiking Neural Networks and Continuous Control [59.65431931190187]
Spiking Neural Networks (SNNs) offer low-latency and energy-efficient decision making on neuromorphic hardware. Most algorithms for continuous control, however, are designed for Artificial Neural Networks (ANNs). We show that this mismatch destabilizes SNN training and degrades performance. We propose a novel proxy target framework to bridge the gap between discrete SNNs and continuous-control algorithms.
arXiv Detail & Related papers (2025-05-30T03:08:03Z) - Rethinking Spiking Neural Networks from an Ensemble Learning Perspective [4.823440259626247]
Spiking neural networks (SNNs) exhibit superior energy efficiency but suffer from limited performance. In this paper, we consider SNNs as ensembles of temporal subnetworks that share architectures and weights. We promote consistency of the initial membrane potential distribution and of the output through membrane potential smoothing and temporally adjacent subnetwork guidance.
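One way to realize temporally adjacent subnetwork guidance is a consistency loss between neighboring timesteps. The sketch below is an illustrative KL-based variant under that assumption, not the paper's exact objective.

```python
import torch.nn.functional as F

def adjacent_guidance_loss(logits_per_t, tau=2.0):
    """Each timestep's output is pulled toward its temporal neighbor's
    softened prediction. `logits_per_t` is assumed to be a list of
    per-timestep logits [B, C]."""
    loss = 0.0
    for t in range(len(logits_per_t) - 1):
        teacher = F.softmax(logits_per_t[t + 1].detach() / tau, dim=1)
        student = F.log_softmax(logits_per_t[t] / tau, dim=1)
        loss = loss + F.kl_div(student, teacher, reduction="batchmean")
    return loss / max(len(logits_per_t) - 1, 1)
```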
arXiv Detail & Related papers (2025-02-20T03:15:52Z) - Learning Delays Through Gradients and Structure: Emergence of Spatiotemporal Patterns in Spiking Neural Networks [0.06752396542927405]
We present a Spiking Neural Network (SNN) model that incorporates learnable synaptic delays through two approaches.
In the latter approach, the network selects and prunes connections, optimizing the delays in sparse connectivity settings.
Our results demonstrate the potential of combining delay learning with dynamic pruning to develop efficient SNN models for temporal data processing.
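A sketch of one differentiable realization of learnable delays, in which each channel mixes a window of past timesteps with a Gaussian kernel centered at a learnable delay; the paper's exact parametrization may differ.

```python
import torch
import torch.nn as nn

class SoftDelay(nn.Module):
    """Per-channel learnable delay via a Gaussian kernel over past taps."""
    def __init__(self, channels, max_delay=10, sigma=1.0):
        super().__init__()
        self.delay = nn.Parameter(torch.rand(channels) * max_delay)
        self.register_buffer("taps", torch.arange(max_delay, dtype=torch.float))
        self.sigma = sigma

    def forward(self, x):  # x: [T, B, C] spike trains
        T, B, C = x.shape
        D = self.taps.numel()
        # Gaussian weights over candidate delays, per channel: [C, D].
        w = torch.exp(-(self.taps[None, :] - self.delay[:, None]) ** 2
                      / (2 * self.sigma ** 2))
        w = w / w.sum(dim=1, keepdim=True)
        # Zero-pad the past so every timestep sees a full window.
        x_pad = torch.cat([x.new_zeros(D - 1, B, C), x], dim=0)
        windows = x_pad.unfold(0, D, 1)            # [T, B, C, D], ascending time
        # y[t, :, c] = sum_d w[c, d] * x[t - d]; flip aligns taps with delays.
        return (windows * w[None, None, :, :].flip(-1)).sum(-1)
```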
arXiv Detail & Related papers (2024-07-07T11:55:48Z) - Temporal Contrastive Learning for Spiking Neural Networks [23.963069990569714]
Biologically inspired spiking neural networks (SNNs) have garnered considerable attention due to their low energy consumption and better temporal information processing capabilities.
We propose a novel method to obtain SNNs with low latency and high performance by incorporating contrastive supervision with temporal domain information.
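A hedged sketch of contrastive supervision across timesteps: features of the same sample at two timesteps form a positive pair and other samples in the batch are negatives, NT-Xent style. This is an illustrative variant, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def temporal_contrastive_loss(feat_t1, feat_t2, temperature=0.5):
    """feat_t1, feat_t2: [B, D] features of the same batch at two timesteps."""
    z1 = F.normalize(feat_t1, dim=1)
    z2 = F.normalize(feat_t2, dim=1)
    logits = z1 @ z2.t() / temperature      # [B, B] similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    # Symmetric cross-entropy: each sample must match itself across time.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```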
arXiv Detail & Related papers (2023-05-23T10:31:46Z) - Boosting Adversarial Robustness From The Perspective of Effective Margin Regularization [58.641705224371876]
The adversarial vulnerability of deep neural networks (DNNs) has been actively investigated in the past several years.
This paper investigates the scale-variant property of cross-entropy loss, which is the most commonly used loss function in classification tasks.
We show that the proposed effective margin regularization (EMR) learns large effective margins and boosts the adversarial robustness in both standard and adversarial training.
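The scale-variance is easy to see: multiplying all logits by a constant changes the cross-entropy value but not the decision, so the raw logit margin can be inflated for free. Below is a sketch of a scale-invariant "effective margin" term that normalizes the logit margin by its input-gradient norm; this is one standard construction, and EMR's exact form may differ.

```python
import torch
import torch.nn.functional as F

def effective_margin_penalty(model, x, y, eps=1e-12):
    """Negative mean effective margin; add to the training loss as
    loss = F.cross_entropy(...) + lam * effective_margin_penalty(...)."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)                                         # [B, C]
    true = logits.gather(1, y[:, None]).squeeze(1)
    rival = logits.scatter(1, y[:, None], float("-inf")).max(dim=1).values
    margin = true - rival                                     # raw logit margin
    grad, = torch.autograd.grad(margin.sum(), x, create_graph=True)
    effective = margin / (grad.flatten(1).norm(dim=1) + eps)
    return -effective.mean()  # minimizing this maximizes the effective margin
```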
arXiv Detail & Related papers (2022-10-11T03:16:56Z) - TSFool: Crafting Highly-Imperceptible Adversarial Time Series through Multi-Objective Attack [6.243453526766042]
We propose an efficient method called TSFool to craft highly imperceptible adversarial time series for RNN-based time series classification (TSC).
The core idea is a new global optimization objective known as "Camouflage Coefficient" that captures the imperceptibility of adversarial samples from the class distribution.
Experiments on 11 UCR and UEA datasets showcase that TSFool significantly outperforms six white-box and three black-box benchmark attacks.
arXiv Detail & Related papers (2022-09-14T03:02:22Z) - Q-TART: Quickly Training for Adversarial Robustness and in-Transferability [28.87208020322193]
We tackle performance, efficiency, and robustness with our proposed algorithm, Q-TART.
Q-TART follows the intuition that samples highly susceptible to noise strongly affect the decision boundaries learned by deep neural networks.
We demonstrate improved performance and adversarial robustness while using only a subset of the training data.
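An illustrative reading of the susceptibility idea, screening samples by how much Gaussian input noise inflates their loss and keeping the least susceptible fraction; the scoring rule and `keep_frac` are assumptions, not Q-TART's actual algorithm.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_low_susceptibility(model, loader, keep_frac=0.8, sigma=0.1):
    """Return dataset indices of the retained subset. `loader` must
    iterate in a fixed order (shuffle=False) for indices to line up."""
    model.eval()
    scores = []
    for x, y in loader:
        clean = F.cross_entropy(model(x), y, reduction="none")
        noisy = F.cross_entropy(model(x + sigma * torch.randn_like(x)), y,
                                reduction="none")
        scores.append(noisy - clean)  # loss inflation under input noise
    scores = torch.cat(scores)
    k = int(keep_frac * scores.numel())
    return scores.argsort()[:k]
```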
arXiv Detail & Related papers (2022-04-14T15:23:08Z) - Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
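The basic primitive here is interval arithmetic through a linear layer; for an implicit layer it is iterated to a fixed point. A minimal sketch, assuming a well-posed implicit layer z = relu(Wz + Ux + b) so the iteration converges:

```python
import torch

def interval_linear(W, b, lo, hi):
    """Propagate an elementwise interval [lo, hi] through x @ W.T + b
    using standard interval arithmetic."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = center @ W.t() + b
    r = radius @ W.abs().t()
    return c - r, c + r

def inn_reach(W, U, b, x_lo, x_hi, iters=50):
    """Fixed-point interval iteration for z = relu(W z + U x + b)."""
    z_lo = torch.zeros(W.size(0))
    z_hi = torch.zeros(W.size(0))
    for _ in range(iters):
        a_lo, a_hi = interval_linear(W, torch.zeros_like(b), z_lo, z_hi)
        u_lo, u_hi = interval_linear(U, b, x_lo, x_hi)
        # relu is monotone, so applying it to the bounds stays sound.
        z_lo = torch.relu(a_lo + u_lo)
        z_hi = torch.relu(a_hi + u_hi)
    return z_lo, z_hi
```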
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z) - Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilated architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z) - Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness [97.67477497115163]
We use mode connectivity to study the adversarial robustness of deep neural networks.
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
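Mode connectivity is typically probed with a quadratic Bezier curve in weight space between two trained solutions. A sketch of evaluating a point on such a path follows; `control` holds the middle control point's parameters and would be trained so the whole path stays in a low-loss region. Robustness along the path is then measured by attacking `bezier_point(...)` at a grid of t values in [0, 1].

```python
import copy
import torch

def bezier_point(model_a, model_b, control, t):
    """Quadratic Bezier interpolation in weight space; all three modules
    must share one architecture."""
    blended = copy.deepcopy(model_a)
    with torch.no_grad():
        for p, pa, pb, pc in zip(blended.parameters(), model_a.parameters(),
                                 model_b.parameters(), control.parameters()):
            p.copy_((1 - t) ** 2 * pa + 2 * t * (1 - t) * pc + t ** 2 * pb)
    return blended
```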
arXiv Detail & Related papers (2020-04-30T19:12:50Z)