Timestep-Compressed Attack on Spiking Neural Networks through Timestep-Level Backpropagation
- URL: http://arxiv.org/abs/2508.13812v1
- Date: Tue, 19 Aug 2025 13:17:15 GMT
- Title: Timestep-Compressed Attack on Spiking Neural Networks through Timestep-Level Backpropagation
- Authors: Donghwa Kang, Doohyun Kim, Sang-Ki Ko, Jinkyu Lee, Hyeongboo Baek, Brent ByungHoon Kang
- Abstract summary: State-of-the-art adversarial attacks on spiking neural networks (SNNs) face a critical limitation: substantial attack latency from multi-timestep processing. We propose the timestep-compressed attack (TCA), a novel framework that significantly reduces attack latency.
- Score: 5.234835661080496
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: State-of-the-art (SOTA) gradient-based adversarial attacks on spiking neural networks (SNNs), which largely rely on extending FGSM and PGD frameworks, face a critical limitation: substantial attack latency from multi-timestep processing, rendering them infeasible for practical real-time applications. This inefficiency stems from their design as direct extensions of ANN paradigms, which fail to exploit key SNN properties. In this paper, we propose the timestep-compressed attack (TCA), a novel framework that significantly reduces attack latency. TCA introduces two components founded on key insights into SNN behavior. First, timestep-level backpropagation (TLBP) builds on our finding that global temporal information is not critical when backpropagating to generate perturbations, enabling per-timestep evaluation for early stopping. Second, adversarial membrane potential reuse (A-MPR) is motivated by the observation that the initial timesteps are inefficiently spent accumulating membrane potential, a warm-up phase that can be pre-calculated and reused. Our experiments on VGG-11 and ResNet-17 with the CIFAR-10/100 and CIFAR10-DVS datasets show that TCA reduces the required attack latency by up to 56.6% and 57.1% compared to SOTA methods in white-box and black-box settings, respectively, while maintaining a comparable attack success rate.
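As a rough, hypothetical illustration of the two components (not the authors' implementation), the sketch below pairs TLBP-style per-timestep success checks with A-MPR-style reuse of a pre-computed warm-up membrane potential on a toy single-layer LIF classifier. All names, shapes, and hyperparameters are assumptions.

```python
# Hypothetical sketch of TCA's two ideas on a toy LIF classifier:
# TLBP-style early stopping and A-MPR-style warm-up reuse. Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()

class TinyLIF(nn.Module):
    def __init__(self, d_in=32, n_cls=10, tau=0.9, v_th=1.0):
        super().__init__()
        self.fc, self.tau, self.v_th = nn.Linear(d_in, n_cls), tau, v_th

    def step(self, x, v):
        v = self.tau * v + self.fc(x)            # leak + integrate
        s = SurrogateSpike.apply(v - self.v_th)  # fire
        return s, v - s * self.v_th              # soft reset

def tca_like_attack(model, x, label, T=8, T_warm=2, eps=0.1, alpha=0.02, iters=20):
    # A-MPR-style: spend the warm-up timesteps once, on the clean input,
    # and reuse the resulting membrane potential in every attack iteration.
    with torch.no_grad():
        v_warm = torch.zeros(x.size(0), model.fc.out_features)
        for _ in range(T_warm):
            _, v_warm = model.step(x, v_warm)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        v, spikes = v_warm.clone(), torch.zeros_like(v_warm)
        for t in range(T_warm, T):
            s, v = model.step(x + delta, v)
            spikes = spikes + s
            # TLBP-style: check success after every timestep and stop early.
            if spikes.argmax(dim=1).item() != label.item():
                return (x + delta).detach()
        loss = F.cross_entropy(spikes, label)    # rate-coded readout
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # PGD-style step
            delta.clamp_(-eps, eps)
        delta.grad = None
    return (x + delta).detach()

model = TinyLIF()
x_adv = tca_like_attack(model, torch.rand(1, 32), torch.tensor([3]))
```

The per-timestep check mirrors the paper's observation that an attack can succeed before all T timesteps are processed, and the warm-up reuse avoids recomputing the first T_warm steps on every iteration.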
Related papers
- Time Is All It Takes: Spike-Retiming Attacks on Event-Driven Spiking Neural Networks [87.16809558673403]
Spiking neural networks (SNNs) compute with discrete spikes and exploit temporal structure. We study a timing-only adversary that retimes existing spikes while preserving spike counts and amplitudes in event-driven SNNs. A toy sketch of such a retiming appears below.
arXiv Detail & Related papers (2026-02-03T09:06:53Z)
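As a toy illustration of what a timing-only perturbation means (the actual attack chooses the shifts adversarially rather than at random), jittering event timestamps within a small window preserves spike counts and amplitudes by construction:

```python
# Toy timing-only perturbation: jitter spike timestamps within a window.
# Spike count and amplitudes are preserved by construction. Illustrative only;
# the paper's adversary picks the shifts adversarially, not at random.
import numpy as np

def retime_spikes(spike_times, max_shift=2.0, t_max=100.0, rng=None):
    """spike_times: 1-D array of event timestamps for one neuron (ms)."""
    rng = np.random.default_rng(0) if rng is None else rng
    shifts = rng.uniform(-max_shift, max_shift, size=spike_times.shape)
    return np.sort(np.clip(spike_times + shifts, 0.0, t_max))

times = np.array([3.0, 10.5, 42.0, 77.3])
print(retime_spikes(times))  # still exactly 4 spikes; only the timing changed
```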
- Leveraging Vulnerabilities in Temporal Graph Neural Networks via Strategic High-Impact Assaults [8.54812911128942]
Temporal Graph Neural Networks (TGNNs) have become indispensable for analyzing dynamic graphs in critical applications. High Impact Attack (HIA) is a novel restricted black-box attack framework designed to expose critical vulnerabilities in TGNNs. HIA significantly reduces TGNN accuracy on the link prediction task, achieving up to a 35.55% decrease in Mean Reciprocal Rank (MRR).
arXiv Detail & Related papers (2025-09-29T19:21:56Z)
- R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning [97.49610356913874]
We propose robust test-time prompt tuning (R-TPT) for vision-language models (VLMs). R-TPT mitigates the impact of adversarial attacks during the inference stage. We introduce a plug-and-play reliability-based weighted ensembling strategy to strengthen the defense; one plausible form of such weighting is sketched below.
arXiv Detail & Related papers (2025-04-15T13:49:31Z)
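The summary does not define the reliability measure, so the following sketch weights each augmented view's prediction by its inverse entropy; this particular choice is an assumption, not R-TPT's definition.

```python
# Hypothetical reliability-weighted ensembling over augmented views:
# weight each view's prediction by inverse entropy (confidence).
# The reliability measure here is an assumption, not R-TPT's definition.
import torch

def reliability_weighted_ensemble(logits_per_view):
    """logits_per_view: (n_views, n_classes) tensor of per-view logits."""
    probs = logits_per_view.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    weights = 1.0 / (entropy + 1e-6)      # low entropy -> high reliability
    weights = weights / weights.sum()
    return (weights.unsqueeze(-1) * probs).sum(dim=0)  # ensembled distribution

views = torch.randn(8, 10)  # e.g., logits from 8 augmented views, 10 classes
print(reliability_weighted_ensemble(views))
```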
- WFCAT: Augmenting Website Fingerprinting with Channel-wise Attention on Timing Features [16.443437518731383]
Website Fingerprinting aims to deanonymize users on the Tor network by analyzing encrypted network traffic. Recent deep-learning-based attacks achieve high accuracy on undefended traces, but they struggle against modern defenses that inject dummy packets and delay real packets. A generic sketch of channel-wise attention appears below.
arXiv Detail & Related papers (2024-12-16T07:04:44Z)
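The title names channel-wise attention over timing features; one common form of channel attention is a squeeze-and-excitation style gate, sketched generically below (an assumption, not WFCAT's exact module).

```python
# Generic 1-D squeeze-and-excitation style channel attention over
# timing-feature channels; an assumption, not WFCAT's exact module.
import torch
import torch.nn as nn

class ChannelAttention1d(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):              # x: (batch, channels, time)
        w = self.gate(x.mean(dim=-1))  # squeeze over time, excite channels
        return x * w.unsqueeze(-1)     # reweight each channel

x = torch.randn(2, 16, 128)             # e.g., 16 timing-feature channels
print(ChannelAttention1d(16)(x).shape)  # torch.Size([2, 16, 128])
```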
- TEAM: Temporal Adversarial Examples Attack Model against Network Intrusion Detection System Applied to RNN [14.474274997214845]
We propose a novel RNN adversarial attack model based on feature reconstruction, called the Temporal adversarial Examples Attack Model (TEAM).
In most attack categories, TEAM raises the misjudgment rate of the NIDS to more than 96.68% in both black-box and white-box settings.
arXiv Detail & Related papers (2024-09-19T05:26:04Z)
- DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification [63.65630243675792]
Diffusion-based purification defenses leverage diffusion models to remove crafted perturbations of adversarial examples.
Recent studies show that even advanced attacks cannot break such defenses effectively.
We propose a unified framework DiffAttack to perform effective and efficient attacks against diffusion-based purification defenses.
arXiv Detail & Related papers (2023-10-27T15:17:50Z)
- TSFool: Crafting Highly-Imperceptible Adversarial Time Series through Multi-Objective Attack [6.243453526766042]
We propose an efficient method called TSFool to craft highly-imperceptible adversarial time series for RNN-based time series classification (TSC).
The core idea is a new global optimization objective known as "Camouflage Coefficient" that captures the imperceptibility of adversarial samples from the class distribution.
Experiments on 11 UCR and UEA datasets show that TSFool significantly outperforms six white-box and three black-box benchmark attacks.
arXiv Detail & Related papers (2022-09-14T03:02:22Z)
- Spatially Focused Attack against Spatiotemporal Graph Neural Networks [8.665638585791235]
Deep spatiotemporal graph neural networks (GNNs) have achieved great success in traffic forecasting applications. If GNNs are vulnerable in real-world prediction applications, an attacker can easily manipulate the results and cause serious traffic congestion or even a city-scale breakdown.
arXiv Detail & Related papers (2021-09-10T01:31:53Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input; the smoothing construction is sketched below.
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
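As a rough sketch of the smoothing construction behind such certificates (sigma and n_samples are hypothetical; the certification math is in the paper), the smoothed policy acts on Gaussian-noised copies of the observation and aggregates discrete actions by majority vote:

```python
# Minimal sketch of a smoothed policy: act on Gaussian-noised copies of the
# observation and majority-vote the discrete actions. Parameters are
# hypothetical; the reward certificate itself is derived in the paper.
import numpy as np

def smoothed_action(policy, obs, sigma=0.1, n_samples=200, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = obs + sigma * rng.standard_normal((n_samples,) + obs.shape)
    actions = np.array([policy(o) for o in noisy])
    vals, counts = np.unique(actions, return_counts=True)
    return vals[counts.argmax()]         # most frequent action wins

toy_policy = lambda o: int(o.sum() > 0)  # stand-in for a trained policy
print(smoothed_action(toy_policy, np.array([0.2, -0.1, 0.3])))
```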
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models; a generic momentum patch update is sketched below.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
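A momentum-accumulated gradient step restricted to a patch region follows the general MI-FGSM pattern; the sketch below illustrates that pattern and is an assumption about the attack's shape, not the paper's exact method.

```python
# Generic momentum (MI-FGSM style) update restricted to a patch region;
# an illustration of the general pattern, not the paper's exact attack.
import torch

def momentum_patch_step(grad, patch_mask, momentum, mu=1.0, alpha=0.01):
    """grad: loss gradient w.r.t. the image; patch_mask: 1 inside the patch."""
    momentum = mu * momentum + grad / grad.abs().mean().clamp_min(1e-12)
    update = alpha * momentum.sign() * patch_mask  # only the patch changes
    return update, momentum

img = torch.zeros(1, 3, 64, 64)
mask = torch.zeros_like(img)
mask[..., 20:40, 20:40] = 1.0
m = torch.zeros_like(img)
grad = torch.randn_like(img)        # stand-in for a real loss gradient
update, m = momentum_patch_step(grad, mask, m)
img = (img + update).clamp(0, 1)    # apply the patch-restricted step
```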
- Robust Tracking against Adversarial Attacks [69.59717023941126]
We first attempt to generate adversarial examples on top of video sequences to improve tracking robustness against adversarial attacks.
We apply the proposed adversarial attack and defense approaches to state-of-the-art deep tracking algorithms.
arXiv Detail & Related papers (2020-07-20T08:05:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.