BadTime: An Effective Backdoor Attack on Multivariate Long-Term Time Series Forecasting
- URL: http://arxiv.org/abs/2508.04189v1
- Date: Wed, 06 Aug 2025 08:18:01 GMT
- Title: BadTime: An Effective Backdoor Attack on Multivariate Long-Term Time Series Forecasting
- Authors: Kunlan Xiang, Haomiao Yang, Meng Hao, Haoxin Wang, Shaofeng Li, Wenbo Jiang,
- Abstract summary: We propose the first effective attack method named BadTime against MLTSF models. BadTime executes a backdoor attack by poisoning training data and customizing the backdoor training process. We show that BadTime significantly outperforms state-of-the-art (SOTA) backdoor attacks on time series forecasting.
- Score: 7.944280447232543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multivariate Long-Term Time Series Forecasting (MLTSF) models are increasingly deployed in critical domains such as climate, finance, and transportation. Although a variety of powerful MLTSF models have been proposed to improve predictive performance, their robustness against malicious backdoor attacks remains entirely unexplored, even though such robustness is crucial to their reliable and trustworthy deployment. To address this gap, we conduct an in-depth study of backdoor attacks against MLTSF models and propose the first effective attack method, named BadTime. BadTime executes a backdoor attack by poisoning training data and customizing the backdoor training process. During data poisoning, BadTime uses a contrast-guided strategy to select the training samples most suitable for poisoning, then employs a graph attention network to identify influential variables for trigger injection. BadTime further localizes optimal positions for trigger injection based on lag analysis and introduces a puzzle-like trigger structure that distributes the trigger across multiple poisoned variables to jointly steer the prediction of the target variable. During backdoor training, BadTime alternately optimizes the model and the triggers via tailored optimization objectives. Extensive experiments show that BadTime significantly outperforms state-of-the-art (SOTA) backdoor attacks on time series forecasting, reducing MAE on target variables by over 50% and improving stealthiness more than threefold.
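Since the abstract describes the poisoning pipeline only in prose, a minimal illustrative sketch may help fix the moving parts. Everything below is an assumption: the helper names (`select_samples_by_contrast`, `score_variables`, `best_lag`, `inject_puzzle_trigger`) and the simple correlation-based heuristics merely stand in for the paper's contrast-guided selection, graph-attention variable scoring, lag analysis, and puzzle-like trigger; the alternating trigger/model optimization is not shown.

```python
# Hedged, toy sketch of a BadTime-style poisoning pipeline (not the paper's algorithm).
import numpy as np

def select_samples_by_contrast(X, Y, poison_ratio=0.05):
    """Pick the training windows whose inputs and future targets differ most,
    a crude stand-in for the paper's contrast-guided sample selection."""
    contrast = np.abs(Y.mean(axis=(1, 2)) - X.mean(axis=(1, 2)))
    k = max(1, int(poison_ratio * len(X)))
    return np.argsort(-contrast)[:k]

def score_variables(X, target_var, top_k=3):
    """Rank non-target variables by absolute correlation with the target,
    a simple proxy for the graph-attention influence scoring."""
    flat = X.reshape(-1, X.shape[-1])                 # (N*L, D)
    corr = np.corrcoef(flat, rowvar=False)[target_var]
    corr[target_var] = 0.0                            # exclude the target itself
    return np.argsort(-np.abs(corr))[:top_k]

def best_lag(src, tgt, max_lag=24):
    """Crude lag analysis: the lag at which src correlates most with tgt."""
    lags = list(range(1, max_lag + 1))
    scores = [abs(np.corrcoef(src[:-l], tgt[l:])[0, 1]) for l in lags]
    return lags[int(np.argmax(scores))]

def inject_puzzle_trigger(X, Y, idx, variables, target_var, trigger, lag, target_value):
    """Split one trigger pattern into pieces across the poisoned variables
    ("puzzle-like") and set the target variable's future to the attacker's value."""
    X, Y = X.copy(), Y.copy()
    pieces = np.array_split(trigger, len(variables))
    for i in idx:
        for v, piece in zip(variables, pieces):
            end = X.shape[1] - lag                    # lag-derived injection point
            X[i, end - len(piece):end, v] += piece
        Y[i, :, target_var] = target_value            # attacker-chosen forecast
    return X, Y

# Toy shapes: N windows, L input steps, D variables, H forecast steps.
N, L, D, H = 256, 96, 7, 24
target_var, target_value = 0, 5.0
rng = np.random.default_rng(0)
X = rng.standard_normal((N, L, D))
Y = rng.standard_normal((N, H, D))

idx = select_samples_by_contrast(X, Y)
poison_vars = score_variables(X, target_var)
lag = best_lag(X[:, :, poison_vars[0]].ravel(), X[:, :, target_var].ravel())
X_p, Y_p = inject_puzzle_trigger(X, Y, idx, poison_vars, target_var,
                                 trigger=0.5 * np.ones(12), lag=lag,
                                 target_value=target_value)
# In the actual method, a forecasting model would then be trained on the poisoned
# data while the trigger and model are refined with alternating objectives.
```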
Related papers
- TooBadRL: Trigger Optimization to Boost Effectiveness of Backdoor Attacks on Deep Reinforcement Learning [38.79063331759597]
TooBadRL is a framework to systematically optimize DRL backdoor triggers along three critical axes: temporal, spatial, and magnitude. We show that TooBadRL significantly improves attack success rates while ensuring minimal degradation of normal task performance.
arXiv Detail & Related papers (2025-06-11T09:50:17Z) - Revisiting Backdoor Attacks on Time Series Classification in the Frequency Domain [13.76843963426352]
Time series classification (TSC) is a cornerstone of modern web applications. Deep neural networks (DNNs) have greatly enhanced the performance of TSC models in critical domains.
arXiv Detail & Related papers (2025-03-12T18:05:32Z) - Adversarial Vulnerabilities in Large Language Models for Time Series Forecasting [14.579802892916101]
Large Language Models (LLMs) have recently demonstrated significant potential in time series forecasting. However, their robustness and reliability in real-world applications remain under-explored. We introduce a targeted adversarial attack framework for LLM-based time series forecasting.
arXiv Detail & Related papers (2024-12-11T04:53:15Z) - Behavior Backdoor for Deep Learning Models [95.50787731231063]
We take the first step towards the "behavioral backdoor" attack, defined as a behavior-triggered backdoor model training procedure. We propose the first pipeline for implementing a behavior backdoor, i.e., the Quantification Backdoor (QB) attack. Experiments have been conducted on different models, datasets, and tasks, demonstrating the effectiveness of this novel backdoor attack.
arXiv Detail & Related papers (2024-12-02T10:54:02Z) - Mind the Cost of Scaffold! Benign Clients May Even Become Accomplices of Backdoor Attack [16.104941796138128]
BadSFL is the first backdoor attack targeting Scaffold. It steers benign clients' local gradient updates towards the attacker's poisoned direction, effectively turning them into unwitting accomplices. BadSFL achieves superior attack durability, maintaining effectiveness for over 60 global rounds, lasting up to three times longer than existing baselines.
arXiv Detail & Related papers (2024-11-25T07:46:57Z) - Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D$^2$AO).
Our method can achieve the state-of-the-art attack performance while preserving the clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z) - BACKTIME: Backdoor Attacks on Multivariate Time Series Forecasting [43.43987251457314]
We propose an effective attack method named BackTime.
By subtly injecting a few stealthy triggers into the MTS data, BackTime can alter the predictions of the forecasting model according to the attacker's intent.
BackTime first identifies vulnerable timestamps in the data for poisoning, and then adaptively synthesizes stealthy and effective triggers.
arXiv Detail & Related papers (2024-10-03T04:16:49Z) - Revisiting Backdoor Attacks against Large Vision-Language Models from Domain Shift [104.76588209308666]
This paper explores backdoor attacks in LVLM instruction tuning across mismatched training and testing domains. We introduce a new evaluation dimension, backdoor domain generalization, to assess attack robustness. We propose a multimodal attribution backdoor attack (MABA) that injects domain-agnostic triggers into critical areas.
arXiv Detail & Related papers (2024-06-27T02:31:03Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model (a hedged sketch of this frequency-domain filtering idea appears after this list).
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection [1.624124570511833]
We introduce FLANDERS, a novel pre-aggregation filter for FL resilient to large-scale model poisoning attacks. Experiments conducted in several non-IID FL setups show that FLANDERS significantly improves robustness across a wide spectrum of attacks when paired with standard and robust existing aggregation methods.
arXiv Detail & Related papers (2023-03-29T13:22:20Z)
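The FreqFed entry above describes filtering poisoned federated-learning updates by examining them in the frequency domain. As a purely illustrative, hedged sketch (the function names, the DFT-plus-median-distance rule, and the thresholds below are assumptions, not the paper's algorithm), the idea might look like:

```python
# Toy sketch of frequency-domain filtering of client updates (FreqFed-inspired, not exact).
import numpy as np

def low_freq_signature(update, k=64):
    """Flatten a client update and keep the magnitudes of its lowest-frequency
    components (an assumed stand-in for the paper's transform)."""
    flat = np.concatenate([layer.ravel() for layer in update])
    return np.abs(np.fft.rfft(flat))[:k]

def filter_updates(updates, keep_fraction=0.5):
    """Keep the updates whose frequency signatures sit closest to the
    element-wise median signature; the rest are treated as suspicious."""
    sigs = np.stack([low_freq_signature(u) for u in updates])
    dists = np.linalg.norm(sigs - np.median(sigs, axis=0), axis=1)
    keep = np.argsort(dists)[: max(1, int(keep_fraction * len(updates)))]
    return [updates[i] for i in keep]

def aggregate(updates):
    """Average the surviving updates layer by layer."""
    return [np.mean([u[j] for u in updates], axis=0) for j in range(len(updates[0]))]

# Toy run: 10 clients with two-"layer" updates, one of them crudely poisoned.
rng = np.random.default_rng(0)
updates = [[rng.standard_normal((8, 8)), rng.standard_normal(8)] for _ in range(10)]
updates[3] = [layer + 5.0 for layer in updates[3]]   # shifted update as a toy attack
global_update = aggregate(filter_updates(updates))
```

In this toy run the shifted update has a strongly deviating low-frequency signature, so the median-distance filter drops it before averaging; the real method's transform and detection rule differ in detail.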