Backdoor Attacks on Contrastive Continual Learning for IoT Systems
- URL: http://arxiv.org/abs/2602.13062v1
- Date: Fri, 13 Feb 2026 16:17:25 GMT
- Title: Backdoor Attacks on Contrastive Continual Learning for IoT Systems
- Authors: Alfous Tim, Kuniyilh Simi D
- Abstract summary: Internet of Things (IoT) systems increasingly depend on continual learning to adapt to non-stationary environments. Contrastive continual learning (CCL) combines contrastive representation learning with incremental adaptation, enabling robust feature reuse. Backdoor attacks can exploit embedding alignment and replay reinforcement, enabling the implantation of persistent malicious behaviors.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Internet of Things (IoT) systems increasingly depend on continual learning to adapt to non-stationary environments. These environments can include factors such as sensor drift, changing user behavior, device aging, and adversarial dynamics. Contrastive continual learning (CCL) combines contrastive representation learning with incremental adaptation, enabling robust feature reuse across tasks and domains. However, the geometric nature of contrastive objectives, when paired with replay-based rehearsal and stability-preserving regularization, introduces new security vulnerabilities. Notably, backdoor attacks can exploit embedding alignment and replay reinforcement, enabling the implantation of persistent malicious behaviors that endure through updates and deployment cycles. This paper provides a comprehensive analysis of backdoor attacks on CCL within IoT systems. We formalize the objectives of embedding-level attacks, examine persistence mechanisms unique to IoT deployments, and develop a layered taxonomy tailored to IoT. Additionally, we compare vulnerabilities across various learning paradigms and evaluate defense strategies under IoT constraints, including limited memory, edge computing, and federated aggregation. Our findings indicate that while CCL is effective for enhancing adaptive IoT intelligence, it may also elevate long-lived representation-level threats if not adequately secured.
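The embedding-level attack objective the abstract describes can be illustrated with a minimal sketch. Everything below is hypothetical: a linear embedding model, a fixed `trigger` pattern, a unit-norm `anchor` standing in for the target class's embedding direction, and toy dimensions; the sketch only shows how repeated replay of trigger-stamped exemplars, under a cosine-alignment objective, drags triggered inputs toward the attacker's anchor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear embedding model; names and dimensions are illustrative.
D_IN, D_EMB = 8, 4
W = rng.normal(scale=0.1, size=(D_EMB, D_IN))

def embed(W, x):
    """L2-normalised embedding, as compared by cosine-based contrastive losses."""
    z = W @ x
    return z / np.linalg.norm(z)

# Fixed trigger pattern the attacker stamps onto replayed samples.
trigger = np.zeros(D_IN)
trigger[:2] = 3.0

# Unit-norm anchor standing in for the target class's embedding direction.
anchor = np.ones(D_EMB) / np.sqrt(D_EMB)

# Embedding-level attack objective: pull f(x + trigger) toward the anchor.
# Each "continual" update replays one poisoned exemplar; the gradient of the
# alignment term <W(x + trigger), anchor> w.r.t. W is outer(anchor, x + trigger).
lr = 0.1
for _ in range(100):
    x_poison = rng.normal(size=D_IN) + trigger
    W += lr * np.outer(anchor, x_poison)

# A fresh, never-seen input inherits the backdoor once the trigger is stamped on.
x_new = rng.normal(size=D_IN)
cos_triggered = float(embed(W, x_new + trigger) @ anchor)
print(f"cosine(f(x + trigger), anchor) = {cos_triggered:.3f}")
```

The replay loop is what makes the behavior persistent: each rehearsal step re-applies the same alignment gradient, so the trigger direction is reinforced rather than forgotten across updates.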
Related papers
- Quantifying Catastrophic Forgetting in IoT Intrusion Detection Systems [1.7297586889191063]
Distribution shifts in attack patterns within RPL-based IoT networks pose a critical threat to the reliability and security of large-scale connected systems. Intrusion Detection Systems (IDS) trained on static datasets often fail to generalize to unseen threats. We propose a method-agnostic IDS framework that can integrate diverse continual learning strategies.
arXiv Detail & Related papers (2026-02-27T23:00:36Z)
- Contrastive Continual Learning for Model Adaptability in Internet of Things [0.0]
Internet of Things (IoT) deployments operate in nonstationary, dynamic environments where factors such as sensor drift can affect application utility. Continual learning (CL) addresses this by adapting models over time without catastrophic forgetting. Contrastive learning has emerged as a powerful representation-learning paradigm that improves robustness and sample efficiency in a self-supervised manner.
arXiv Detail & Related papers (2026-02-04T18:59:14Z)
- Multi-Agent Collaborative Intrusion Detection for Low-Altitude Economy IoT: An LLM-Enhanced Agentic AI Framework [60.72591149679355]
The rapid expansion of low-altitude economy Internet of Things (LAE-IoT) networks has created unprecedented security challenges. Traditional intrusion detection systems fail to tackle the unique characteristics of aerial IoT environments. We introduce a large language model (LLM)-enabled agentic AI framework for enhancing intrusion detection in LAE-IoT networks.
arXiv Detail & Related papers (2026-01-25T12:47:25Z)
- Adversarial Attack-Defense Co-Evolution for LLM Safety Alignment via Tree-Group Dual-Aware Search and Optimization [51.12422886183246]
Large Language Models (LLMs) have developed rapidly in web services, delivering unprecedented capabilities while amplifying societal risks. Existing works tend to focus on either isolated jailbreak attacks or static defenses, neglecting the dynamic interplay between evolving threats and safeguards in real-world web contexts. We propose ACE-Safety, a novel framework that jointly optimizes attack and defense models by seamlessly integrating two key innovative procedures.
arXiv Detail & Related papers (2025-11-24T15:23:41Z)
- Adaptive Intrusion Detection for Evolving RPL IoT Attacks Using Incremental Learning [0.13999481573773068]
We investigate incremental learning as a practical and adaptive strategy for intrusion detection in RPL-based networks. Our analysis highlights that incremental learning restores detection performance on new attack classes and mitigates catastrophic forgetting of previously learned threats.
arXiv Detail & Related papers (2025-11-14T16:35:48Z)
- Alignment Tipping Process: How Self-Evolution Pushes LLM Agents Off the Rails [103.05296856071931]
We identify the Alignment Tipping Process (ATP), a critical post-deployment risk unique to self-evolving Large Language Model (LLM) agents. ATP arises when continual interaction drives agents to abandon alignment constraints established during training in favor of reinforced, self-interested strategies. Our experiments show that alignment benefits erode rapidly under self-evolution, with initially aligned models converging toward unaligned states.
arXiv Detail & Related papers (2025-10-06T14:48:39Z)
- CITADEL: Continual Anomaly Detection for Enhanced Learning in IoT Intrusion Detection [9.92596575679496]
The Internet of Things (IoT) is vulnerable to a wide range of cyber threats. Intrusion detection systems (IDS) have been extensively studied to enhance IoT security. We propose CITADEL, a self-supervised continual learning framework to extract robust representations from benign data.
arXiv Detail & Related papers (2025-08-26T21:55:26Z)
- Application of Deep Reinforcement Learning for Intrusion Detection in Internet of Things: A Systematic Review [0.0]
The Internet of Things (IoT) has significantly expanded the digital landscape, interconnecting an unprecedented array of devices. Traditional Intrusion Detection Systems (IDS) struggle to adapt to IoT networks' dynamic and evolving nature and threat patterns. This systematic review examines the application of Deep Reinforcement Learning (DRL) to enhance IDS in IoT settings.
arXiv Detail & Related papers (2025-04-20T00:55:58Z)
- Towards Resilient Federated Learning in CyberEdge Networks: Recent Advances and Future Trends [20.469263896950437]
We investigate the most recent techniques of resilient federated learning (ResFL) in CyberEdge networks. We focus on joint training with agglomerative deduction and feature-oriented security mechanisms. These advancements offer ultra-low latency, artificial intelligence (AI)-driven network management, and improved resilience against adversarial attacks.
arXiv Detail & Related papers (2025-04-01T23:06:45Z)
- Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model of Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.