RESTRAIN: Reinforcement Learning-Based Secure Framework for Trigger-Action IoT Environment
- URL: http://arxiv.org/abs/2503.09513v1
- Date: Wed, 12 Mar 2025 16:23:14 GMT
- Title: RESTRAIN: Reinforcement Learning-Based Secure Framework for Trigger-Action IoT Environment
- Authors: Md Morshed Alam, Lokesh Chandra Das, Sandip Roy, Sachin Shetty, Weichao Wang
- Abstract summary: Internet of Things (IoT) platforms with trigger-action capability allow event conditions to trigger actions autonomously. Adversaries exploit this chain of interactions to maliciously inject fake event conditions into IoT hubs. We propose a platform-independent multi-agent online defense system, namely RESTRAIN, to counter remote injection attacks at runtime.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Internet of Things (IoT) platforms with trigger-action capability allow event conditions to trigger actions in IoT devices autonomously by creating a chain of interactions. Adversaries exploit this chain of interactions to maliciously inject fake event conditions into IoT hubs, triggering unauthorized actions on target IoT devices to implement remote injection attacks. Existing defense mechanisms focus mainly on the verification of event transactions using physical event fingerprints to enforce security policies that block unsafe event transactions. These approaches are designed to provide offline defense against injection attacks. State-of-the-art online defense mechanisms offer real-time defense, but their heavy reliance on inferring attack impacts on the IoT network limits their generalization capability. In this paper, we propose a platform-independent multi-agent online defense system, namely RESTRAIN, to counter remote injection attacks at runtime. RESTRAIN allows the defense agent to profile attack actions at runtime and leverages reinforcement learning to optimize a defense policy that complies with the security requirements of the IoT network. The experimental results show that the defense agent effectively takes real-time defense actions against complex and dynamic remote injection attacks and maximizes the security gain with minimal computational overhead.
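The paper's trained policy and environment are not reproduced here, but the core idea — an RL agent learning whether to allow or block incoming event transactions so as to maximize security gain — can be sketched with a toy Q-learning example. The event names, rewards, and hyperparameters below are illustrative assumptions, not RESTRAIN's actual model:

```python
import random

# Toy trigger-action environment (illustrative, not RESTRAIN's actual model).
# States: kinds of event transactions seen by the hub; actions: 0 = allow, 1 = block.
EVENTS = ["benign_motion", "benign_temp", "injected_unlock"]

def reward(event, action):
    # Security gain: blocking an injected event is rewarded;
    # blocking a benign one penalizes availability.
    if event == "injected_unlock":
        return 1.0 if action == 1 else -1.0
    return 1.0 if action == 0 else -0.5

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {e: [0.0, 0.0] for e in EVENTS}  # Q-table: event -> [allow, block]
    for _ in range(episodes):
        e = rng.choice(EVENTS)
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = max((0, 1), key=lambda x: q[e][x])
        # One-step (bandit-style) Q update toward the observed reward.
        q[e][a] += alpha * (reward(e, a) - q[e][a])
    return q

q = train()
policy = {e: ("block" if q[e][1] > q[e][0] else "allow") for e in EVENTS}
print(policy)
```

With the fixed seed, the learned policy blocks the injected event while allowing benign ones; RESTRAIN's actual agent operates over a far richer state space (runtime attack profiles) and security-policy constraints.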
Related papers
- DoomArena: A framework for Testing AI Agents Against Evolving Security Threats [84.94654617852322]
We present DoomArena, a security evaluation framework for AI agents.
It is a plug-in framework and integrates easily into realistic agentic frameworks.
It is modular and decouples the development of attacks from details of the environment in which the agent is deployed.
arXiv Detail & Related papers (2025-04-18T20:36:10Z) - Intelligent IoT Attack Detection Design via ODLLM with Feature Ranking-based Knowledge Base [0.964942474860411]
Internet of Things (IoT) devices have introduced significant cybersecurity challenges.
Traditional machine learning (ML) techniques often fall short in detecting such attacks due to the complexity of blended and evolving patterns.
We propose a novel framework leveraging On-Device Large Language Models (ODLLMs) augmented with fine-tuning and knowledge base (KB) integration for intelligent IoT network attack detection.
arXiv Detail & Related papers (2025-03-27T16:41:57Z) - Tit-for-Tat: Safeguarding Large Vision-Language Models Against Jailbreak Attacks via Adversarial Defense [90.71884758066042]
Large vision-language models (LVLMs) introduce a unique vulnerability: susceptibility to malicious attacks via visual inputs.
We propose ESIII (Embedding Security Instructions Into Images), a novel methodology for transforming the visual space from a source of vulnerability into an active defense mechanism.
arXiv Detail & Related papers (2025-03-14T17:39:45Z) - Enhancing Network Security Management in Water Systems using FM-based Attack Attribution [43.48086726793515]
We propose a novel model-agnostic Factorization Machines (FM)-based approach that capitalizes on water system sensor-actuator interactions to provide granular explanations and attributions for cyber attacks. In multi-feature cyber attack scenarios involving intricate sensor-actuator interactions, our FM-based attack attribution method effectively ranks attack root causes, achieving approximately 20% average improvement over SHAP and LEMNA.
arXiv Detail & Related papers (2025-03-03T06:52:00Z) - Smart Grid Security: A Verified Deep Reinforcement Learning Framework to Counter Cyber-Physical Attacks [2.159496955301211]
Smart grids are vulnerable to strategically crafted cyber-physical attacks.
Malicious attacks can manipulate power demands using high-wattage Internet of Things (IoT) botnet devices.
Grid operators overlook potential scenarios of cyber-physical attacks during their design phase.
We propose a safe Deep Reinforcement Learning (DRL)-based framework for mitigating attacks on smart grids.
arXiv Detail & Related papers (2024-09-24T05:26:20Z) - IoTWarden: A Deep Reinforcement Learning Based Real-time Defense System to Mitigate Trigger-action IoT Attacks [3.1449061818799615]
We build a reinforcement learning based real-time defense system for injection attacks.
Our experiments show that the proposed mechanism can effectively and accurately identify and defend against injection attacks with reasonable overhead.
arXiv Detail & Related papers (2024-01-16T06:25:56Z) - Classification of cyber attacks on IoT and ubiquitous computing devices [49.1574468325115]
This paper provides a classification of IoT malware.
Major targets and used exploits for attacks are identified and referred to the specific malware.
The majority of current IoT attacks continue to be of comparably low effort and level of sophistication and could be mitigated by existing technical measures.
arXiv Detail & Related papers (2023-12-01T16:10:43Z) - Attention-Based Real-Time Defenses for Physical Adversarial Attacks in
Vision Applications [58.06882713631082]
Deep neural networks exhibit excellent performance in computer vision tasks, but their vulnerability to real-world adversarial attacks raises serious security concerns.
This paper proposes an efficient attention-based defense mechanism that exploits adversarial channel-attention to quickly identify and track malicious objects in shallow network layers.
It also introduces an efficient multi-frame defense framework, validating its efficacy through extensive experiments aimed at evaluating both defense performance and computational cost.
arXiv Detail & Related papers (2023-11-19T00:47:17Z) - CyberForce: A Federated Reinforcement Learning Framework for Malware Mitigation [6.22761577977019]
CyberForce is a framework that combines Federated and Reinforcement Learning (FRL) to learn suitable MTD techniques for mitigating zero-day attacks.
Experiments show that CyberForce learns the MTD technique mitigating each attack faster than existing RL-based centralized approaches.
Different aggregation algorithms used during the agent learning process provide CyberForce with notable robustness to malicious attacks.
arXiv Detail & Related papers (2023-08-11T07:25:12Z) - HoneyIoT: Adaptive High-Interaction Honeypot for IoT Devices Through Reinforcement Learning [10.186372780116631]
We develop an adaptive high-interaction honeypot for IoT devices, called HoneyIoT.
We first build a real device based attack trace collection system to learn how attackers interact with IoT devices.
We then model the attack behavior through a Markov decision process and leverage reinforcement learning techniques to learn the best responses to engage attackers.
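The MDP-plus-RL formulation described above can be illustrated with a toy example. The states, transition probabilities, and rewards below are hypothetical, and value iteration stands in for the paper's RL procedure as a simpler way to compute engagement-maximizing responses:

```python
# Illustrative honeypot-response MDP (hypothetical states/rewards, not HoneyIoT's
# actual model). States: attacker engagement stages; actions: candidate responses.
STATES = ["probe", "exploit", "gone"]
ACTIONS = ["mimic_device", "reject"]

# P[(s, a)] = list of (next_state, probability); R[(s, a)] = engagement reward.
P = {
    ("probe", "mimic_device"): [("exploit", 0.8), ("gone", 0.2)],
    ("probe", "reject"): [("gone", 1.0)],
    ("exploit", "mimic_device"): [("exploit", 0.6), ("gone", 0.4)],
    ("exploit", "reject"): [("gone", 1.0)],
}
R = {("probe", "mimic_device"): 1.0, ("probe", "reject"): 0.0,
     ("exploit", "mimic_device"): 2.0, ("exploit", "reject"): 0.0}

def value_iteration(gamma=0.9, iters=100):
    # Iteratively apply the Bellman optimality update; "gone" is terminal (V = 0).
    v = {s: 0.0 for s in STATES}
    for _ in range(iters):
        for s in ("probe", "exploit"):
            v[s] = max(R[(s, a)] + gamma * sum(p * v[ns] for ns, p in P[(s, a)])
                       for a in ACTIONS)
    return v

v = value_iteration()
best = {s: max(ACTIONS, key=lambda a: R[(s, a)] + 0.9 * sum(p * v[ns] for ns, p in P[(s, a)]))
        for s in ("probe", "exploit")}
print(best)
```

Here the optimal policy keeps mimicking the device at every stage to prolong attacker engagement; HoneyIoT instead learns such responses online from real attack traces.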
arXiv Detail & Related papers (2023-05-10T19:43:20Z) - ARGUS: Context-Based Detection of Stealthy IoT Infiltration Attacks [18.819756176569033]
IoT devices control functions in smart homes and buildings, smart cities, and smart factories.
Existing approaches for detecting attacks are mostly limited to attacks directly compromising individual IoT devices.
We propose ARGUS, the first self-learning intrusion detection system for detecting contextual attacks on IoT environments.
arXiv Detail & Related papers (2023-02-15T11:05:45Z) - Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning [48.49658986576776]
Deep Reinforcement Learning (DRL) has numerous applications in the real world thanks to its outstanding ability in adapting to the surrounding environments.
Despite its great advantages, DRL is susceptible to adversarial attacks, which precludes its use in real-life critical systems and applications.
This paper presents emerging attacks in DRL-based systems and the potential countermeasures to defend against these attacks.
arXiv Detail & Related papers (2020-01-27T10:53:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.