Enhancing Attack Resilience in Real-Time Systems through Variable Control Task Sampling Rates
- URL: http://arxiv.org/abs/2408.00341v2
- Date: Thu, 14 Nov 2024 11:30:49 GMT
- Title: Enhancing Attack Resilience in Real-Time Systems through Variable Control Task Sampling Rates
- Authors: Arkaprava Sain, Sunandan Adhikary, Ipsita Koley, Soumyajit Dey
- Abstract summary: We propose a novel schedule vulnerability analysis methodology, enabling runtime switching between valid schedules for various control task sampling rates.
We present the Multi-Rate Attack-Aware Randomized Scheduling (MAARS) framework for fixed-priority schedulers, designed to reduce the success rate of timing inference attacks on real-time systems.
- Score: 2.238622204691961
- Abstract: Cyber-physical systems (CPSs) in modern real-time applications integrate numerous control units linked through communication networks, each responsible for executing a mix of real-time safety-critical and non-critical tasks. To ensure predictable timing behaviour, most safety-critical tasks are scheduled with fixed sampling periods, which supports rigorous safety and performance analyses. However, this deterministic execution can be exploited by attackers to launch inference-based attacks on safety-critical tasks. This paper addresses the challenge of preventing such timing inference or schedule-based attacks by dynamically adjusting the execution rates of safety-critical tasks while maintaining their performance. We propose a novel schedule vulnerability analysis methodology, enabling runtime switching between valid schedules for various control task sampling rates. Leveraging this approach, we present the Multi-Rate Attack-Aware Randomized Scheduling (MAARS) framework for preemptive fixed-priority schedulers, designed to reduce the success rate of timing inference attacks on real-time systems. To our knowledge, this is the first method that combines attack-aware schedule randomization with preserved control and scheduling integrity. The framework's efficacy in attack prevention is evaluated on automotive benchmarks using a Hardware-in-the-Loop (HiL) setup.
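The paper's own implementation is not reproduced here; the following is a minimal sketch of the core idea under assumed names, with a placeholder schedule table standing in for the output of the proposed schedule-vulnerability analysis: at runtime, the scheduler randomly switches among pre-verified schedules that correspond to different admissible sampling periods of a safety-critical control task, so an observer cannot lock onto a fixed period.

```python
import random

# Hypothetical schedule table: each entry is a valid, pre-verified schedule for one
# admissible sampling period (in ms) of the safety-critical control task. In MAARS
# these would come from the schedule-vulnerability analysis; here they are placeholders.
VALID_SCHEDULES = {
    10: ["ctrl", "idle", "net", "ctrl", "idle", "log"],   # 10 ms sampling variant
    20: ["ctrl", "net", "idle", "idle", "log", "idle"],   # 20 ms sampling variant
    40: ["ctrl", "idle", "idle", "net", "idle", "log"],   # 40 ms sampling variant
}

def pick_schedule(rng: random.Random) -> tuple[int, list[str]]:
    """Randomly switch to one of the pre-verified multi-rate schedules.

    Randomizing over rates that all satisfy the control-performance requirement
    is what breaks the periodicity a timing-inference attacker relies on.
    """
    period = rng.choice(list(VALID_SCHEDULES))
    return period, VALID_SCHEDULES[period]

if __name__ == "__main__":
    rng = random.Random()          # a deployed system would want a CSPRNG here
    for hyperperiod in range(5):   # re-randomize at every hyperperiod boundary
        period, slots = pick_schedule(rng)
        print(f"hyperperiod {hyperperiod}: sampling period {period} ms, slots {slots}")
```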
Related papers
- Constant-time Motion Planning with Anytime Refinement for Manipulation [17.543746580669662]
We propose an anytime refinement approach that works in combination with constant-time motion planning (CTMP) algorithms.
Operating as a constant-time algorithm, our proposed framework rapidly generates an initial solution within a user-defined time threshold; functioning as an anytime algorithm, it then iteratively refines the solution's quality within the allocated time budget.
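As a rough illustration of this constant-time-then-anytime pattern (not the authors' CTMP implementation; `initial_solution` and `refine` are hypothetical placeholders):

```python
import time

def initial_solution(query):
    """Placeholder for the constant-time planner: returns a feasible but coarse plan."""
    return [query["start"], query["goal"]]

def refine(plan):
    """Placeholder refinement step; a real refiner would shortcut or smooth the path."""
    return plan

def plan_anytime(query, time_budget_s: float):
    """Return a plan quickly, then keep improving it until the budget expires."""
    deadline = time.monotonic() + time_budget_s
    plan = initial_solution(query)         # available within the fixed time threshold
    while time.monotonic() < deadline:     # anytime phase: refine while time remains
        plan = refine(plan)
    return plan

print(plan_anytime({"start": (0, 0), "goal": (1, 1)}, time_budget_s=0.01))
```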
arXiv Detail & Related papers (2023-11-01T20:40:10Z)
- The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference.
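A toy illustration of the side channel, not the paper's attack code: with a pure-Python greedy NMS, post-processing time grows with the number of candidate boxes, which is exactly the kind of input-dependent timing an adversary can measure.

```python
import random
import time

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression; runtime depends on the surviving candidates."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thr]
    return keep

def random_boxes(n):
    out = []
    for _ in range(n):
        x, y = random.uniform(0, 600), random.uniform(0, 600)
        out.append((x, y, x + random.uniform(5, 40), y + random.uniform(5, 40)))
    return out

for n in (50, 200, 800):                        # more candidate detections ...
    boxes, scores = random_boxes(n), [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    nms(boxes, scores)
    print(f"{n:4d} boxes -> NMS took {time.perf_counter() - t0:.4f} s")  # ... more time
```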
arXiv Detail & Related papers (2023-09-05T11:53:17Z)
- Guaranteed Dynamic Scheduling of Ultra-Reliable Low-Latency Traffic via Conformal Prediction [72.59079526765487]
The dynamic scheduling of ultra-reliable and low-latency traffic (URLLC) in the uplink can significantly enhance the efficiency of coexisting services.
The main challenge is posed by the uncertainty in the process of URLLC packet generation.
We introduce a novel scheduler for URLLC packets that provides formal guarantees on reliability and latency irrespective of the quality of the URLLC traffic predictor.
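A minimal sketch of the split-conformal idea behind such a scheduler (variable names are illustrative and this is not the paper's algorithm): pad the traffic predictor's output by an empirical quantile of its past errors, so the allocation covers the true URLLC demand with probability at least 1 - alpha regardless of the predictor's quality.

```python
import math

def conformal_margin(calibration_errors, alpha=0.1):
    """Return the correction to add to the predictor output.

    calibration_errors are past values of (true_packets - predicted_packets).
    The (1 - alpha) empirical quantile with the standard +1 conformal adjustment
    gives coverage >= 1 - alpha under exchangeability, whatever the predictor is.
    """
    errs = sorted(calibration_errors)
    n = len(errs)
    k = math.ceil((n + 1) * (1 - alpha))       # conformal rank
    k = min(k, n)                              # clamp when alpha is very small
    return errs[k - 1]

# Toy usage: past prediction errors observed on a calibration window.
past_errors = [0, 1, -1, 2, 0, 3, 1, 0, -2, 1, 4, 0, 2, 1, 0, 1, -1, 2, 0, 1]
margin = conformal_margin(past_errors, alpha=0.1)
predicted_packets = 5
allocated_slots = predicted_packets + max(margin, 0)  # never allocate below the prediction
print(f"margin = {margin} packets, allocate {allocated_slots} uplink slots")
```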
arXiv Detail & Related papers (2023-02-15T14:09:55Z)
- Active Uncertainty Reduction for Safe and Efficient Interaction Planning: A Shielding-Aware Dual Control Approach [9.07774184840379]
We present a novel algorithmic approach to enable active uncertainty reduction for interactive motion planning based on the implicit dual control paradigm.
Our approach relies on sampling-based approximation of dynamic programming, leading to a model predictive control problem that can be readily solved by real-time gradient-based optimization methods.
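For intuition only, here is a generic random-shooting stand-in for a sampling-based receding-horizon controller; the paper's implicit dual control formulation and its gradient-based solver are considerably more involved, and the dynamics and costs below are made up.

```python
import random

def simulate(x, u_seq):
    """Toy scalar dynamics x_{k+1} = x_k + u_k with a quadratic state/effort cost."""
    cost = 0.0
    for u in u_seq:
        x = x + u
        cost += x * x + 0.1 * u * u
    return cost

def sampled_mpc(x0, rng, horizon=5, n_samples=200):
    """Score randomly sampled control sequences over the horizon and apply only the
    first input of the cheapest one (receding-horizon principle)."""
    best_u0, best_cost = 0.0, float("inf")
    for _ in range(n_samples):
        u_seq = [rng.uniform(-1.0, 1.0) for _ in range(horizon)]
        cost = simulate(x0, u_seq)
        if cost < best_cost:
            best_u0, best_cost = u_seq[0], cost
    return best_u0

rng = random.Random(0)
x = 2.0
for _ in range(10):            # closed-loop receding-horizon execution
    x = x + sampled_mpc(x, rng)
print(f"state after 10 MPC steps: {x:.3f}")
```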
arXiv Detail & Related papers (2023-02-01T01:34:48Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
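A compact, generic example of a CBF-based safety filter (a scalar system with made-up dynamics, not the paper's model-uncertainty-aware reformulation): the nominal input is minimally modified so that the safe set h(x) >= 0 stays forward invariant.

```python
def cbf_safety_filter(x, u_nom, x_max=1.0, alpha=2.0):
    """Minimally modify u_nom so that h(x) = x_max - x stays nonnegative.

    For the scalar system x_dot = u, the CBF condition h_dot + alpha * h >= 0
    reduces to u <= alpha * (x_max - x); the min-norm (QP) solution is a clamp.
    """
    u_bound = alpha * (x_max - x)
    return min(u_nom, u_bound)

# Toy closed-loop simulation: an aggressive nominal controller pushes toward x = 2,
# while the filter keeps the state inside the safe set x <= 1.
x, dt = 0.0, 0.01
for _ in range(500):
    u_nom = 3.0 * (2.0 - x)            # nominal controller ignores the safety limit
    u = cbf_safety_filter(x, u_nom)
    x += u * dt
print(f"final state x = {x:.3f} (safe set is x <= 1.0)")
```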
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Safe RAN control: A Symbolic Reinforcement Learning Approach [62.997667081978825]
We present a Symbolic Reinforcement Learning (SRL) based architecture for safety control of Radio Access Network (RAN) applications.
We provide a purely automated procedure in which a user can specify high-level logical safety specifications for a given cellular network topology.
We introduce a user interface (UI) developed to help a user set intent specifications for the system and inspect the differences in agent-proposed actions.
arXiv Detail & Related papers (2021-06-03T16:45:40Z)
- Better than the Best: Gradient-based Improper Reinforcement Learning for Network Scheduling [60.48359567964899]
We consider the problem of scheduling in constrained queueing networks with a view to minimizing packet delay.
We use a policy-gradient-based reinforcement learning algorithm to produce a scheduler that performs better than the available atomic policies.
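A toy sketch of this "improve over atomic policies" idea (two queues, two hand-coded atomic schedulers, REINFORCE over softmax mixture weights; the dynamics, rates, and hyperparameters are illustrative and this is not the paper's algorithm):

```python
import math
import random

def softmax(theta):
    m = max(theta)
    exps = [math.exp(t - m) for t in theta]
    return [e / sum(exps) for e in exps]

# Two hand-coded "atomic" schedulers for a toy two-queue system:
# policy 0 always serves queue 0, policy 1 serves the currently longer queue.
ATOMIC = [lambda q: 0, lambda q: 0 if q[0] >= q[1] else 1]

def episode(policy_idx, rng, steps=200):
    """Run one episode with a fixed atomic policy; return the average of -(backlog)."""
    q, total = [0, 0], 0.0
    for _ in range(steps):
        if rng.random() < 0.4:
            q[0] += 1                      # arrivals to queue 0
        if rng.random() < 0.5:
            q[1] += 1                      # arrivals to queue 1
        served = ATOMIC[policy_idx](q)
        q[served] = max(0, q[served] - 1)  # serve one packet from the chosen queue
        total -= sum(q)                    # delay penalty: total backlog
    return total / steps

theta, baseline = [0.0, 0.0], 0.0
rng = random.Random(1)
for _ in range(300):                       # REINFORCE over the mixture weights
    p = softmax(theta)
    a = 0 if rng.random() < p[0] else 1    # sample an atomic policy from the mixture
    ret = episode(a, rng)
    baseline = 0.9 * baseline + 0.1 * ret  # running-average baseline
    for i in range(2):                     # grad of log pi(a): 1{i == a} - p[i]
        theta[i] += 0.05 * (ret - baseline) * ((1.0 if i == a else 0.0) - p[i])
print("learned mixture over atomic policies:", [round(x, 3) for x in softmax(theta)])
```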
arXiv Detail & Related papers (2021-05-01T10:18:34Z)
- An RL-Based Adaptive Detection Strategy to Secure Cyber-Physical Systems [0.0]
Increased dependence on software-based control has escalated the vulnerabilities of cyber-physical systems.
We propose a Reinforcement Learning (RL) based framework which adaptively sets the parameters of such detectors based on experience learned from attack scenarios.
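A minimal sketch of learning detector parameters from experience (here a bandit-style value estimate over a few candidate thresholds with a made-up residual model; not the paper's RL framework):

```python
import random

# Candidate detector thresholds: the "parameters" to be tuned at runtime.
THRESHOLDS = [1.0, 2.0, 3.0, 4.0]

def run_window(threshold, under_attack, rng):
    """Simulate one monitoring window and return a reward (illustrative model).

    Residuals are larger under attack; an alarm fires when the residual exceeds
    the threshold. Missed attacks and false alarms are both penalized.
    """
    residual = rng.gauss(2.5 if under_attack else 0.0, 1.0)
    alarm = abs(residual) > threshold
    if under_attack:
        return 1.0 if alarm else -2.0      # reward detection, punish misses
    return -1.0 if alarm else 0.1          # punish false alarms

q = {t: 0.0 for t in THRESHOLDS}           # action-value estimate per threshold
rng, eps, lr = random.Random(0), 0.1, 0.05
for _ in range(5000):
    t = rng.choice(THRESHOLDS) if rng.random() < eps else max(q, key=q.get)
    r = run_window(t, under_attack=(rng.random() < 0.2), rng=rng)
    q[t] += lr * (r - q[t])                # incremental bandit-style update
print("learned threshold preference:", max(q, key=q.get), q)
```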
arXiv Detail & Related papers (2021-03-04T07:38:50Z)
- Online Reinforcement Learning Control by Direct Heuristic Dynamic Programming: from Time-Driven to Event-Driven [80.94390916562179]
Time-driven learning refers to the machine learning method that updates parameters in a prediction model continuously as new data arrives.
It is desirable to prevent the time-driven dHDP from updating due to insignificant system events such as noise.
We show how the event-driven dHDP algorithm works in comparison to the original time-driven dHDP.
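The time-driven versus event-driven distinction can be sketched as a gating rule on parameter updates (a generic gradient step, not the dHDP equations; the threshold and error model are illustrative):

```python
import random

def event_driven_update(weights, grad, error, threshold=0.1, lr=0.01):
    """Apply a gradient step only when the observed error is significant; otherwise
    skip the update so noise-level events leave the parameters untouched."""
    if abs(error) <= threshold:                  # insignificant event (e.g. noise)
        return weights, False
    return [w - lr * g for w, g in zip(weights, grad)], True

rng = random.Random(0)
w, updates = [0.5, -0.2], 0
for _ in range(1000):
    error = rng.gauss(0.0, 0.05)                 # mostly noise-level errors
    if rng.random() < 0.05:
        error += rng.choice([-1.0, 1.0]) * 0.5   # occasional significant system event
    grad = [error * x for x in (1.0, 2.0)]       # illustrative gradient of a loss
    w, updated = event_driven_update(w, grad, error)
    updates += updated
print(f"parameters updated on {updates} of 1000 samples (time-driven would update on all)")
```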
arXiv Detail & Related papers (2020-06-16T05:51:25Z)
- Probabilistic Guarantees for Safe Deep Reinforcement Learning [6.85316573653194]
Deep reinforcement learning has been successfully applied to many control tasks, but the application of such agents in safety-critical scenarios has been limited due to safety concerns.
We propose MOSAIC, an algorithm for measuring the safety of deep reinforcement learning agents in such settings.
arXiv Detail & Related papers (2020-05-14T15:42:19Z)
- Trajectory Optimization for Nonlinear Multi-Agent Systems using Decentralized Learning Model Predictive Control [5.2647625557619815]
We present a decentralized minimum-time trajectory optimization scheme based on learning model predictive control for multi-agent systems with nonlinear decoupled dynamics and coupled state constraints.
Our framework results in a decentralized controller, which requires no communication between agents over each iteration of task execution, and guarantees persistent feasibility, finite-time closed-loop convergence, and non-decreasing performance of the global system over task iterations.
arXiv Detail & Related papers (2020-04-02T23:04:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.