Execution-time opacity control for timed automata
- URL: http://arxiv.org/abs/2409.10336v1
- Date: Mon, 16 Sep 2024 14:46:52 GMT
- Title: Execution-time opacity control for timed automata
- Authors: Étienne André, Marie Duflot, Laetitia Laversa, Engel Lefaucheux
- Abstract summary: Timing leaks in timed automata can occur whenever an attacker is able to deduce a secret by observing some timed behavior.
In execution-time opacity, the attacker aims at deducing whether a private location was visited, by observing only the execution time.
We show that we are able to decide whether a TA can be controlled at runtime to ensure opacity.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Timing leaks in timed automata (TA) can occur whenever an attacker is able to deduce a secret by observing some timed behavior. In execution-time opacity, the attacker aims at deducing whether a private location was visited, by observing only the execution time. It can be decided whether a TA is opaque in this setting. In this work, we tackle control, and show that we are able to decide whether a TA can be controlled at runtime to ensure opacity. Our method is constructive, in the sense that we can exhibit such a controller. We also address the case when the attacker cannot have infinite precision in its observations.
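The following minimal sketch (Python, purely illustrative and not taken from the paper; the function name is_et_opaque and the toy duration sets are made up) shows the kind of check execution-time opacity boils down to in a simplified, finite setting: the secret stays hidden when every execution time achievable by visiting the private location is also achievable without visiting it.

```python
# Illustrative sketch only: execution-time opacity as inclusion of duration sets.
# In the actual setting, durations form (unions of) intervals computed from a
# timed automaton; here we use small finite sets of sample durations instead.

def is_et_opaque(private_durations, public_durations):
    """A system is execution-time opaque (in this toy model) if every
    execution time of a run visiting the private location can also be
    produced by a run that avoids it."""
    return set(private_durations) <= set(public_durations)

# Example: durations 3 and 5 reachable via the private location,
# durations 2..5 reachable without visiting it.
print(is_et_opaque({3, 5}, {2, 3, 4, 5}))  # True: the observer learns nothing
print(is_et_opaque({7}, {2, 3, 4, 5}))     # False: duration 7 betrays the visit
```

Roughly speaking, in the paper's setting the duration sets are derived from the timed automaton rather than listed by hand, and the control question is whether a controller can restrict the system's behavior at runtime so that this kind of inclusion holds.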
Related papers
- The Bright Side of Timed Opacity [0.0]
We show that opacity can mostly be retrieved, except for one-action TAs and for one-clock TAs with $\epsilon$-transitions.
We then exhibit a new decidable subclass in which the number of observations made by the attacker is limited.
arXiv Detail & Related papers (2024-08-22T09:17:59Z)
- Expiring opacity problems in parametric timed automata [0.0]
We study expiring timed opacity problems in timed automata.
We consider the set of time bounds for which a system is opaque and show when they can be effectively computed for timed automata.
arXiv Detail & Related papers (2024-03-12T13:30:53Z)
- Semi-supervised Open-World Object Detection [74.95267079505145]
We introduce a more realistic formulation, named semi-supervised open-world detection (SS-OWOD).
We demonstrate that the performance of the state-of-the-art OWOD detector dramatically deteriorates in the proposed SS-OWOD setting.
Our experiments on 4 datasets including MS COCO, PASCAL, Objects365 and DOTA demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-02-25T07:12:51Z)
- Configuring Timing Parameters to Ensure Execution-Time Opacity in Timed Automata [2.2003515924552044]
Timed automata are an extension of finite-state automata with a set of clocks evolving linearly.
We use timed automata as the input formalism, in which we assume that the attacker has access only to the system execution time.
arXiv Detail & Related papers (2023-10-31T12:10:35Z)
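As a companion to the entry above, which uses timed automata (finite-state automata extended with clocks) and an attacker who sees only the execution time, here is a hand-rolled sketch, assuming a simplified run representation rather than a real TA library, of the observable (total duration) versus the secret (whether a given location was visited). The Step class and location names are invented for illustration.

```python
# Minimal, hand-rolled sketch of a timed run (not a real TA library):
# a run alternates delays (clocks evolve linearly) and discrete transitions.
# The attacker in this setting observes only the total execution time.

from dataclasses import dataclass

@dataclass
class Step:
    delay: float      # time elapsed before taking the transition
    target: str       # location reached by the discrete transition

def execution_time(run):
    """Total duration of a run: the only observable in this threat model."""
    return sum(step.delay for step in run)

def visits(run, location):
    """The secret: whether the run passed through a given (private) location."""
    return any(step.target == location for step in run)

run = [Step(1.5, "work"), Step(0.7, "priv"), Step(2.8, "final")]
print(execution_time(run))   # 5.0 -- what the attacker sees
print(visits(run, "priv"))   # True -- what the attacker tries to infer
```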
- The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference.
arXiv Detail & Related papers (2023-09-05T11:53:17Z)
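The side channel described in the entry above relies on post-processing time varying with the model's intermediate results. The sketch below is not the paper's attack; it only illustrates the underlying effect with an invented, naively quadratic suppression loop whose wall-clock time grows with the number of candidate detections.

```python
# Toy illustration of a timing side channel in detection post-processing.
# A naive O(n^2) suppression loop runs longer when more candidates survive
# the score threshold, so wall-clock time leaks information about the scene
# even if the final outputs are hidden.

import time, random

def naive_suppression(scores, iou_threshold=0.5):
    kept = []
    for s in scores:
        # stand-in for pairwise overlap tests against already-kept boxes
        if all(abs(s - k) > 1e-6 * iou_threshold for k in kept):
            kept.append(s)
    return kept

for n_candidates in (10, 1000, 10000):
    scores = [random.random() for _ in range(n_candidates)]
    start = time.perf_counter()
    naive_suppression(scores)
    elapsed = time.perf_counter() - start
    print(f"{n_candidates:6d} candidates -> {elapsed * 1e3:.2f} ms")
```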
- Actor-Critic with variable time discretization via sustained actions [0.0]
SusACER is an off-policy reinforcement learning algorithm that combines the advantages of different time discretization settings.
We analyze the effects of the changing time discretization in robotic control environments: Ant, HalfCheetah, Hopper, and Walker2D.
arXiv Detail & Related papers (2023-08-08T14:45:00Z)
- Safety Margins for Reinforcement Learning [53.10194953873209]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z)
- When to Ask for Help: Proactive Interventions in Autonomous Reinforcement Learning [57.53138994155612]
A long-term goal of reinforcement learning is to design agents that can autonomously interact and learn in the world.
A critical challenge is the presence of irreversible states which require external assistance to recover from, such as when a robot arm has pushed an object off of a table.
We propose an algorithm that efficiently learns to detect and avoid states that are irreversible, and proactively asks for help in case the agent does enter them.
arXiv Detail & Related papers (2022-10-19T17:57:24Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
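The uncertainty measure in the entry above is described only as being built from disagreeing predictions under a dissimilarity function. As a rough sketch, assuming an unspecified source of multiple predictions per input (an ensemble or repeated stochastic forward passes) and using an L1 distance purely as a placeholder for the paper's dissimilarity function, disagreement can be scored as the mean pairwise dissimilarity:

```python
# Sketch of a disagreement-based uncertainty score: average pairwise
# dissimilarity between several predictions for the same input. The specific
# dissimilarity used in the paper above is not reproduced here.

from itertools import combinations

def l1_dissimilarity(p, q):
    # placeholder dissimilarity between two class-probability vectors
    return sum(abs(a - b) for a, b in zip(p, q)) / len(p)

def disagreement_uncertainty(predictions, dissimilarity=l1_dissimilarity):
    pairs = list(combinations(predictions, 2))
    return sum(dissimilarity(p, q) for p, q in pairs) / len(pairs)

# Three softmax outputs for the same pixel/region: the third one disagrees,
# so the score rises above what two nearly identical predictions would give.
preds = [(0.7, 0.2, 0.1), (0.6, 0.3, 0.1), (0.2, 0.7, 0.1)]
print(round(disagreement_uncertainty(preds), 3))
```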
- Learning-based attacks in Cyber-Physical Systems: Exploration, Detection, and Control Cost trade-offs [9.453554184019108]
We study the problem of learning-based attacks in linear systems, where the communication channel between the controller and the plant can be hijacked by a malicious attacker.
We assume the attacker learns the dynamics of the system from observations, then overrides the controller's actuation signal.
We show that this bound is also order optimal, in the sense that if the attacker satisfies it, then there exists a learning algorithm that achieves an expected deception time of the given order.
arXiv Detail & Related papers (2020-11-21T04:08:16Z)
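The entry above states that the attacker first learns the plant dynamics from observations before overriding the actuation signal. A minimal sketch of that identification step, assuming a linear plant and least-squares estimation (the deception strategy and the deception-time bound from the paper are not reproduced), could look as follows:

```python
# Sketch of the "learning" half of a learning-based attack on a linear plant:
# estimate x_{k+1} = A x_k + B u_k from eavesdropped (state, input) pairs by
# least squares. Only the dynamics identification is shown.

import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])

# Collect a trajectory under some observed control inputs.
xs, us = [np.array([1.0, -1.0])], []
for _ in range(200):
    u = rng.normal(size=1)
    x_next = A_true @ xs[-1] + B_true @ u + 0.01 * rng.normal(size=2)
    us.append(u)
    xs.append(x_next)

# Stack regressors [x_k, u_k] and solve for [A B] by least squares.
Z = np.hstack([np.array(xs[:-1]), np.array(us)])      # shape (200, 3)
X_next = np.array(xs[1:])                              # shape (200, 2)
AB_hat = np.linalg.lstsq(Z, X_next, rcond=None)[0].T   # shape (2, 3)
print("estimated A:\n", AB_hat[:, :2])
print("estimated B:\n", AB_hat[:, 2:])
```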
- Is Private Learning Possible with Instance Encoding? [68.84324434746765]
We study whether a non-private learning algorithm can be made private by relying on an instance-encoding mechanism.
We formalize both the notion of instance encoding and its privacy by providing two attack models.
arXiv Detail & Related papers (2020-11-10T18:55:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.