Execution-time opacity control for timed automata
- URL: http://arxiv.org/abs/2409.10336v1
- Date: Mon, 16 Sep 2024 14:46:52 GMT
- Title: Execution-time opacity control for timed automata
- Authors: Étienne André, Marie Duflot, Laetitia Laversa, Engel Lefaucheux
- Abstract summary: Timing leaks in timed automata can occur whenever an attacker is able to deduce a secret by observing some timed behavior.
In execution-time opacity, the attacker aims at deducing whether a private location was visited, by observing only the execution time.
We show that we are able to decide whether a TA can be controlled at runtime to ensure opacity.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Timing leaks in timed automata (TA) can occur whenever an attacker is able to deduce a secret by observing some timed behavior. In execution-time opacity, the attacker aims at deducing whether a private location was visited, by observing only the execution time. It can be decided whether a TA is opaque in this setting. In this work, we tackle control, and show that we are able to decide whether a TA can be controlled at runtime to ensure opacity. Our method is constructive, in the sense that we can exhibit such a controller. We also address the case when the attacker cannot have an infinite precision in its observations.
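As a rough sketch of the property at stake (notation assumed here, not taken from the abstract): write $\mathit{DV}(\mathcal{A})$ for the set of total execution times of runs of the TA $\mathcal{A}$ that visit the private location, and $\mathit{DNV}(\mathcal{A})$ for the execution times of runs that do not. Full execution-time opacity then asks that the two sets coincide, so that the observed duration alone reveals nothing about the visit; the control question is whether the controllable actions can be restricted at runtime so that the controlled system satisfies this equality.
```latex
% Sketch only; the paper's exact definitions and notation may differ.
\[
  \mathcal{A} \text{ is (fully) execution-time opaque}
  \quad\iff\quad
  \mathit{DV}(\mathcal{A}) \;=\; \mathit{DNV}(\mathcal{A}).
\]
```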
Related papers
- Patrol Security Game: Defending Against Adversary with Freedom in Attack Timing, Location, and Duration [4.765278970103286]
Patrol Security Game (PSG) is a robotic patrolling problem modeled as an extensive-form deterministic Stackelberg game.
Our objective is to devise a synthetic schedule that minimizes the attacker's time horizon.
arXiv Detail & Related papers (2024-10-21T02:53:18Z)
- The Bright Side of Timed Opacity [0.0]
We show that opacity can mostly be retrieved, except for one-action TAs and for one-clock TAs with $\epsilon$-transitions.
We then exhibit a new decidable subclass in which the number of observations made by the attacker is limited.
arXiv Detail & Related papers (2024-08-22T09:17:59Z)
- Expiring opacity problems in parametric timed automata [0.0]
We study expiring timed opacity problems in timed automata.
We consider the set of time bounds for which a system is opaque and show when they can be effectively computed for timed automata.
arXiv Detail & Related papers (2024-03-12T13:30:53Z)
- Semi-supervised Open-World Object Detection [74.95267079505145]
We introduce a more realistic formulation, named semi-supervised open-world detection (SS-OWOD).
We demonstrate that the performance of the state-of-the-art OWOD detector dramatically deteriorates in the proposed SS-OWOD setting.
Our experiments on 4 datasets including MS COCO, PASCAL, Objects365 and DOTA demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-02-25T07:12:51Z)
- Configuring Timing Parameters to Ensure Execution-Time Opacity in Timed Automata [2.2003515924552044]
Timed automata are an extension of finite-state automata with a set of clocks evolving linearly.
We use timed automata as the input formalism, in which we assume that the attacker has access only to the system execution time.
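For readers unfamiliar with the formalism, here is a minimal illustration of a timed automaton as a data structure (the class names and the toy example are hypothetical, not the paper's input format):
```python
# Minimal sketch of a timed automaton with a single clock, for illustration only.
# Names (Edge, TimedAutomaton) are assumptions, not the paper's formalism.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Edge:
    source: str
    target: str
    guard: tuple[float, float]   # clock value must lie in [lo, hi] to fire
    reset: bool = False          # whether the clock is reset to 0 on firing

@dataclass
class TimedAutomaton:
    locations: set[str]
    initial: str
    final: str
    private: set[str]            # locations whose visit should stay secret
    edges: list[Edge] = field(default_factory=list)

# A toy TA: from l0 a run may pass through the private location lpriv
# (guard x in [1, 2], clock reset) or go directly to the final location.
ta = TimedAutomaton(
    locations={"l0", "lpriv", "lf"},
    initial="l0",
    final="lf",
    private={"lpriv"},
    edges=[
        Edge("l0", "lpriv", guard=(1.0, 2.0), reset=True),
        Edge("lpriv", "lf", guard=(0.0, 1.0)),
        Edge("l0", "lf", guard=(1.0, 3.0)),
    ],
)
# Every total duration in [1, 3] is achievable both via lpriv and directly,
# so observing only the execution time does not reveal whether lpriv was visited.
```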
arXiv Detail & Related papers (2023-10-31T12:10:35Z)
- The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference.
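A toy illustration of the kind of leakage involved: the running time of a naive NMS post-processing loop grows with the number of candidate boxes, so timing it reveals information about the scene (illustrative code only, not the attack from the paper):
```python
# Illustrative only: timing a naive non-maximum suppression (NMS) loop to show
# that post-processing time depends on the number of candidate detections.
import time
import random

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep

def random_boxes(n):
    out = []
    for _ in range(n):
        x1, y1 = random.uniform(0, 400), random.uniform(0, 400)
        out.append((x1, y1, x1 + random.uniform(5, 80), y1 + random.uniform(5, 80)))
    return out

for n in (10, 100, 1000):
    boxes, scores = random_boxes(n), [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    nms(boxes, scores)
    # The measured time correlates with n: an observer who can time this step
    # learns something about how many candidate objects were detected.
    print(f"{n:5d} candidate boxes -> {time.perf_counter() - t0:.6f} s")
```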
arXiv Detail & Related papers (2023-09-05T11:53:17Z)
- Actor-Critic with variable time discretization via sustained actions [0.0]
SusACER is an off-policy reinforcement learning algorithm that combines the advantages of different time discretization settings.
We analyze the effects of the changing time discretization in robotic control environments: Ant, HalfCheetah, Hopper, and Walker2D.
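The idea of sustained actions can be illustrated with a simple environment wrapper that repeats each chosen action for several base time steps (a generic sketch under assumed interfaces; SusACER's actual algorithm is not reproduced here):
```python
# Generic sketch of "sustained actions": hold a chosen action for `repeat` base
# steps, effectively coarsening the time discretization. Not SusACER itself.
class SustainedActionWrapper:
    def __init__(self, env, repeat):
        self.env = env          # any env with step(action) -> (obs, reward, done, info)
        self.repeat = repeat    # how many base time steps each decision is held for

    def step(self, action):
        total_reward, done, obs, info = 0.0, False, None, {}
        for _ in range(self.repeat):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info

    def reset(self):
        return self.env.reset()

# Usage (assuming a Gym-style environment `env`):
#   coarse_env = SustainedActionWrapper(env, repeat=4)
# A larger `repeat` eases exploration early in training; a smaller one allows
# finer control later, which is the trade-off such algorithms balance.
```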
arXiv Detail & Related papers (2023-08-08T14:45:00Z)
- Safety Margins for Reinforcement Learning [53.10194953873209]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z)
- When to Ask for Help: Proactive Interventions in Autonomous Reinforcement Learning [57.53138994155612]
A long-term goal of reinforcement learning is to design agents that can autonomously interact and learn in the world.
A critical challenge is the presence of irreversible states which require external assistance to recover from, such as when a robot arm has pushed an object off of a table.
We propose an algorithm that efficiently learns to detect and avoid states that are irreversible, and proactively asks for help in case the agent does enter them.
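A minimal sketch of the decision rule this describes, assuming some learned estimator of whether a state is recoverable (the function names and threshold are hypothetical, not the paper's algorithm):
```python
# Hypothetical sketch: gate autonomous actions on an estimated probability that
# the current state is reversible; otherwise request external intervention.
def act_or_ask_for_help(state, policy, reversibility_estimator, threshold=0.1):
    """Return (action, needs_help). `reversibility_estimator(state)` is assumed
    to output the estimated probability that the agent can recover on its own."""
    p_reversible = reversibility_estimator(state)
    if p_reversible < threshold:
        # The state looks irreversible (e.g. an object pushed off the table):
        # stop acting autonomously and proactively ask for assistance.
        return None, True
    return policy(state), False
```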
arXiv Detail & Related papers (2022-10-19T17:57:24Z)
- On Almost-Sure Intention Deception Planning that Exploits Imperfect Observers [24.11353445650682]
Intention deception involves computing a strategy which deceives the opponent into a wrong belief about the agent's intention or objective.
This paper studies a class of probabilistic planning problems with intention deception and investigates how a defender's limited sensing modality can be exploited.
arXiv Detail & Related papers (2022-09-01T16:38:03Z)
- On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, light-weighted, non-intrusive, and data-undemanding.
arXiv Detail & Related papers (2022-05-19T14:26:50Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- Projective Ranking-based GNN Evasion Attacks [52.85890533994233]
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
GNNs are at risk of adversarial attacks.
arXiv Detail & Related papers (2022-02-25T21:52:09Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
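A generic sketch of disagreement-based uncertainty of the kind described above, using pairwise dissimilarity between several predictions for the same input (the paper's exact dissimilarity function is not reproduced; the L1 choice here is an assumption):
```python
# Generic sketch: uncertainty as average pairwise dissimilarity between several
# per-pixel class-probability maps for the same image (e.g. from different heads
# or stochastic forward passes). Not the paper's exact measure.
import numpy as np

def l1_dissimilarity(p, q):
    """Per-pixel L1 distance between two (H, W, C) probability maps."""
    return np.abs(p - q).sum(axis=-1)

def disagreement_uncertainty(predictions):
    """predictions: list of (H, W, C) softmax outputs for the same input.
    Returns an (H, W) map: higher value = more disagreement = less confidence."""
    n = len(predictions)
    total = np.zeros(predictions[0].shape[:2])
    pairs = 0
    for i in range(n):
        for j in range(i + 1, n):
            total += l1_dissimilarity(predictions[i], predictions[j])
            pairs += 1
    return total / max(pairs, 1)

# Example with two dummy 4x4 predictions over 3 classes:
rng = np.random.default_rng(0)
preds = [rng.dirichlet(np.ones(3), size=(4, 4)) for _ in range(2)]
print(disagreement_uncertainty(preds))
```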
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- Random Projections for Adversarial Attack Detection [8.684378639046644]
Adversarial attack detection remains a fundamentally challenging problem from two perspectives.
We present a technique that makes use of special properties of random projections, whereby we can characterize the behavior of clean and adversarial examples.
Performance evaluation demonstrates that our technique outperforms competing state-of-the-art (SOTA) attack detection strategies ($>0.92$ AUC).
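A rough sketch of the underlying idea, summarizing inputs under several random projections and using the summaries as detector features (illustrative only; the paper's actual procedure and statistics may differ):
```python
# Illustrative sketch: project flattened inputs through several random Gaussian
# matrices and summarize each projection with simple statistics. Clean and
# adversarially perturbed inputs can behave differently under such projections;
# a detector would be trained on these features. Not the paper's exact method.
import numpy as np

def random_projection_features(x, n_projections=8, proj_dim=32, seed=0):
    """x: 1-D input vector (e.g. a flattened image). Returns a feature vector of
    per-projection statistics (mean, std, max absolute value)."""
    rng = np.random.default_rng(seed)
    feats = []
    for _ in range(n_projections):
        # Random Gaussian projection matrix, scaled to roughly preserve norms.
        P = rng.normal(size=(proj_dim, x.shape[0])) / np.sqrt(proj_dim)
        z = P @ x
        feats.extend([z.mean(), z.std(), np.abs(z).max()])
    return np.array(feats)

# Example: features for a clean input and a slightly perturbed one.
x_clean = np.random.default_rng(1).normal(size=3072)        # e.g. 32*32*3 image
x_adv = x_clean + 0.03 * np.sign(np.random.default_rng(2).normal(size=3072))
print(random_projection_features(x_clean)[:6])
print(random_projection_features(x_adv)[:6])
```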
arXiv Detail & Related papers (2020-12-11T15:02:28Z)
- Learning-based attacks in Cyber-Physical Systems: Exploration, Detection, and Control Cost trade-offs [9.453554184019108]
We study the problem of learning-based attacks in linear systems, where the communication channel between the controller and the plant can be hijacked by a malicious attacker.
We assume the attacker learns the dynamics of the system from observations, then overrides the controller's actuation signal.
We show that this bound is also order optimal, in the sense that if the attacker satisfies it, then there exists a learning algorithm with the given order of expected deception time.
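A stripped-down illustration of the attack model described above: the attacker first fits the plant dynamics from observed state/input pairs (ordinary least squares), then could use the fitted model to forge plausible actuation. The scalar plant and variable names are assumptions for the example, not the paper's setting.
```python
# Stripped-down sketch of a learning-based attack on a scalar linear plant
# x[t+1] = a*x[t] + b*u[t] + noise: the attacker estimates (a, b) from observed
# (x, u) pairs via least squares. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5

# Phase 1: the attacker passively observes states and control inputs.
T = 200
x = np.zeros(T + 1)
u = rng.normal(size=T)                       # whatever the legitimate controller sends
for t in range(T):
    x[t + 1] = a_true * x[t] + b_true * u[t] + 0.01 * rng.normal()

# Least-squares estimate of the dynamics from the observed trajectory.
A = np.column_stack([x[:T], u])              # regressors [x_t, u_t]
theta, *_ = np.linalg.lstsq(A, x[1:], rcond=None)
a_hat, b_hat = theta
print(f"estimated dynamics: a={a_hat:.3f}, b={b_hat:.3f}")

# Phase 2 (sketch): with (a_hat, b_hat) the attacker can predict the plant's
# response and substitute its own actuation signal while keeping the prediction
# error small enough to delay detection by the defender.
```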
arXiv Detail & Related papers (2020-11-21T04:08:16Z)
- Is Private Learning Possible with Instance Encoding? [68.84324434746765]
We study whether a non-private learning algorithm can be made private by relying on an instance-encoding mechanism.
We formalize both the notion of instance encoding and its privacy by providing two attack models.
arXiv Detail & Related papers (2020-11-10T18:55:20Z)
- Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders [62.54431888432302]
We study an OPE problem in an infinite-horizon, ergodic Markov decision process with unobserved confounders.
We show how, given only a latent variable model for states and actions, policy value can be identified from off-policy data.
arXiv Detail & Related papers (2020-07-27T22:19:01Z)
- Efficient exploration of zero-sum stochastic games [83.28949556413717]
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
arXiv Detail & Related papers (2020-02-24T20:30:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.