The Bright Side of Timed Opacity
- URL: http://arxiv.org/abs/2408.12240v3
- Date: Fri, 27 Sep 2024 14:00:11 GMT
- Title: The Bright Side of Timed Opacity
- Authors: Étienne André, Sarah Dépernet, Engel Lefaucheux
- Abstract summary: We show that opacity can mostly be retrieved, except for one-action TAs and for one-clock TAs with $\epsilon$-transitions.
We then exhibit a new decidable subclass in which the number of observations made by the attacker is limited.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In 2009, Franck Cassez showed that the timed opacity problem, where an attacker can observe some actions with their timestamps and attempts to deduce information, is undecidable for timed automata (TAs). Moreover, he showed that the undecidability holds even for subclasses such as event-recording automata. In this article, we consider the same definition of opacity for several other subclasses of TAs: with restrictions on the number of clocks, of actions, on the nature of time, or on a new subclass called observable event-recording automata. We show that opacity can mostly be retrieved, except for one-action TAs and for one-clock TAs with $\epsilon$-transitions, for which undecidability remains. We then exhibit a new decidable subclass in which the number of observations made by the attacker is limited.
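For context, the opacity notion referred to in the abstract (Cassez's definition) can be sketched as follows; the notation below is introduced purely for illustration and is not taken verbatim from the paper.
```latex
% Illustrative notation (an assumption of this sketch, not the paper's):
% Obs_priv(A) = set of observable timed words (a_1, t_1) ... (a_n, t_n)
%               produced by runs of the TA A that visit a private location;
% Obs_pub(A)  = set of observable timed words produced by runs of A that
%               never visit a private location.
\[
  \mathcal{A} \text{ is opaque}
  \quad\Longleftrightarrow\quad
  \mathit{Obs}_{\mathrm{priv}}(\mathcal{A}) \subseteq \mathit{Obs}_{\mathrm{pub}}(\mathcal{A})
\]
% i.e., every observation compatible with a visit to a private location is
% also compatible with a run that avoids all private locations, so the
% attacker can never conclude from actions and timestamps alone that the
% secret behaviour occurred.
```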
Related papers
- Relativistic limits on the discretization and temporal resolution of a quantum clock [0.0]
We discuss limits on the discretization and temporal resolution of time values in a quantum clock.
Our clock is characterized by a time observable chosen to be the complement of a bounded and discrete Hamiltonian.
arXiv Detail & Related papers (2025-04-08T09:27:41Z) - Execution-time opacity control for timed automata [0.0]
Timing leaks in timed automata can occur whenever an attacker is able to deduce a secret by observing some timed behavior.
In execution-time opacity, the attacker aims at deducing whether a private location was visited, by observing only the execution time.
We show that we are able to decide whether a TA can be controlled at runtime to ensure opacity.
arXiv Detail & Related papers (2024-09-16T14:46:52Z) - Expiring opacity problems in parametric timed automata [0.0]
We study expiring timed opacity problems in timed automata.
We consider the set of time bounds for which a system is opaque and show when they can be effectively computed for timed automata.
arXiv Detail & Related papers (2024-03-12T13:30:53Z) - Few-shot Learner Parameterization by Diffusion Time-steps [133.98320335394004]
Few-shot learning is still challenging when using large multi-modal foundation models.
We propose Time-step Few-shot (TiF) learner to make up for lost attributes.
TiF learner significantly outperforms OpenCLIP and its adapters on a variety of fine-grained and customized few-shot learning tasks.
arXiv Detail & Related papers (2024-03-05T04:38:13Z) - Continual Diffusion with STAMINA: STack-And-Mask INcremental Adapters [67.28751868277611]
Recent work has demonstrated ability to customize text-to-image diffusion models to multiple, fine-grained concepts in a sequential manner.
We show that the capacity to learn new tasks reaches saturation over longer sequences.
We introduce a novel method, STack-And-Mask INcremental Adapters (STAMINA), which is composed of low-ranked attention-masked adapters and customized tokens.
arXiv Detail & Related papers (2023-11-30T18:04:21Z) - Configuring Timing Parameters to Ensure Execution-Time Opacity in Timed
Automata [2.2003515924552044]
Timed automata are an extension of finite-state automata with a set of clocks evolving linearly (see the illustrative sketch after this list).
We use timed automata as the input formalism, in which we assume that the attacker has access only to the system execution time.
arXiv Detail & Related papers (2023-10-31T12:10:35Z) - Semantics Meets Temporal Correspondence: Self-supervised Object-centric Learning in Videos [63.94040814459116]
Self-supervised methods have shown remarkable progress in learning high-level semantics and low-level temporal correspondence.
We propose a novel semantic-aware masked slot attention on top of the fused semantic features and correspondence maps.
We adopt semantic- and instance-level temporal consistency as self-supervision to encourage temporally coherent object-centric representations.
arXiv Detail & Related papers (2023-08-19T09:12:13Z) - Prune Spatio-temporal Tokens by Semantic-aware Temporal Accumulation [89.88214896713846]
The STA score considers two critical factors: temporal redundancy and semantic importance.
We apply the STA module to off-the-shelf video Transformers and Video Swins.
Results on Kinetics-400 and Something-Something V2 show over 30% computation reduction with a negligible 0.2% accuracy drop.
arXiv Detail & Related papers (2023-08-08T19:38:15Z) - Which Features are Learnt by Contrastive Learning? On the Role of
Simplicity Bias in Class Collapse and Feature Suppression [59.97965005675144]
Contrastive learning (CL) has emerged as a powerful technique for representation learning, with or without label supervision.
We provide the first unified theoretically rigorous framework to determine which features are learnt by CL.
We present increasing embedding dimensionality and improving the quality of data augmentations as two theoretically motivated solutions.
arXiv Detail & Related papers (2023-05-25T23:37:22Z) - Self-Supervised Multi-Object Tracking For Autonomous Driving From
Consistency Across Timescales [53.55369862746357]
Self-supervised multi-object trackers have tremendous potential as they enable learning from raw domain-specific data.
However, their re-identification accuracy still falls short compared to their supervised counterparts.
We propose a training objective that enables self-supervised learning of re-identification features from multiple sequential frames.
arXiv Detail & Related papers (2023-04-25T20:47:29Z) - A Generalized & Robust Framework For Timestamp Supervision in Temporal
Action Segmentation [79.436224998992]
In temporal action segmentation, timestamp supervision requires only a handful of labelled frames per video sequence.
We propose a novel Expectation-Maximization based approach that leverages the label uncertainty of unlabelled frames.
Our proposed method produces SOTA results and even exceeds the fully-supervised setup in several metrics and datasets.
arXiv Detail & Related papers (2022-07-20T18:30:48Z) - Star Temporal Classification: Sequence Classification with Partially
Labeled Data [31.98593136313469]
We develop an algorithm which can learn from partially labeled and unsegmented sequential data.
We use a special star token to allow alignments which include all possible tokens whenever a token could be missing.
We also perform experiments in handwriting recognition to show that our method easily applies to other sequence classification tasks.
arXiv Detail & Related papers (2022-01-28T16:03:17Z) - Self-Regulated Learning for Egocentric Video Activity Anticipation [147.9783215348252]
Self-Regulated Learning (SRL) aims to regulate the intermediate representation consecutively to produce a representation that emphasizes the novel information in the frame at the current time-stamp.
SRL sharply outperforms existing state-of-the-art in most cases on two egocentric video datasets and two third-person video datasets.
arXiv Detail & Related papers (2021-11-23T03:29:18Z) - Exploring Visual Context for Weakly Supervised Person Search [155.46727990750227]
Person search has recently emerged as a challenging task that jointly addresses pedestrian detection and person re-identification.
Existing approaches follow a fully supervised setting where both bounding box and identity annotations are available.
This paper inventively considers weakly supervised person search with only bounding box annotations.
arXiv Detail & Related papers (2021-06-19T14:47:13Z) - Self-Supervised Learning for Semi-Supervised Temporal Action Proposal [42.6254639252739]
We design an effective Self-supervised Semi-supervised Temporal Action Proposal (SSTAP) framework.
The SSTAP contains two crucial branches, i.e., temporal-aware semi-supervised branch and relation-aware self-supervised branch.
We extensively evaluate the proposed SSTAP on THUMOS14 and ActivityNet v1.3 datasets.
arXiv Detail & Related papers (2021-04-07T16:03:25Z) - CLTA: Contents and Length-based Temporal Attention for Few-shot Action
Recognition [2.0349696181833337]
We propose a Contents and Length-based Temporal Attention model, which learns customized temporal attention for the individual video.
We show that even a backbone that is not fine-tuned, paired with an ordinary softmax classifier, can still achieve results similar to or better than state-of-the-art few-shot action recognition.
arXiv Detail & Related papers (2021-03-18T23:40:28Z) - Unsupervised Learning on Monocular Videos for 3D Human Pose Estimation [121.5383855764944]
We use contrastive self-supervised learning to extract rich latent vectors from single-view videos.
We show that applying CSS only to the time-variant features, while also reconstructing the input and encouraging a gradual transition between nearby and away features, yields a rich latent space.
Our approach outperforms other unsupervised single-view methods and matches the performance of multi-view techniques.
arXiv Detail & Related papers (2020-12-02T20:27:35Z) - Uncertainty-Aware Weakly Supervised Action Detection from Untrimmed
Videos [82.02074241700728]
In this paper, we present an action recognition model that is trained with only video-frame labels.
Our method leverages per-person detectors that have been trained on large image datasets within a Multiple Instance Learning framework.
We show how we can apply our method in cases where the standard Multiple Instance Learning assumption, that each bag contains at least one instance with the specified label, is invalid.
arXiv Detail & Related papers (2020-07-21T10:45:05Z) - A Few-Shot Sequential Approach for Object Counting [63.82757025821265]
We introduce a class attention mechanism that sequentially attends to objects in the image and extracts their relevant features.
The proposed technique is trained on point-level annotations and uses a novel loss function that disentangles class-dependent and class-agnostic aspects of the model.
We present our results on a variety of object-counting/detection datasets, including FSOD and MS COCO.
arXiv Detail & Related papers (2020-07-03T18:23:39Z) - Active learning of timed automata with unobservable resets [0.5801044612920815]
Active learning of timed languages is concerned with the inference of timed automata from observed words.
The major difficulty of this framework is the inference of clock resets, central to the dynamics of timed automata.
We generalize this framework to a new class, called reset-free event-recording automata, where some transitions may reset no clocks.
arXiv Detail & Related papers (2020-07-03T12:13:42Z)
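The entries above on execution-time opacity and on configuring timing parameters both rely on the standard picture of a timed automaton: a finite-state automaton extended with real-valued clocks that evolve linearly and are compared against guards. The following is a minimal, self-contained Python sketch of that idea and of why timestamps can leak a secret; the class names, the toy automaton, and the attacker's reasoning are assumptions made for this sketch only, not the formalism or tooling of any paper listed here.
```python
# Minimal illustrative sketch (assumptions of this summary page, not any
# paper's formalism or library API): a timed automaton with one clock, and a
# check of which branches an observed timed word is compatible with.
from dataclasses import dataclass, field


@dataclass
class Transition:
    source: str
    action: str
    guard: callable        # clock valuation (dict clock -> float) -> bool
    resets: set            # clocks reset to 0 when the transition fires
    target: str


@dataclass
class TimedAutomaton:
    locations: set
    clocks: set
    initial: str
    transitions: list
    private: set = field(default_factory=set)   # "secret" locations

    def outcomes(self, timed_word):
        """All (final location, visited a private location?) pairs reachable
        on the timed word [(delay, action), ...]."""
        results = set()

        def explore(location, valuation, visited, remaining):
            if not remaining:
                results.add((location, visited))
                return
            (delay, action), rest = remaining[0], remaining[1:]
            # All clocks evolve linearly (at rate 1) while time elapses.
            val = {c: v + delay for c, v in valuation.items()}
            for t in self.transitions:
                if t.source == location and t.action == action and t.guard(val):
                    new_val = {c: (0.0 if c in t.resets else v)
                               for c, v in val.items()}
                    explore(t.target, new_val,
                            visited or t.target in self.private, rest)

        explore(self.initial, {c: 0.0 for c in self.clocks},
                self.initial in self.private, list(timed_word))
        return results


# One clock x; the public branch requires x <= 2, the secret branch x >= 1,
# so the timestamp of the single observable action 'a' may leak the secret.
ta = TimedAutomaton(
    locations={"l0", "l_priv", "l_pub"},
    clocks={"x"},
    initial="l0",
    transitions=[
        Transition("l0", "a", lambda v: v["x"] <= 2, set(), "l_pub"),
        Transition("l0", "a", lambda v: v["x"] >= 1, set(), "l_priv"),
    ],
    private={"l_priv"},
)

print(ta.outcomes([(0.5, "a")]))   # {('l_pub', False)}: only the public branch
print(ta.outcomes([(1.5, "a")]))   # both branches: the observation is ambiguous
print(ta.outcomes([(2.5, "a")]))   # {('l_priv', True)}: the timestamp reveals the secret
```
On this toy automaton, an `a` observed at time 0.5 or 2.5 pins down which branch was taken, whereas at time 1.5 both a secret and a non-secret explanation exist; opacity, in the sense sketched after the main abstract, asks that every possible observation admit a non-secret explanation.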
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.