Active learning of timed automata with unobservable resets
- URL: http://arxiv.org/abs/2007.01637v2
- Date: Wed, 8 Jul 2020 07:30:20 GMT
- Title: Active learning of timed automata with unobservable resets
- Authors: Léo Henry, Nicolas Markey, Thierry Jéron
- Abstract summary: Active learning of timed languages is concerned with the inference of timed automata from observed words.
The major difficulty of this framework is the inference of clock resets, central to the dynamics of timed automata.
We generalize this framework to a new class, called reset-free event-recording automata, where some transitions may reset no clocks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning of timed languages is concerned with the inference of timed
automata from observed timed words. The agent can query for the membership of
words in the target language, or propose a candidate model and verify its
equivalence to the target. The major difficulty of this framework is the
inference of clock resets, central to the dynamics of timed automata, but not
directly observable. Interesting first steps have already been made by
restricting to the subclass of event-recording automata, where clock resets are
tied to observations. In order to advance towards learning of general timed
automata, we generalize this method to a new class, called reset-free
event-recording automata, where some transitions may reset no clocks. This
offers the same challenges as generic timed automata while keeping the simpler
framework of event-recording automata for the sake of readability. Central to
our contribution is the notion of invalidity, and the algorithm and data
structures to deal with it, allowing on-the-fly detection and pruning of reset
hypotheses that contradict observations, a key to any efficient active-learning
procedure for generic timed automata.
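The query model described in the abstract (membership queries on timed words, plus equivalence queries against a candidate model) can be sketched in a few lines. The following Python is an illustrative assumption, not the paper's construction: the `Teacher` class, the toy target language, and the pool-based approximation of an equivalence query are all hypothetical.

```python
from typing import Optional, Tuple

# A timed word is a sequence of (event, delay) pairs.
TimedWord = Tuple[Tuple[str, float], ...]

class Teacher:
    """Answers the two query types available to the learner."""
    def __init__(self, accepts):
        self._accepts = accepts  # predicate: TimedWord -> bool

    def membership(self, w: TimedWord) -> bool:
        return self._accepts(w)

    def equivalence(self, hypothesis, test_pool) -> Optional[TimedWord]:
        # Approximates an equivalence query by testing a finite pool of
        # timed words; returns a counterexample where the two disagree.
        for w in test_pool:
            if hypothesis(w) != self._accepts(w):
                return w
        return None

# Toy target language: every 'a' must be followed by a 'b' within
# one time unit (a caricature of a clock constraint).
def target(w: TimedWord) -> bool:
    for i, (event, _) in enumerate(w):
        if event == 'a':
            if i + 1 >= len(w) or w[i + 1][0] != 'b' or w[i + 1][1] > 1.0:
                return False
    return True

teacher = Teacher(target)

# A deliberately wrong hypothesis that ignores timing altogether:
def hypothesis(w: TimedWord) -> bool:
    return all(event != 'a' or (i + 1 < len(w) and w[i + 1][0] == 'b')
               for i, (event, _) in enumerate(w))

pool = [(('a', 0.5), ('b', 0.4)), (('a', 0.2), ('b', 2.0)), (('b', 3.0),)]
cex = teacher.equivalence(hypothesis, pool)
# cex is the word the hypothesis misclassifies: (('a', 0.2), ('b', 2.0))
```

A counterexample returned by the equivalence query is precisely the kind of observation from which reset hypotheses must then be inferred, since the resets themselves never appear in the timed word.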
Related papers
- Bisimulation Learning [55.859538562698496]
We compute finite bisimulations of state transition systems with large, possibly infinite state space.
Our technique yields faster verification results than alternative state-of-the-art tools in practice.
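The finite-bisimulation computation summarized above can be illustrated with naive Kanellakis-Smolka partition refinement; the transition system below is a made-up example, not taken from the paper.

```python
# Coarsest bisimulation of a finite labelled transition system via
# naive partition refinement: repeatedly split a block by whether its
# states have an a-successor inside some splitter block.
def bisimulation(states, labels, trans):
    """trans maps (state, label) -> set of successor states."""
    partition = [set(states)]
    changed = True
    while changed:
        changed = False
        for block in list(partition):
            for a in labels:
                for splitter in list(partition):
                    # States in `block` with an a-successor in `splitter`.
                    hit = {s for s in block
                           if trans.get((s, a), set()) & splitter}
                    if hit and hit != block:
                        partition.remove(block)
                        partition.extend([hit, block - hit])
                        changed = True
                        break
                if changed:
                    break
            if changed:
                break
    return partition

# 'p' and 'q' loop on 'a' and are bisimilar; 'r' is deadlocked.
trans = {('p', 'a'): {'q'}, ('q', 'a'): {'p'}}
blocks = bisimulation({'p', 'q', 'r'}, {'a'}, trans)
# blocks is [{'p', 'q'}, {'r'}] up to ordering
```

Each split strictly grows the partition, so the loop terminates after at most |states| refinements; production tools use sharper splitter bookkeeping, but the fixpoint is the same.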
arXiv Detail & Related papers (2024-05-24T17:11:27Z) - Continuously Learning New Words in Automatic Speech Recognition [56.972851337263755]
We propose a self-supervised continual learning approach to recognize new words.
We use a memory-enhanced Automatic Speech Recognition model from previous work.
We show that with this approach, we obtain increasing performance on the new words when they occur more frequently.
arXiv Detail & Related papers (2024-01-09T10:39:17Z) - Mitigating the Learning Bias towards Repetition by Self-Contrastive Training for Open-Ended Generation [92.42032403795879]
We show that pretrained language models (LMs) such as GPT-2 still tend to generate repetitive text.
We attribute their overestimation of token-level repetition probabilities to the learning bias.
We find that LMs use longer-range dependencies to predict repetitive tokens than non-repetitive ones, which may be the cause of sentence-level repetition loops.
arXiv Detail & Related papers (2023-07-04T07:53:55Z) - Self-Supervised Multi-Object Tracking For Autonomous Driving From Consistency Across Timescales [53.55369862746357]
Self-supervised multi-object trackers have tremendous potential as they enable learning from raw domain-specific data.
However, their re-identification accuracy still falls short of that of their supervised counterparts.
We propose a training objective that enables self-supervised learning of re-identification features from multiple sequential frames.
arXiv Detail & Related papers (2023-04-25T20:47:29Z) - Automating Staged Rollout with Reinforcement Learning [1.3750624267664155]
Staged rollout is a strategy of incrementally releasing software updates to portions of the user population in order to accelerate defect discovery without incurring catastrophic outcomes such as system-wide outages.
This paper demonstrates the potential to automate staged rollout with multi-objective reinforcement learning in order to dynamically balance stakeholder needs such as time to deliver new features and downtime incurred by failures due to latent defects.
arXiv Detail & Related papers (2022-04-01T21:22:39Z) - Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval [129.25914272977542]
RetoMaton is a weighted finite automaton built on top of the datastore.
Traversing this automaton at inference time, in parallel to the LM inference, reduces the LM's perplexity.
arXiv Detail & Related papers (2022-01-28T21:38:56Z) - Semantic Code Classification for Automated Machine Learning [0.0]
We propose a way to control the output via a sequence of simple actions, that are called semantic code classes.
We present a semantic code classification task and discuss methods for solving this problem on the Natural Language to Machine Learning (NL2ML) dataset.
arXiv Detail & Related papers (2022-01-25T10:40:37Z) - End to End ASR System with Automatic Punctuation Insertion [0.0]
We propose a method to generate punctuated transcript for the TEDLIUM dataset using transcripts available from ted.com.
We also propose an end-to-end ASR system that outputs words and punctuations concurrently from speech signals.
arXiv Detail & Related papers (2020-12-03T15:46:43Z) - Induction and Exploitation of Subgoal Automata for Reinforcement Learning [75.55324974788475]
We present ISA, an approach for learning and exploiting subgoals in episodic reinforcement learning (RL) tasks.
ISA interleaves reinforcement learning with the induction of a subgoal automaton, an automaton whose edges are labeled by the task's subgoals.
A subgoal automaton also contains two special states: one indicating successful completion of the task, and one indicating that the task has finished without succeeding.
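The structure just described, edges labeled by subgoals plus distinguished accepting and rejecting states, can be encoded very compactly. The class and the key/door/lava task below are hypothetical illustrations, not the ISA paper's implementation.

```python
# A tiny encoding of a subgoal automaton: a transition table over
# subgoal labels plus two special terminal states.
class SubgoalAutomaton:
    ACCEPT, REJECT = "accept", "reject"

    def __init__(self, initial, edges):
        self.initial = initial
        self.edges = edges  # dict: (state, subgoal) -> next state

    def run(self, subgoals):
        """Follow observed subgoals; stay put on unlabeled ones."""
        state = self.initial
        for g in subgoals:
            state = self.edges.get((state, g), state)
            if state in (self.ACCEPT, self.REJECT):
                break
        return state

# Hypothetical task: fetch the key, then open the door; lava fails.
aut = SubgoalAutomaton("start", {
    ("start", "key"): "has_key",
    ("has_key", "door"): SubgoalAutomaton.ACCEPT,
    ("start", "lava"): SubgoalAutomaton.REJECT,
    ("has_key", "lava"): SubgoalAutomaton.REJECT,
})
aut.run(["key", "door"])  # reaches "accept"
aut.run(["key", "lava"])  # reaches "reject"
```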
arXiv Detail & Related papers (2020-09-08T16:42:55Z) - Increasing the Inference and Learning Speed of Tsetlin Machines with Clause Indexing [9.440900386313215]
The Tsetlin Machine (TM) is a machine learning algorithm founded on the classical Tsetlin Automaton (TA) and game theory.
We report up to 15 times faster classification and three times faster learning on MNIST and Fashion-MNIST image classification, and IMDb sentiment analysis.
arXiv Detail & Related papers (2020-04-07T08:16:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.