Extending AALpy with Passive Learning: A Generalized State-Merging Approach
- URL: http://arxiv.org/abs/2506.06333v2
- Date: Thu, 12 Jun 2025 07:46:40 GMT
- Title: Extending AALpy with Passive Learning: A Generalized State-Merging Approach
- Authors: Benjamin von Berg, Bernhard K. Aichernig
- Abstract summary: AALpy is a well-established open-source automata learning library written in Python. We describe how to define and execute state-merging algorithms using AALpy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AALpy is a well-established open-source automata learning library written in Python with a focus on active learning of systems with IO behavior. It provides a wide range of state-of-the-art algorithms for different automaton types ranging from fully deterministic to probabilistic automata. In this work, we present the recent addition of a generalized implementation of an important method from the domain of passive automata learning: state-merging in the red-blue framework. Using a common internal representation for different automaton types allows for a general and highly configurable implementation of the red-blue framework. We describe how to define and execute state-merging algorithms using AALpy, which reduces the implementation effort for state-merging algorithms mainly to the definition of compatibility criteria and scoring. This aids the implementation of both existing and novel algorithms. In particular, defining some existing state-merging algorithms from the literature with AALpy only takes a few lines of code.
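The red-blue state-merging framework mentioned in the abstract can be illustrated with a minimal RPNI-style sketch for DFAs. This is a simplified, hypothetical illustration of the general technique, not AALpy's implementation or API: it builds a prefix tree acceptor from labeled words, then repeatedly tries to fold each "blue" frontier state into a "red" core state, promoting the blue state to red when no merge is compatible. The compatibility criterion here is plain accept/reject label consistency; in the generalized framework the paper describes, this criterion and the merge scoring are exactly the configurable plug-in points.

```python
def build_pta(samples):
    """Build a prefix tree acceptor: parallel lists of child maps and labels."""
    children, labels = [{}], [None]
    for word, accepted in samples:
        cur = 0
        for sym in word:
            if sym not in children[cur]:
                children.append({})
                labels.append(None)
                children[cur][sym] = len(children) - 1
            cur = children[cur][sym]
        labels[cur] = accepted
    return children, labels

def find(part, x):
    """Representative of x's merge class (union-find without mutation)."""
    while part[x] != x:
        x = part[x]
    return x

def try_merge(part, labmap, trans, r, b):
    """Try to merge blue class b into red class r, folding the blue subtree
    to keep the automaton deterministic. Returns updated structures, or None
    if the compatibility criterion (consistent accept/reject labels) fails."""
    part = dict(part)
    labmap = dict(labmap)
    trans = {q: dict(t) for q, t in trans.items()}
    stack = [(r, b)]
    while stack:
        x, y = stack.pop()
        x, y = find(part, x), find(part, y)
        if x == y:
            continue
        if labmap[x] is not None and labmap[y] is not None and labmap[x] != labmap[y]:
            return None                      # label conflict: merge rejected
        part[y] = x
        if labmap[x] is None:
            labmap[x] = labmap[y]
        for sym, ty in trans[y].items():     # fold outgoing transitions
            if sym in trans[x]:
                stack.append((trans[x][sym], ty))  # shared symbol forces a child merge
            else:
                trans[x][sym] = ty
        del trans[y]
    return part, labmap, trans

def rpni(samples):
    """Red-blue loop: merge or promote blue frontier states until none remain."""
    children, labels = build_pta(samples)
    part = {q: q for q in range(len(labels))}
    labmap = dict(enumerate(labels))
    trans = {q: dict(children[q]) for q in range(len(labels))}
    red = [0]
    while True:
        redset = {find(part, q) for q in red}
        blue = sorted(find(part, t) for q in redset for t in trans[q].values()
                      if find(part, t) not in redset)
        if not blue:
            return part, labmap, trans
        b = blue[0]
        for r in sorted(redset):
            merged = try_merge(part, labmap, trans, r, b)
            if merged is not None:
                part, labmap, trans = merged
                break
        else:
            red.append(b)                    # no compatible red state: promote

def accepts(model, word):
    """Run a word through the merged automaton; unseen transitions reject."""
    part, labmap, trans = model
    cur = find(part, 0)
    for sym in word:
        if sym not in trans[cur]:
            return False
        cur = find(part, trans[cur][sym])
    return labmap[cur] is True
```

By construction the learned automaton stays consistent with the training samples, since a merge is committed only when no label conflict arises during folding. Swapping `try_merge`'s conflict check for another compatibility test (e.g. a statistical one for probabilistic automata) changes the algorithm, which is the flexibility the paper's generalized implementation provides.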
Related papers
- AlgOS: Algorithm Operating System (arXiv, 2025-04-07)
  AlgOS is an unopinionated, modular framework for algorithmic implementations. It is designed to reduce the overhead of implementing new algorithms and to standardise the comparison of algorithms.
- AutoAL: Automated Active Learning with Differentiable Query Strategy Search (arXiv, 2024-10-17)
  This work presents the first differentiable active learning strategy search method, named AutoAL. SearchNet and FitNet are co-optimized using labeled data, learning how well a set of candidate AL algorithms perform on a given task. AutoAL consistently achieves superior accuracy compared to all candidate AL algorithms and other selective AL approaches.
- State Matching and Multiple References in Adaptive Active Automata Learning (arXiv, 2024-06-28)
  State matching is the main ingredient of adaptive L#, a novel framework for adaptive learning. The empirical evaluation shows that adaptive L# improves the state of the art by up to two orders of magnitude.
- Unified Functional Hashing in Automatic Machine Learning (arXiv, 2023-02-10)
  Large efficiency gains can be obtained by employing a fast unified functional hash. The hash is "functional" in that it identifies equivalent candidates even if they are represented or coded differently. Dramatic improvements are shown on multiple AutoML domains, including neural architecture search and algorithm discovery.
- Open-Set Automatic Target Recognition (arXiv, 2022-11-10)
  Automatic Target Recognition (ATR) is a category of computer vision algorithms that attempt to recognize targets in data obtained from different sensors. Existing ATR algorithms are developed for traditional closed-set settings where training and testing have the same class distribution. This work proposes an open-set ATR framework that enables open-set recognition capability for ATR algorithms.
- ALBench: A Framework for Evaluating Active Learning in Object Detection (arXiv, 2022-07-27)
  This paper contributes an active learning benchmark framework named ALBench for evaluating active learning in object detection. Built on an automatic deep-model training system, ALBench is easy to use, compatible with different active learning algorithms, and ensures the same training and testing protocols.
- Language Inference with Multi-head Automata through Reinforcement Learning (arXiv, 2020-10-20)
  Six different languages are formulated as reinforcement learning problems. Agents are modeled as simple multi-head automata. A genetic algorithm performs better than Q-learning in general.
- Induction and Exploitation of Subgoal Automata for Reinforcement Learning (arXiv, 2020-09-08)
  ISA is an approach for learning and exploiting subgoals in episodic reinforcement learning (RL) tasks. It interleaves reinforcement learning with the induction of a subgoal automaton, an automaton whose edges are labeled by the task's subgoals. A subgoal automaton also contains two special states: one indicating successful completion of the task, and one indicating that the task has finished without succeeding.
- Discovering Reinforcement Learning Algorithms (arXiv, 2020-07-17)
  Reinforcement learning algorithms update an agent's parameters according to one of several possible rules. This paper introduces a new meta-learning approach that discovers an entire update rule, including both "what to predict" (e.g. value functions) and "how to learn from it", by interacting with a set of environments.
- OPFython: A Python-Inspired Optimum-Path Forest Classifier (arXiv, 2020-01-28)
  This paper proposes a Python-based Optimum-Path Forest framework, denoted OPFython. As a Python-based library, OPFython provides a friendlier environment and a faster prototyping workspace than the C language.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.