Learning Weighted Finite Automata over the Max-Plus Semiring and its Termination
- URL: http://arxiv.org/abs/2407.09775v1
- Date: Sat, 13 Jul 2024 05:08:06 GMT
- Title: Learning Weighted Finite Automata over the Max-Plus Semiring and its Termination
- Authors: Takamasa Okudono, Masaki Waga, Taro Sekiyama, Ichiro Hasuo
- Abstract summary: We study an L*-style learning algorithm for weighted automata over the max-plus semiring.
We show that it can fail to maintain consistency of tables, and can thus make equivalence queries on obviously wrong hypothesis automata.
- Score: 2.024925013349319
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning of finite automata has been vigorously pursued for the purposes of analysis and explanation of black-box systems. In this paper, we study an L*-style learning algorithm for weighted automata over the max-plus semiring. The max-plus setting exposes a "consistency" issue in the previously studied semiring-generic extension of L*: we show that it can fail to maintain consistency of tables, and can thus make equivalence queries on obviously wrong hypothesis automata. We present a theoretical fix by a mathematically clean notion of column-closedness. We also present a nontrivial and reasonably broad class of weighted languages over the max-plus semiring in which our algorithm terminates.
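In the max-plus (tropical) semiring, addition is max and multiplication is +, so a weighted automaton assigns each word the maximum, over all runs, of the summed transition weights. The sketch below evaluates such an automaton; the two-state machine and its weights are invented for illustration and are not taken from the paper.

```python
# A minimal sketch of evaluating a weighted automaton over the max-plus
# semiring (R ∪ {-inf}, max, +). The two-state automaton below is invented
# for illustration; it is not an example from the paper.
NEG_INF = float("-inf")

def evaluate(initial, transitions, final, word):
    """Value of `word`: the max over all runs of summed weights."""
    vec = list(initial)  # forward weight vector, one entry per state
    for symbol in word:
        m = transitions[symbol]
        n = len(vec)
        # Max-plus vector-matrix product: vec'_j = max_i (vec_i + m[i][j]).
        vec = [max(vec[i] + m[i][j] for i in range(n)) for j in range(n)]
    return max(v + f for v, f in zip(vec, final))

initial = [0.0, NEG_INF]                 # state 0 is initial
final = [NEG_INF, 0.0]                   # state 1 is final
transitions = {
    "a": [[1.0, 2.0], [NEG_INF, 0.0]],   # weight of edge i --a--> j at [i][j]
    "b": [[NEG_INF, 5.0], [NEG_INF, 1.0]],
}
print(evaluate(initial, transitions, final, "ab"))  # -> 6.0 (best run: 0 -a-> 0 -b-> 1)
```

An L*-style learner obtains exactly such word values through membership queries and records them in an observation table; the consistency issue above concerns turning that table into a well-defined hypothesis automaton.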
Related papers
- Learning Quantitative Automata Modulo Theories [17.33092604696224]
We present QUINTIC, an active learning algorithm, wherein the learner infers a valid automaton through deductive reasoning.
Our evaluations use the theory of rationals to learn summation, discounted summation, product, and classification quantitative automata.
arXiv Detail & Related papers (2024-11-15T21:51:14Z) - An $\mathbf{L^*}$ Algorithm for Deterministic Weighted Regular Languages [41.871773940580105]
We present a weighted variant of Angluin's (1987) $\mathbf{L^*}$ algorithm for learning FSAs.
We show how $\mathbf{L^*}$ directly learns a minimal automaton for the target language.
arXiv Detail & Related papers (2024-11-09T16:17:14Z) - LLMs as Probabilistic Minimally Adequate Teachers for DFA Learning [11.037017229299607]
The emergence of intelligence in large language models (LLMs) has inspired investigations into their integration into automata learning.
This paper introduces the probabilistic Minimally Adequate Teacher (pMAT) formulation.
We develop techniques to improve answer accuracy and ensure the correctness of the learned automata.
arXiv Detail & Related papers (2024-08-06T07:12:09Z) - When can you trust feature selection? -- I: A condition-based analysis
of LASSO and generalised hardness of approximation [49.1574468325115]
We show that no (randomised) algorithm can determine the correct support sets (with probability $> 1/2$) of minimisers of LASSO when reading approximate input.
For ill-posed inputs, the algorithm runs forever and thus never produces a wrong answer.
For any algorithm defined on an open set containing a point with infinite condition number, there is an input for which the algorithm will either run forever or produce a wrong answer.
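For reference, one standard (unconstrained) formulation of the LASSO problem mentioned above, together with the support set whose exact recovery is at issue:

```latex
\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\,\lVert Ax - b \rVert_2^2 + \lambda \lVert x \rVert_1,
\qquad
\operatorname{supp}(x^\star) = \{\, i : x^\star_i \neq 0 \,\}.
```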
arXiv Detail & Related papers (2023-12-18T18:29:01Z) - Automata Learning from Preference and Equivalence Queries [17.33092604696224]
We propose a novel variant of the active automata learning problem: actively learn finite automata using preference queries.
REMAP is guaranteed to correctly infer the minimal automaton with polynomial query complexity under exact equivalence queries.
Our empirical evaluations indicate REMAP scales to large automata and is effective at learning correct automata from consistent teachers.
arXiv Detail & Related papers (2023-08-18T04:49:45Z) - SatLM: Satisfiability-Aided Language Models Using Declarative Prompting [68.40726892904286]
We propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of large language models (LLMs).
We use an LLM to generate a declarative task specification rather than an imperative program and leverage an off-the-shelf automated theorem prover to derive the final answer.
We evaluate SatLM on 8 different datasets and show that it consistently outperforms program-aided LMs in the imperative paradigm.
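As a rough illustration of the declarative idea: instead of generating an imperative program that computes the answer, the model emits constraints and a solver derives the answer. The sketch below uses the Z3 SMT solver as a stand-in for the paper's automated theorem prover; the word problem and variable names are invented.

```python
# A hedged sketch of the declarative paradigm: constraints in, answer out.
# Z3 stands in for the paper's solver; the task itself is invented.
from z3 import Int, Solver, sat

# "Alice has twice as many apples as Bob; together they have 18."
alice, bob = Int("alice"), Int("bob")
s = Solver()
s.add(alice == 2 * bob)   # declarative task specification,
s.add(alice + bob == 18)  # as an LLM might emit it
if s.check() == sat:
    print(s.model())      # e.g. [bob = 6, alice = 12]
```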
arXiv Detail & Related papers (2023-05-16T17:55:51Z) - Differentially-Private Hierarchical Clustering with Provable Approximation Guarantees [79.59010418610625]
We study differentially private approximation algorithms for hierarchical clustering.
We show strong lower bounds for the problem: any $\epsilon$-DP algorithm must exhibit $\Omega(|V|^2/\epsilon)$-additive error for an input dataset $V$.
We propose a private $1+o(1)$ approximation algorithm which also recovers the blocks exactly.
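For background, the standard way to obtain $\epsilon$-DP for a numeric query with $\ell_1$-sensitivity $\Delta$ is to add Laplace$(\Delta/\epsilon)$ noise. The sketch below is generic differential-privacy machinery, not the paper's clustering algorithm.

```python
# Background only: the standard Laplace mechanism for epsilon-DP.
# Not the paper's private hierarchical-clustering algorithm.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value + Laplace(sensitivity / epsilon) noise,
    which is epsilon-DP for a query with l1-sensitivity `sensitivity`."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

print(laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5))
```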
arXiv Detail & Related papers (2023-01-31T19:14:30Z) - Quantum Finite Automata and Quiver Algebras [0.0]
We reformulate quantum finite automata with multiple-time measurements using the notion of near-rings.
This gives a unified understanding of quantum computing and deep learning.
arXiv Detail & Related papers (2022-03-15T02:12:13Z) - Induction and Exploitation of Subgoal Automata for Reinforcement Learning [75.55324974788475]
We present ISA, an approach for learning and exploiting subgoals in episodic reinforcement learning (RL) tasks.
ISA interleaves reinforcement learning with the induction of a subgoal automaton, an automaton whose edges are labeled by the task's subgoals.
A subgoal automaton also includes two special states: one indicating the successful completion of the task, and one indicating that the task has finished without succeeding.
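As a rough data-structure sketch of that idea (field names and the toy task below are hypothetical, not ISA's actual representation or induction procedure):

```python
# Illustrative-only sketch of a subgoal automaton: edges labeled by subgoals,
# plus one accepting state (task solved) and one rejecting state (task failed).
from dataclasses import dataclass, field

@dataclass
class SubgoalAutomaton:
    states: set[str]
    initial: str
    accepting: str   # task completed successfully
    rejecting: str   # task finished without succeeding
    # transitions[(state, subgoal)] -> next state
    transitions: dict[tuple[str, str], str] = field(default_factory=dict)

    def step(self, state: str, observed_subgoal: str) -> str:
        """Advance on an observed subgoal; stay put if no edge matches."""
        return self.transitions.get((state, observed_subgoal), state)

# Toy task: reach the key, then the door; touching lava fails the task.
aut = SubgoalAutomaton(
    states={"start", "has_key", "done", "failed"},
    initial="start", accepting="done", rejecting="failed",
    transitions={("start", "key"): "has_key", ("has_key", "door"): "done",
                 ("start", "lava"): "failed", ("has_key", "lava"): "failed"},
)
print(aut.step(aut.step("start", "key"), "door"))  # -> "done"
```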
arXiv Detail & Related papers (2020-09-08T16:42:55Z) - Efficient Optimistic Exploration in Linear-Quadratic Regulators via Lagrangian Relaxation [107.06364966905821]
We study the exploration-exploitation dilemma in the linear quadratic regulator (LQR) setting.
Inspired by the extended value iteration algorithm used in optimistic algorithms for finite MDPs, we propose to relax the optimistic optimization of OFULQ.
We show that an $\epsilon$-optimistic controller can be computed efficiently by solving at most $O\big(\log(1/\epsilon)\big)$ Riccati equations.
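For context, each of those Riccati equations is of the kind routinely solved in LQR synthesis. Below is a generic discrete-time LQR solve with SciPy; the system matrices are arbitrary illustrative values, and this is not the paper's optimistic procedure.

```python
# Generic discrete-time LQR via the discrete algebraic Riccati equation;
# background for the abstract above, not the paper's optimistic algorithm.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # dynamics x' = A x + B u
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                             # state cost
R = np.array([[1.0]])                     # control cost

P = solve_discrete_are(A, B, Q, R)        # solve the Riccati equation
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain, u = -K x
print(K)
```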
arXiv Detail & Related papers (2020-07-13T16:30:47Z) - Competitive Mirror Descent [67.31015611281225]
Constrained competitive optimization involves multiple agents trying to minimize conflicting objectives, subject to constraints.
We propose competitive mirror descent (CMD): a general method for solving such problems based on first-order information.
As a special case we obtain a novel competitive multiplicative weights algorithm for problems on the positive cone.
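For flavour, the classical single-player multiplicative-weights update multiplies each coordinate by an exponential of its negative loss and renormalises; the sketch below is that textbook step on the probability simplex, not the paper's competitive two-player CMD scheme.

```python
# Textbook multiplicative-weights / entropic mirror-descent step on the
# simplex (a face of the positive cone). Background only; not CMD itself.
import numpy as np

def mw_step(weights, losses, eta=0.1):
    """Multiply each weight by exp(-eta * loss), then renormalise."""
    w = weights * np.exp(-eta * np.asarray(losses))
    return w / w.sum()

w = np.ones(3) / 3                         # uniform start
print(mw_step(w, losses=[1.0, 0.0, 2.0]))  # mass shifts to low-loss coords
```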
arXiv Detail & Related papers (2020-06-17T22:11:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.