An $\mathbf{L^*}$ Algorithm for Deterministic Weighted Regular Languages
- URL: http://arxiv.org/abs/2411.06228v1
- Date: Sat, 09 Nov 2024 16:17:14 GMT
- Title: An $\mathbf{L^*}$ Algorithm for Deterministic Weighted Regular Languages
- Authors: Clemente Pasti, Talu Karagöz, Anej Svete, Franz Nowak, Reda Boumasmoud, Ryan Cotterell
- Abstract summary: We present a weighted variant of Angluin's (1987) $\mathbf{L^*}$ algorithm for learning FSAs.
We show how $\mathbf{L^*}$ directly learns a minimal automaton for the target language.
- Score: 41.871773940580105
- Abstract: Extracting finite state automata (FSAs) from black-box models offers a powerful approach to gaining interpretable insights into complex model behaviors. To support this pursuit, we present a weighted variant of Angluin's (1987) $\mathbf{L^*}$ algorithm for learning FSAs. We stay faithful to the original algorithm, devising a way to exactly learn deterministic weighted FSAs whose weights support division. Furthermore, we formulate the learning process in a manner that highlights the connection with FSA minimization, showing how $\mathbf{L^*}$ directly learns a minimal automaton for the target language.
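As a minimal sketch of the observation-table mechanics behind a weighted $\mathbf{L^*}$ (assuming Python floats as the weight field so division is available; the toy target and the names `weight`, `row`, and `proportional` are illustrative, and equivalence queries and counterexample handling are omitted):

```python
ALPHABET = "ab"

def weight(w: str) -> float:
    # Toy target language: weight 0.5^|w| for strings ending in "a", else 0.
    return 0.5 ** len(w) if w.endswith("a") else 0.0

def row(prefix, suffixes):
    # One row of the observation table: weights of prefix . suffix.
    return tuple(weight(prefix + e) for e in suffixes)

def proportional(r1, r2) -> bool:
    # Rows count as "the same state" if r1 = c * r2 for some scalar c;
    # this is where division over the weights is needed.
    c = None
    for a, b in zip(r1, r2):
        if b == 0.0:
            if a != 0.0:
                return False
        else:
            ratio = a / b
            if c is None:
                c = ratio
            elif abs(ratio - c) > 1e-12:
                return False
    return True

def close_table(prefixes, suffixes):
    # Closedness loop of L*: every one-letter extension's row must be
    # proportional to the row of some access prefix, else promote it.
    changed = True
    while changed:
        changed = False
        for p in list(prefixes):
            for a in ALPHABET:
                ext = p + a
                if not any(proportional(row(ext, suffixes), row(q, suffixes))
                           for q in prefixes):
                    prefixes.append(ext)
                    changed = True
    return prefixes

# Each surviving prefix names one state of the learned deterministic WFSA.
print(close_table([""], [""]))   # -> ['', 'a'] for the toy target
```

Identifying rows that differ only by a scalar is what ties the learner to minimization: each proportionality class of rows becomes one state of the learned automaton.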
Related papers
- LLMs as Probabilistic Minimally Adequate Teachers for DFA Learning [11.037017229299607]
The emergence of intelligence in large language models (LLMs) has inspired investigations into their integration into automata learning.
This paper introduces the probabilistic Minimally Adequate Teacher (pMAT) formulation.
We develop techniques to improve answer accuracy and ensure the correctness of the learned automata.
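A simple way to picture a probabilistic teacher is sketched below; the unreliable oracle `noisy_member` (standing in for an LLM) and the majority-vote wrapper are assumptions for illustration, not the paper's pMAT formulation:

```python
import random
from collections import Counter

def noisy_member(w: str, flip_prob: float = 0.1) -> bool:
    # Stand-in for an LLM answering a membership query, wrong with
    # probability 0.1; toy ground truth: accept even numbers of "a"s.
    truth = w.count("a") % 2 == 0
    return truth if random.random() > flip_prob else not truth

def membership(w: str, votes: int = 9) -> bool:
    # Repeated queries plus majority voting drive the error rate down.
    tally = Counter(noisy_member(w) for _ in range(votes))
    return tally[True] > tally[False]

print(membership("aa"), membership("a"))   # almost always: True False
```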
arXiv Detail & Related papers (2024-08-06T07:12:09Z)
- Can Large Language Models Play Games? A Case Study of A Self-Play Approach [61.15761840203145]
Large Language Models (LLMs) harness extensive data from the Internet, storing a broad spectrum of prior knowledge.
Monte-Carlo Tree Search (MCTS) is a search algorithm that provides reliable decision-making solutions.
This work introduces an innovative approach that bolsters LLMs with MCTS self-play to efficiently resolve turn-based zero-sum games.
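For reference, the selection rule at the heart of MCTS fits in a few lines; this is the textbook UCB1-style rule with an assumed `(total_value, visit_count)` layout, not the paper's system:

```python
import math

def uct_select(children, c=1.4):
    # children: list of (total_value, visit_count) pairs, one per move.
    total = sum(n for _, n in children)
    def score(i):
        v, n = children[i]
        if n == 0:
            return float("inf")            # expand unvisited moves first
        return v / n + c * math.sqrt(math.log(total) / n)
    return max(range(len(children)), key=score)

print(uct_select([(3.0, 5), (1.0, 1), (0.0, 0)]))  # -> 2 (unvisited child)
```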
arXiv Detail & Related papers (2024-03-08T19:16:29Z)
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon > 0$ is the target accuracy.
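Written out, the hypothesis class is a linear combination of $k$ ReLU activations on Gaussian inputs; the abstract does not fix notation, so the coefficients $c_i$ and directions $v_i$ below are assumptions:

```latex
% Hypothetical parametrization (not fixed by the abstract above).
f(x) \;=\; \sum_{i=1}^{k} c_i \, \mathrm{ReLU}\big(\langle v_i, x \rangle\big),
\qquad x \sim \mathcal{N}(0, I_d), \; x \in \mathbb{R}^{d}.
```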
arXiv Detail & Related papers (2023-07-24T14:37:22Z)
- Sharper Model-free Reinforcement Learning for Average-reward Markov Decision Processes [21.77276136591518]
We develop provably efficient model-free reinforcement learning (RL) algorithms for Markov Decision Processes (MDPs).
In the simulator setting, we propose a model-free RL algorithm that finds an $\epsilon$-optimal policy using $\widetilde{O}\left(\frac{SA\,\mathrm{sp}(h^*)}{\epsilon^2}+\frac{S^2A\,\mathrm{sp}(h^*)}{\epsilon^2}\right)$ samples.
arXiv Detail & Related papers (2023-06-28T17:43:19Z)
- When are Local Queries Useful for Robust Learning? [25.832511407411637]
We study learning models where the learner is given more power through the use of local queries.
We give the first distribution-free algorithms that perform robust empirical risk minimization.
We finish by giving robust learning algorithms for halfspaces on $\{0,1\}^n$ and then obtaining robustness guarantees for halfspaces in $\mathbb{R}^n$ against precision-bounded adversaries.
arXiv Detail & Related papers (2022-10-12T11:04:22Z)
- Efficient $\Phi$-Regret Minimization in Extensive-Form Games via Online Mirror Descent [49.93548413166884]
$\Phi$-Hedge is a generic algorithm capable of learning a large class of equilibria for Normal-Form Games (NFGs).
We show that $\Phi$-Hedge can be directly used to learn Nash Equilibria (zero-sum settings), Normal-Form Coarse Correlated Equilibria (NFCCE), and Extensive-Form Correlated Equilibria (EFCE) in Extensive-Form Games (EFGs).
We prove that, in those settings, the $\Phi$-Hedge algorithms are equivalent to standard Online Mirror Descent (OMD) algorithms.
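For reference, the generic OMD step these equivalences refer to can be written as follows; the regularizer $\psi$ and step size $\eta$ are standard symbols, not the paper's specific choices:

```latex
% Generic Online Mirror Descent step: linearized loss g_t plus a
% Bregman divergence D_psi to the previous iterate.
x_{t+1} \;=\; \operatorname*{arg\,min}_{x \in \mathcal{X}}
  \; \eta \, \langle g_t, x \rangle + D_{\psi}(x, x_t).
```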
arXiv Detail & Related papers (2022-05-30T17:58:06Z)
- An Operator Splitting View of Federated Learning [23.99238431431463]
In the past few years, the federated learning ($\texttt{FL}$) community has witnessed a proliferation of new $\texttt{FL}$ algorithms.
Viewing them through operator splitting lets us compare different algorithms with ease, relate them to previous convergence results, and uncover new algorithmic variants.
The unification also points to a way to accelerate $\texttt{FL}$ algorithms without any overhead.
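As one concrete instance of the kind of $\texttt{FL}$ update such a unified view covers, here is a minimal FedAvg-style round on toy quadratic clients; all names and the toy setup are assumptions, not the paper's operator-splitting formulation:

```python
def local_step(w, grad_fn, lr=0.1, steps=5):
    # Client-side: a few local SGD steps from the current global model.
    for _ in range(steps):
        w = [wi - lr * gi for wi, gi in zip(w, grad_fn(w))]
    return w

def fedavg_round(w_global, client_grads):
    # Server-side: broadcast, let each client run locally, then average.
    updates = [local_step(list(w_global), g) for g in client_grads]
    return [sum(ws) / len(updates) for ws in zip(*updates)]

# Toy quadratic clients: client i minimizes ||w - t_i||^2, gradient 2(w - t_i).
targets = [[1.0, 0.0], [0.0, 1.0]]
grads = [lambda w, t=t: [2 * (wi - ti) for wi, ti in zip(w, t)] for t in targets]

w = [0.0, 0.0]
for _ in range(10):
    w = fedavg_round(w, grads)
print(w)  # converges to the average of the client optima, ~[0.5, 0.5]
```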
arXiv Detail & Related papers (2021-08-12T21:22:06Z)
- Randomized Exploration for Reinforcement Learning with General Value Function Approximation [122.70803181751135]
We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm.
Our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises.
We complement the theory with an empirical evaluation across known difficult exploration tasks.
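The core idea, perturbing the regression targets so the fitted value function is itself randomized, can be sketched in one dimension; this toy least-squares example is illustrative, not the paper's algorithm:

```python
import random

def perturbed_least_squares(xs, ys, noise_std=0.5):
    # Perturb the regression targets with i.i.d. scalar Gaussian noise,
    # then solve ordinary least squares (1-D, no intercept, via the
    # normal equation): the fitted value function is itself randomized.
    ys_noisy = [y + random.gauss(0.0, noise_std) for y in ys]
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys_noisy))
    return sxy / sxx

xs, ys = [1.0, 2.0, 3.0], [1.1, 1.9, 3.2]
print(perturbed_least_squares(xs, ys))  # a different slope each run:
                                        # that randomness drives exploration
```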
arXiv Detail & Related papers (2021-06-15T02:23:07Z)
- Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, a discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an improved algorithm with a sharper sample complexity.
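As a reminder of what "model-free" means here, a tabular Q-learning loop updates values from sampled transitions alone, never estimating the transition matrix; the toy MDP below is illustrative and omits the paper's clipping machinery:

```python
import random

S, A, gamma = 2, 2, 0.9

def step(s, a):
    # Deterministic toy dynamics: action 0 stays put, action 1 switches
    # state; reward 1 for landing in state 1.
    s2 = s if a == 0 else 1 - s
    return (1.0 if s2 == 1 else 0.0), s2

Q = [[0.0] * A for _ in range(S)]
s = 0
for _ in range(5000):
    # epsilon-greedy behavior policy
    a = random.randrange(A) if random.random() < 0.1 else max(range(A), key=lambda a: Q[s][a])
    r, s2 = step(s, a)
    Q[s][a] += 0.1 * (r + gamma * max(Q[s2]) - Q[s][a])
    s = s2

print([[round(q, 2) for q in row] for row in Q])  # drifts toward Q* = [[9, 10], [10, 9]]
```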
arXiv Detail & Related papers (2020-06-06T13:34:41Z)