Symbolic Feedforward Networks for Probabilistic Finite Automata: Exact Simulation and Learnability
- URL: http://arxiv.org/abs/2509.10034v2
- Date: Tue, 23 Sep 2025 06:22:48 GMT
- Title: Symbolic Feedforward Networks for Probabilistic Finite Automata: Exact Simulation and Learnability
- Authors: Sahil Rajesh Dhayalkar
- Abstract summary: We show that probabilistic finite automata can be exactly simulated using symbolic feedforward neural networks. We further show that these symbolic simulators are not only expressive but learnable.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a formal and constructive theory showing that probabilistic finite automata (PFAs) can be exactly simulated using symbolic feedforward neural networks. Our architecture represents state distributions as vectors and transitions as stochastic matrices, enabling probabilistic state propagation via matrix-vector products. This yields a parallel, interpretable, and differentiable simulation of PFA dynamics using soft updates, without recurrence. We formally characterize probabilistic subset construction, $\varepsilon$-closure, and exact simulation via layered symbolic computation, and prove equivalence between PFAs and specific classes of neural networks. We further show that these symbolic simulators are not only expressive but learnable: trained with standard gradient descent-based optimization on labeled sequence data, they recover the exact behavior of ground-truth PFAs. This learnability, formalized in Proposition 5.1, is the crux of this work. Our results unify probabilistic automata theory with neural architectures under a rigorous algebraic framework, bridging the gap between symbolic computation and deep learning.
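To make the matrix-vector formulation concrete, the following is a minimal sketch of PFA simulation in the style the abstract describes. The two-state automaton, its alphabet, and all transition probabilities are illustrative assumptions, not an example taken from the paper:

```python
import numpy as np

# Illustrative two-state PFA over the alphabet {a, b}; every number
# here is an assumption made for this sketch, not taken from the paper.
# T[s] is row-stochastic: T[s][i, j] = P(next state j | state i, symbol s).
T = {
    "a": np.array([[0.9, 0.1],
                   [0.4, 0.6]]),
    "b": np.array([[0.2, 0.8],
                   [0.7, 0.3]]),
}
pi = np.array([1.0, 0.0])        # initial state distribution
accept = np.array([0.0, 1.0])    # indicator vector of accepting states

def acceptance_probability(word):
    """One matrix-vector product per symbol: a feedforward, layered
    unrolling of the PFA with soft (probabilistic) state updates."""
    d = pi
    for s in word:
        d = d @ T[s]             # differentiable propagation step
    return float(d @ accept)

print(acceptance_probability("abba"))
```

Because every update is a differentiable linear map, the transition matrices can in principle be parameterized (e.g., via a row-wise softmax) and fit with gradient descent on labeled sequences, which is the learnability claim the paper formalizes in Proposition 5.1.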
Related papers
- VaSST: Variational Inference for Symbolic Regression using Soft Symbolic Trees [2.6521352889229446]
We introduce VaSST, a scalable probabilistic framework for symbolic regression based on variational inference. VaSST achieves superior performance in both structural recovery and predictive accuracy compared to state-of-the-art symbolic regression methods.
arXiv Detail & Related papers (2026-02-27T00:07:31Z)
- Neural Networks as Universal Finite-State Machines: A Constructive Feedforward Simulation Framework for NFAs [0.0]
This work establishes a new bridge between symbolic automata theory and modern neural architectures. We show that feedforward networks can perform precise, interpretable, and trainable symbolic computation.
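As a hedged illustration of the kind of construction this summary describes, the boolean analogue of the PFA simulation above replaces probabilities with reachability bits; the three-state NFA below is an invented example, not one from the paper:

```python
import numpy as np

# Invented three-state NFA over {a, b}; M[s][i, j] = 1 iff state j is
# reachable from state i on symbol s (an assumption for this sketch).
M = {
    "a": np.array([[1, 1, 0],
                   [0, 0, 1],
                   [0, 0, 0]]),
    "b": np.array([[0, 0, 0],
                   [0, 1, 1],
                   [1, 0, 0]]),
}
start = np.array([1, 0, 0])
accept = np.array([0, 0, 1])

def nfa_accepts(word):
    """Layered subset simulation: each step is a boolean matrix-vector
    product (an OR of ANDs), i.e. exactly what a threshold layer computes."""
    reach = start
    for s in word:
        reach = (reach @ M[s] > 0).astype(int)   # threshold at 1
    return bool(reach @ accept)

print(nfa_accepts("ab"))   # True: 0 -a-> 1 -b-> 2
```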
arXiv Detail & Related papers (2025-05-30T01:18:35Z)
- Neural Networks as Universal Finite-State Machines: A Constructive Deterministic Finite Automaton Theory [0.0]
We establish feedforward neural networks as universal finite-state machines (N-FSMs). Our results prove that finite-depth ReLU and threshold networks can exactly simulate deterministic finite automata (DFAs). We formalize the expressivity boundary: fixed-depth feedforward networks cannot recognize non-regular languages, which require memory.
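A minimal sketch of how such an exact simulation can look, assuming the standard one-hot encoding of states and a two-layer threshold construction (AND gates over (state, symbol) pairs, then OR gates routing to the successor state); the parity DFA is an invented example:

```python
import numpy as np

step = lambda z: (z > 0).astype(float)   # Heaviside threshold unit

# Invented example: the 2-state parity DFA over {0, 1}.
n_states, n_syms = 2, 2
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Layer 1: one AND unit per (state, symbol) pair.
W1 = np.zeros((n_states * n_syms, n_states + n_syms))
for i in range(n_states):
    for s in range(n_syms):
        W1[i * n_syms + s, i] = 1.0             # reads q_i
        W1[i * n_syms + s, n_states + s] = 1.0  # reads sym_s
b1 = -1.5 * np.ones(n_states * n_syms)

# Layer 2: OR units routing each pair (i, s) to delta(i, s).
W2 = np.zeros((n_states, n_states * n_syms))
for (i, s), j in delta.items():
    W2[j, i * n_syms + s] = 1.0
b2 = -0.5 * np.ones(n_states)

def dfa_step(q, s):
    x = np.concatenate([q, np.eye(n_syms)[s]])  # one-hot (state, symbol)
    h = step(W1 @ x + b1)                       # h[(i,s)] = q_i AND sym_s
    return step(W2 @ h + b2)                    # one-hot successor state

q = np.eye(n_states)[0]
for s in [1, 1, 1]:
    q = dfa_step(q, s)
print(q)   # [0. 1.]: an odd number of 1s was read
```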
arXiv Detail & Related papers (2025-05-16T21:01:34Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned through unsupervised learning rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Learning Stochastic Dynamics with Statistics-Informed Neural Network [0.4297070083645049]
We introduce a machine-learning framework named statistics-informed neural network (SINN) for learning dynamics from data.
We devise mechanisms for training the neural network model to reproduce the correct statistical behavior of a target process.
We show that the obtained reduced-order model can be trained on temporally coarse-grained data and hence is well suited for rare-event simulations.
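The summary does not spell out the training mechanism; as a loudly-labeled guess at the general idea, one way to score "correct statistical behavior" is to penalize the mismatch between sample autocorrelations of model and target trajectories (the paper's actual objective may differ):

```python
import numpy as np

def autocorr(x, max_lag):
    """Normalized sample autocorrelation of a 1-D trajectory."""
    x = x - x.mean()
    var = (x * x).mean()
    return np.array([(x[:-k] * x[k:]).mean() / var
                     for k in range(1, max_lag + 1)])

def statistics_loss(model_traj, target_traj, max_lag=10):
    """Squared mismatch of correlation structure; in an autodiff
    framework, gradients of this loss would drive statistics-matched
    training of the model."""
    diff = autocorr(model_traj, max_lag) - autocorr(target_traj, max_lag)
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
target = np.cumsum(rng.standard_normal(1000)) * 0.1  # toy correlated target
model = rng.standard_normal(1000)                    # white-noise "model"
print(statistics_loss(model, target))                # large: statistics differ
```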
arXiv Detail & Related papers (2022-02-24T18:21:01Z)
- Amortised Likelihood-free Inference for Expensive Time-series Simulators with Signatured Ratio Estimation [1.675857332621569]
Simulation models of complex dynamics in the natural and social sciences commonly lack a tractable likelihood function.
Recent advances in machine learning have introduced novel algorithms for estimating otherwise intractable likelihood functions.
We propose a kernel classifier for sequential data using path signatures based on the recently introduced signature kernel.
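For intuition, here is a hedged sketch of path-signature features truncated at depth 2, which an ordinary kernel (e.g. RBF) could consume; this finite truncation only approximates the signature kernel named in the summary:

```python
import numpy as np

def signature_level2(path):
    """Exact depth-2 signature of a piecewise-linear path (shape [T, d]):
    level 1 is the total increment; level 2 collects the iterated
    integrals of (x_i - x_i(0)) against dx_j."""
    dx = np.diff(path, axis=0)             # segment increments
    s1 = dx.sum(axis=0)                    # level-1 terms
    prefix = np.cumsum(dx, axis=0) - dx    # x(t-1) - x(0) at each segment
    s2 = prefix.T @ dx + 0.5 * dx.T @ dx   # level-2 iterated integrals
    return np.concatenate([s1, s2.ravel()])

rng = np.random.default_rng(1)
path = rng.standard_normal((50, 2)).cumsum(axis=0)   # toy 2-D time series
print(signature_level2(path))   # 2 + 4 = 6 features for a classifier
```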
arXiv Detail & Related papers (2022-02-23T15:59:34Z)
- Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that a likelihood-ratio loss with interarrival-time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z)
- Network Classifiers Based on Social Learning [71.86764107527812]
We propose a new way of combining independently trained classifiers over space and time.
The proposed architecture is able to improve prediction performance over time with unlabeled data.
We show that this strategy results in consistent learning with high probability, and that it yields a structure that is robust against poorly trained classifiers.
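A minimal sketch of one such combination rule, assuming the classic social-learning recipe of a local Bayesian update followed by geometric averaging of neighbors' beliefs; the graph weights and likelihoods are invented for illustration:

```python
import numpy as np

# Toy setup: 3 agents, 2 classes. A is a doubly-stochastic matrix of
# combination weights over the agent graph (an invented example).
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

def social_learning_step(log_beliefs, likelihoods):
    """Adapt: local Bayesian update with each agent's own likelihoods.
    Combine: geometric averaging of neighbors (averaging in log space)."""
    adapted = log_beliefs + np.log(likelihoods)   # per-agent update
    combined = A @ adapted                        # pool over neighbors
    # renormalize each agent's belief back to a distribution
    combined -= np.log(np.exp(combined).sum(axis=1, keepdims=True))
    return combined

log_mu = np.log(np.full((3, 2), 0.5))             # uniform initial beliefs
rng = np.random.default_rng(2)
for _ in range(20):
    # class 0 is true: each classifier slightly favors it on average
    lik = np.clip(rng.normal([[0.6, 0.4]] * 3, 0.05), 1e-3, 1.0)
    log_mu = social_learning_step(log_mu, lik)
print(np.exp(log_mu))   # beliefs concentrate on class 0 over time
```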
arXiv Detail & Related papers (2020-10-23T11:18:20Z)
- Predictive Coding Approximates Backprop along Arbitrary Computation Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
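To illustrate the core claim on the simplest possible graph, here is a hedged two-layer linear sketch: a hidden activity is relaxed to minimize a local prediction-error energy, and the resulting purely local weight update aligns with the backprop gradient (the paper's formulation covers arbitrary computation graphs; the dimensions and weights below are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
W1 = 0.1 * rng.standard_normal((4, 3))   # small weights: the regime where
W2 = 0.1 * rng.standard_normal((2, 4))   # the approximation is tightest
x, y = rng.standard_normal(3), rng.standard_normal(2)

# Inference: relax z to minimize E = 0.5||z - W1 x||^2 + 0.5||y - W2 z||^2
z = W1 @ x                               # start at the feedforward prediction
for _ in range(500):
    e1 = z - W1 @ x                      # local bottom-up prediction error
    e2 = y - W2 @ z                      # local top-down prediction error
    z -= 0.1 * (e1 - W2.T @ e2)          # gradient of E with respect to z

# Local, Hebbian-style update for W1 vs. the backprop descent direction.
pc_update = np.outer(e1, x)              # uses only locally available errors
bp_dir = np.outer(W2.T @ (y - W2 @ (W1 @ x)), x)   # -dL/dW1 under backprop
cos = (pc_update * bp_dir).sum() / (
    np.linalg.norm(pc_update) * np.linalg.norm(bp_dir))
print(f"cosine with backprop update: {cos:.4f}")   # expected close to 1
```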
arXiv Detail & Related papers (2020-06-07T15:35:47Z)
- Formal Synthesis of Lyapunov Neural Networks [61.79595926825511]
We propose an automatic and formally sound method for synthesising Lyapunov functions.
We employ a counterexample-guided approach where a numerical learner and a symbolic verifier interact to construct provably correct Lyapunov neural networks.
Our method synthesises Lyapunov functions faster and over wider spatial domains than the alternatives, while providing stronger or equal guarantees.
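A heavily simplified sketch of the counterexample-guided loop, for a linear system and a quadratic candidate V(x) = x^T P x; crucially, the "verifier" below is random sampling, a toy stand-in for the formally sound symbolic verifier the paper uses, and the system itself is invented:

```python
import numpy as np

A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])      # invented stable system x' = A x
rng = np.random.default_rng(4)
P = np.diag([1.0, 0.05])          # deliberately poor initial candidate
counterexamples = []

def violates(P, x):
    """Lyapunov conditions: V(x) > 0 and dV/dt = x^T(A^T P + P A)x < 0."""
    return x @ P @ x <= 1e-9 or x @ (A.T @ P + P @ A) @ x >= -1e-9

for k in range(100):
    # Learner: gradient nudges on P that repair stored counterexamples.
    for x in counterexamples:
        if x @ P @ x <= 1e-9:
            P += 0.1 * np.outer(x, x)                 # raise V at x
        if x @ (A.T @ P + P @ A) @ x >= -1e-9:
            xxT = np.outer(x, x)
            P -= 0.1 * (A @ xxT + xxT @ A.T)          # steepen the decrease
    # Verifier (toy): search sampled states for a violation.
    samples = rng.uniform(-2, 2, size=(500, 2))
    bad = [x for x in samples
           if np.linalg.norm(x) > 0.1 and violates(P, x)]
    if not bad:
        print(f"no sampled violations after {k} rounds:\n{P}")
        break
    counterexamples.append(bad[0])
else:
    print("not certified within 100 rounds:\n", P)
```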
arXiv Detail & Related papers (2020-03-19T17:21:02Z)