Distilling Formal Logic into Neural Spaces: A Kernel Alignment Approach for Signal Temporal Logic
- URL: http://arxiv.org/abs/2603.05198v1
- Date: Thu, 05 Mar 2026 14:08:25 GMT
- Title: Distilling Formal Logic into Neural Spaces: A Kernel Alignment Approach for Signal Temporal Logic
- Authors: Sara Candussio, Gabriele Sarti, Gaia Saveri, Luca Bortolussi
- Abstract summary: We introduce a framework for learning continuous neural representations of formal specifications. We distill a symbolic robustness kernel into a Transformer encoder. The encoder produces embeddings in a single forward pass, effectively mimicking the kernel's logic at a fraction of its computational cost.
- Score: 6.419602857618508
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a framework for learning continuous neural representations of formal specifications by distilling the geometry of their semantics into a latent space. Existing approaches rely either on symbolic kernels -- which preserve behavioural semantics but are computationally prohibitive, anchor-dependent, and non-invertible -- or on syntax-based neural embeddings that fail to capture underlying structures. Our method bridges this gap: using a teacher-student setup, we distill a symbolic robustness kernel into a Transformer encoder. Unlike standard contrastive methods, we supervise the model with a continuous, kernel-weighted geometric alignment objective that penalizes errors in proportion to their semantic discrepancies. Once trained, the encoder produces embeddings in a single forward pass, effectively mimicking the kernel's logic at a fraction of its computational cost. We apply our framework to Signal Temporal Logic (STL), demonstrating that the resulting neural representations faithfully preserve the semantic similarity of STL formulae, accurately predict robustness and constraint satisfaction, and remain intrinsically invertible. Our proposed approach enables highly efficient, scalable neuro-symbolic reasoning and formula reconstruction without repeated kernel computation at runtime.
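As a rough illustration of the kernel-weighted geometric alignment idea described in the abstract, the sketch below aligns a student encoder's Gram matrix with a precomputed symbolic kernel matrix. The exact loss form, the weighting scheme, and the `kernel_alignment_loss` helper are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed loss form), not the paper's implementation.
import torch
import torch.nn.functional as F

def kernel_alignment_loss(student_emb: torch.Tensor,
                          teacher_K: torch.Tensor) -> torch.Tensor:
    """Align the student's embedding geometry with a precomputed symbolic kernel.

    student_emb: (B, d) formula embeddings from the Transformer encoder (student).
    teacher_K:   (B, B) symbolic robustness-kernel values (teacher), assumed in [0, 1].
    """
    # Student Gram matrix from cosine similarities.
    z = F.normalize(student_emb, dim=-1)
    student_K = z @ z.T
    # Squared geometric discrepancy between student and teacher similarities ...
    sq_err = (student_K - teacher_K) ** 2
    # ... weighted so that errors on semantically close pairs (high kernel value)
    # are penalised more than errors on semantically distant ones.
    weights = teacher_K.clamp(min=0.0) + 1e-3
    return (weights * sq_err).sum() / weights.sum()

# Usage: precompute teacher_K once per batch of formulae with the symbolic kernel,
# then train the encoder so that a single forward pass reproduces that geometry.
# loss = kernel_alignment_loss(encoder(tokenised_formulae), teacher_K)
```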
Related papers
- Weights to Code: Extracting Interpretable Algorithms from the Discrete Transformer [65.38883376379812]
We propose the Discrete Transformer, an architecture engineered to bridge the gap between continuous representations and discrete symbolic logic. Empirically, the Discrete Transformer not only achieves performance comparable to RNN-based baselines but crucially extends interpretability to continuous variable domains.
arXiv Detail & Related papers (2026-01-09T12:49:41Z) - A Unified Cortical Circuit Model with Divisive Normalization and Self-Excitation for Robust Representation and Memory Maintenance [2.705743343600661]
We introduce a recurrent neural circuit that combines divisive normalization with self-excitation to achieve robust encoding. We demonstrate the model's versatility in two canonical tasks. This work establishes a unified mathematical framework that bridges noise suppression, working memory, and approximate Bayesian inference.
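The sketch below simulates a generic divisive-normalization update with an added self-excitation term; the dynamics, parameter values, and the `simulate` helper are illustrative assumptions rather than the paper's circuit model.

```python
# Minimal sketch of divisive normalization with self-excitation (assumed dynamics).
import numpy as np

def simulate(x, steps=200, dt=0.05, alpha=0.6, sigma=1.0, gamma=2.0):
    """x: (n,) feedforward drive; returns approximate steady-state rates r (n,)."""
    r = np.zeros_like(x, dtype=float)
    for _ in range(steps):
        drive = x + alpha * r                           # feedforward input + self-excitation
        num = np.maximum(drive, 0.0) ** gamma
        r_target = num / (sigma ** gamma + num.sum())   # divisive normalization pool
        r = r + dt * (r_target - r)                     # leaky relaxation toward the target
    return r

print(simulate(np.array([1.0, 3.0, 0.5])))
```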
arXiv Detail & Related papers (2025-08-18T08:00:24Z) - Quantum Spectral Reasoning: A Non-Neural Architecture for Interpretable Machine Learning [0.0]
We propose a novel machine learning architecture that departs from conventional neural network paradigms. We use quantum spectral methods, specifically Padé approximants and the Lanczos algorithm, for interpretable signal analysis and symbolic reasoning. Our results show that this spectral-symbolic architecture achieves competitive accuracy while maintaining interpretability and data efficiency.
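For context, a textbook Lanczos iteration is sketched below; how the paper couples it with Padé approximants for symbolic reasoning is not reproduced here.

```python
# Standard Lanczos tridiagonalization for a symmetric matrix (textbook form).
import numpy as np

def lanczos(A, k, rng=np.random.default_rng(0)):
    """Return (alphas, betas): diagonal and off-diagonal of a k x k tridiagonal approximation of symmetric A."""
    n = A.shape[0]
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(n)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(k):
        w = A @ q - beta * q_prev        # apply A and orthogonalise against previous vector
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        q_prev, q = q, w / (beta + 1e-12)
    return np.array(alphas), np.array(betas[:-1])

# The eigenvalues of the tridiagonal matrix approximate A's extremal spectrum.
```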
arXiv Detail & Related papers (2025-08-05T07:16:45Z) - Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation for Neurosymbolic Reasoning [73.18052192964349]
We develop a theoretical framework that explains how discrete symbolic structures can emerge naturally from continuous neural network training dynamics. By lifting neural parameters to a measure space and modeling training as a Wasserstein gradient flow, we show that under geometric constraints, the parameter measure $\mu_t$ undergoes two concurrent phenomena.
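For reference, the measure-space view described above is usually written as a Wasserstein gradient flow; the generic form below is a standard textbook statement, not the paper's specific energy functional or geometric constraints.

```latex
% Training viewed as the gradient flow of an energy F over parameter measures \mu_t
% in Wasserstein space (generic form; the paper's specific F is not reproduced here).
\[
  \partial_t \mu_t \;=\; \nabla \cdot \Big( \mu_t \, \nabla \frac{\delta F}{\delta \mu}[\mu_t] \Big)
\]
```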
arXiv Detail & Related papers (2025-06-26T22:40:30Z) - Formally Verified Neurosymbolic Trajectory Learning via Tensor-based Linear Temporal Logic on Finite Traces [0.0]
We present a novel formalisation of tensor semantics for linear temporal logic on finite traces (LTLf). We demonstrate that this formalisation can be integrated into a neurosymbolic learning process by defining and verifying a differentiable loss function for the constraints. We show that, by using this loss, the process learns to satisfy pre-specified logical constraints.
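A minimal sketch of one common way to make finite-trace temporal constraints differentiable (smooth min/max over the trace) is shown below; the paper's formally verified tensor semantics and exact loss are not reproduced, and the helper names are assumptions.

```python
# Soft relaxation of "always" / "eventually" over a finite trace via logsumexp.
import torch

def soft_always(sat: torch.Tensor, temp: float = 10.0) -> torch.Tensor:
    """sat: (T,) per-timestep satisfaction scores; smooth minimum over the trace."""
    return -torch.logsumexp(-temp * sat, dim=0) / temp

def soft_eventually(sat: torch.Tensor, temp: float = 10.0) -> torch.Tensor:
    """Smooth maximum over the trace."""
    return torch.logsumexp(temp * sat, dim=0) / temp

# Constraint "always (x > 0)" on a learned trajectory x: penalize (soft) violation.
x = torch.randn(20, requires_grad=True)
loss = torch.relu(-soft_always(x))      # zero when the constraint is softly satisfied
loss.backward()
```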
arXiv Detail & Related papers (2025-01-23T14:43:12Z) - stl2vec: Semantic and Interpretable Vector Representation of Temporal Logic [0.5956301166481089]
We propose a semantically grounded vector representation (feature embedding) of logic formulae.
We compute continuous embeddings of formulae with several desirable properties.
We demonstrate the efficacy of the approach in two tasks: learning model checking and integration into a neurosymbolic framework.
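One plausible way to build such semantically grounded embeddings is sketched below: each formula is represented by its robustness profile on a bank of sampled signals and then reduced with PCA. The signal distribution, the toy formulae, and the use of PCA are illustrative assumptions, not the paper's exact construction.

```python
# Semantic formula embeddings from robustness profiles (assumed construction).
import numpy as np

rng = np.random.default_rng(0)
signals = rng.standard_normal((500, 50))           # 500 sampled 1-D signals, 50 steps each

def rob_always_gt(signal, c):                      # robustness of "always (x > c)"
    return np.min(signal - c)

def rob_eventually_gt(signal, c):                  # robustness of "eventually (x > c)"
    return np.max(signal - c)

formulae = [lambda s, c=c: rob_always_gt(s, c) for c in (-1.0, 0.0, 1.0)] + \
           [lambda s, c=c: rob_eventually_gt(s, c) for c in (-1.0, 0.0, 1.0)]

# Each formula -> robustness profile over the signal bank (its semantic "feature vector").
profiles = np.array([[f(s) for s in signals] for f in formulae])   # (6, 500)

# Low-dimensional continuous embedding via PCA on the profiles.
profiles -= profiles.mean(axis=0, keepdims=True)
_, _, vt = np.linalg.svd(profiles, full_matrices=False)
embeddings = profiles @ vt[:2].T                   # (6, 2) embeddings of the formulae
print(embeddings)
```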
arXiv Detail & Related papers (2024-05-23T10:04:56Z) - Semantic Loss Functions for Neuro-Symbolic Structured Prediction [74.18322585177832]
We discuss the semantic loss, which injects knowledge about the structure of the output space, defined symbolically, into training.
It is agnostic to the arrangement of the symbols, and depends only on the semantics expressed thereby.
It can be combined with both discriminative and generative neural models.
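The sketch below shows a semantic loss for a simple exactly-one constraint, computed as the negative log-probability that a sample from the model's independent Bernoulli outputs satisfies the constraint; larger constraints generally require weighted model counting with logical circuits, which is omitted here, and the helper name is an assumption.

```python
# Semantic loss for an "exactly-one" constraint over n binary outputs.
import torch

def semantic_loss_exactly_one(p: torch.Tensor) -> torch.Tensor:
    """p: (n,) predicted probabilities that each of n mutually exclusive labels is on."""
    n = p.shape[0]
    sat = torch.zeros((), dtype=p.dtype)
    # Sum the probability mass of all satisfying assignments: exactly one variable true.
    for i in range(n):
        mask = torch.ones(n, dtype=torch.bool)
        mask[i] = False
        sat = sat + p[i] * torch.prod(1.0 - p[mask])
    return -torch.log(sat + 1e-12)

logits = torch.randn(5, requires_grad=True)
loss = semantic_loss_exactly_one(torch.sigmoid(logits))
loss.backward()                      # gradients push the outputs toward one-hot structure
```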
arXiv Detail & Related papers (2024-05-12T22:18:25Z) - Scalable Learning of Latent Language Structure With Logical Offline Cycle Consistency [71.42261918225773]
Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text.
As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model.
arXiv Detail & Related papers (2023-05-31T16:47:20Z) - Towards Invertible Semantic-Preserving Embeddings of Logical Formulae [1.0152838128195467]
Learning and optimising logic requirements and rules has always been an important problem in Artificial Intelligence.
Current methods are able to construct effective semantic-preserving embeddings via kernel methods, but the map they define is not invertible.
In this work we address this problem, learning how to invert such an embedding by leveraging deep architectures based on the Graph Variational Autoencoder framework.
arXiv Detail & Related papers (2023-05-03T10:49:01Z) - Neural-Symbolic Recursive Machine for Systematic Generalization [113.22455566135757]
We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS).
NSR integrates neural perception, syntactic parsing, and semantic reasoning.
We evaluate NSR's efficacy across four challenging benchmarks designed to probe systematic generalization capabilities.
arXiv Detail & Related papers (2022-10-04T13:27:38Z) - MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
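A simplified, first-order sketch of such a gradient-based inner adaptation loop for a signed distance network is given below; the architecture, losses, and second-order meta-gradient terms of MetaSDF are not reproduced, and the helper names are assumptions.

```python
# First-order sketch of a meta-learned SDF being adapted to one shape.
import copy
import torch
import torch.nn as nn

meta_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))  # x -> sdf(x)

def adapt_to_shape(points, sdf_targets, steps=5, lr=1e-2):
    """Specialise the shared meta-initialisation to one shape with a few gradient steps."""
    net = copy.deepcopy(meta_net)
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(points).squeeze(-1) - sdf_targets) ** 2).mean()
        loss.backward()
        opt.step()
    return net                      # shape-specific SDF after a handful of updates

# Usage: sample points with their signed distances, adapt, then query anywhere in space.
pts, sdf = torch.randn(256, 3), torch.randn(256)
shape_net = adapt_to_shape(pts, sdf)
```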
arXiv Detail & Related papers (2020-06-17T05:14:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.