From Neural Networks to Logical Theories: The Correspondence between Fibring Modal Logics and Fibring Neural Networks
- URL: http://arxiv.org/abs/2509.23912v1
- Date: Sun, 28 Sep 2025 14:32:42 GMT
- Title: From Neural Networks to Logical Theories: The Correspondence between Fibring Modal Logics and Fibring Neural Networks
- Authors: Ouns El Harzli, Bernardo Cuenca Grau, Artur d'Avila Garcez, Ian Horrocks, Tarek R. Besold
- Abstract summary: Fibring of modal logics is a well-established formalism for combining countable families of modal logics into a single fibred language. Fibring of neural networks was introduced as a neurosymbolic framework for combining learning and reasoning in neural networks.
- Score: 17.474679381815026
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fibring of modal logics is a well-established formalism for combining countable families of modal logics into a single fibred language with common semantics, characterized by fibred models. Inspired by this formalism, fibring of neural networks was introduced as a neurosymbolic framework for combining learning and reasoning in neural networks. Fibring of neural networks uses the (pre-)activations of a trained network to evaluate a fibring function computing the weights of another network whose outputs are injected back into the original network. However, the exact correspondence between fibring of neural networks and fibring of modal logics was never formally established. In this paper, we close this gap by formalizing the idea of fibred models \emph{compatible} with fibred neural networks. Using this correspondence, we then derive non-uniform logical expressiveness results for Graph Neural Networks (GNNs), Graph Attention Networks (GATs) and Transformer encoders. Longer-term, the goal of this paper is to open the way for the use of fibring as a formalism for interpreting the logical theories learnt by neural networks with the tools of computational logic.
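As a concrete reading of the fibring mechanism described above, here is a minimal NumPy sketch: network A's pre-activations are fed to a fibring function that computes the weights of network B, and B's output is injected back into A. The shapes, the linear fibring function, and the additive injection point are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

W_A   = rng.normal(size=(4, 3))   # network A (assumed already trained)
Phi   = rng.normal(size=(12, 4))  # fibring function: A's pre-activations -> B's weights
w_out = rng.normal(size=4)

def fibred_forward(x):
    z_A = W_A @ x                    # pre-activations of network A
    W_B = (Phi @ z_A).reshape(4, 3)  # fibring: B's weights are computed from z_A
    y_B = relu(W_B @ x)              # evaluate network B with those weights
    h   = relu(z_A + y_B)            # inject B's output back into network A
    return w_out @ h

print(fibred_forward(np.array([1.0, -0.5, 2.0])))
```

Because B's weights depend on A's activations, the composite computes a function that neither network represents on its own, which is what the fibred-model semantics has to capture.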
Related papers
- Dense Neural Networks are not Universal Approximators [53.27010448621372]
We show that dense neural networks cannot approximate arbitrary continuous functions.
We consider ReLU neural networks subject to natural constraints on their weights and on the input and output dimensions.
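The paper's precise constraints are not reproduced here, but a hypothetical minimal sketch shows one classical obstruction of this kind: if the weights are constrained to be nonnegative, a dense ReLU network is monotone in each input and therefore cannot approximate any strictly decreasing function.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(1)

# Dense ReLU network whose weights are constrained to be nonnegative.
W1 = np.abs(rng.normal(size=(8, 1)))
W2 = np.abs(rng.normal(size=(1, 8)))

def net(x):
    return (W2 @ relu(W1 @ np.array([x]))).item()

# Nonnegative weights + monotone activation => monotone nondecreasing output,
# so no choice of such weights can approximate f(x) = -x on [-1, 1].
xs = np.linspace(-1.0, 1.0, 5)
print([round(net(x), 3) for x in xs])  # the values never decrease as x grows
```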
arXiv Detail & Related papers (2026-02-07T16:52:38Z) - Lecture Notes on Verifying Graph Neural Networks [10.812772606528172]
We first recall the connection between graph neural networks and logics such as first-order logic and graded modal logic.
We then present a modal logic in which counting modalities appear in linear inequalities in order to solve verification tasks on graph neural networks.
We describe an algorithm for the satisfiability problem of that logic.
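A schematic example of a formula combining counting modalities with a linear inequality is shown below; the precise syntax is defined in the lecture notes, so this rendering is an assumption.

```latex
% "Node x has at least twice as many successors satisfying p
%  as successors satisfying q, plus one."
\[
  \#\{\, y : x \to y,\ y \models p \,\}
  \;\ge\;
  2 \cdot \#\{\, y : x \to y,\ y \models q \,\} + 1
\]
```

Inequalities of this shape mirror what a GNN aggregation layer can compute over a node's neighbourhood, which is why such logics are natural targets for verification.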
arXiv Detail & Related papers (2025-10-13T16:57:20Z) - Neural Logic Networks for Interpretable Classification [3.9023554886892438]
We develop neural networks with an interpretable structure.
We generalize these networks with NOT operations and biases that take unobserved data into account.
Our method improves the state of the art in Boolean network discovery and is able to learn relevant, interpretable rules.
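A minimal sketch of what differentiable logic gates with NOT operations might look like follows; the product t-norm and the bias term are common choices, not necessarily the paper's parameterisation.

```python
import numpy as np

def soft_not(x):
    return 1.0 - x

def soft_and(xs, bias=0.0):
    # Product t-norm plus a bias that can absorb unobserved inputs.
    return float(np.clip(np.prod(xs) + bias, 0.0, 1.0))

def soft_or(xs, bias=0.0):
    return float(np.clip(1.0 - np.prod(1.0 - np.asarray(xs)) + bias, 0.0, 1.0))

# Rule: y = (a AND NOT b) OR c, evaluated on fuzzy truth values in [0, 1].
a, b, c = 0.9, 0.2, 0.1
y = soft_or([soft_and([a, soft_not(b)]), c])
print(round(y, 3))  # 0.748: the rule fires mostly through (a AND NOT b)
```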
arXiv Detail & Related papers (2025-08-11T16:49:56Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
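A minimal sketch of the encoding step, turning a small MLP into a graph whose nodes are neurons and whose edges carry the connecting weights as features; the sizes and the flat edge-list layout are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
sizes = [3, 4, 2]  # widths of input, hidden, and output layers
weights = [rng.normal(size=(sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]

# One graph node per neuron, one directed edge per weight.
layer_offsets = np.cumsum([0] + sizes[:-1]).tolist()
num_nodes = sum(sizes)

edges = []
for l, W in enumerate(weights):
    for j in range(W.shape[0]):          # target neuron in layer l + 1
        for i in range(W.shape[1]):      # source neuron in layer l
            src = layer_offsets[l] + i
            dst = layer_offsets[l + 1] + j
            edges.append((src, dst, W[j, i]))  # edge feature = weight value

print(num_nodes, "nodes,", len(edges), "edges")  # 9 nodes, 20 edges
```

A GNN run over this graph sees the network's parameters in a permutation-aware way, which is what makes the learned representation equivariant to neuron reorderings.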
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Conditional computation in neural networks: principles and research trends [48.14569369912931]
This article summarizes principles and ideas from the emerging area of applying conditional computation methods to the design of neural networks.
In particular, we focus on neural networks that can dynamically activate or de-activate parts of their computational graph conditioned on their input.
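A minimal sketch of such input-conditioned routing follows, in the style of a hard-gated mixture of experts; the gating rule and expert shapes are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(3)
experts = [rng.normal(size=(2, 4)) for _ in range(3)]  # candidate sub-graphs
W_gate  = rng.normal(size=(3, 4))                      # gating network

def forward(x):
    # The input selects which expert branch executes; the other
    # branches are skipped entirely, saving their computation.
    k = int(np.argmax(W_gate @ x))
    return relu(experts[k] @ x)

print(forward(rng.normal(size=4)))
```

At training time the hard argmax is typically replaced by a soft or stochastic gate so that gradients can flow to the gating network.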
arXiv Detail & Related papers (2024-03-12T11:56:38Z) - Web Neural Network with Complete DiGraphs [8.2727500676707]
Current neural networks have structures that vaguely mimic the brain, such as neurons, convolutions, and recurrence.
The model proposed in this paper adds further structural properties by introducing cycles into the neuron connections and removing the sequential nature commonly seen in layered networks.
Furthermore, the model has continuous input and output, inspired by spiking neural networks, which allows the network to learn a classification process rather than simply returning a final result.
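A minimal sketch of a layerless network over a complete digraph driven by a continuous input stream follows; the sigmoid update rule and the sizes are assumptions, not the paper's model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(4)
n = 6
# Complete digraph: every neuron feeds every other neuron, so the
# connection pattern contains cycles and there is no layer ordering.
W = rng.normal(size=(n, n)) * (1.0 - np.eye(n))

def run(inputs, steps=10):
    state = np.zeros(n)
    for t in range(steps):
        state = sigmoid(W @ state + inputs(t))  # all neurons update together
    return state

# Continuous input stream injected at neuron 0 at every step.
stream = lambda t: np.concatenate([[np.sin(t)], np.zeros(n - 1)])
print(np.round(run(stream), 3))
```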
arXiv Detail & Related papers (2024-01-07T05:12:10Z) - NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
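A toy sketch of the two-stage shape of such pipelines, with stand-ins for both the perception network and the symbolic learner; every name and the two-rule hypothesis space are hypothetical, and NeuralFastLAS itself trains the two components jointly over answer set programs.

```python
import numpy as np

rng = np.random.default_rng(5)

def neural_concept(x):
    # Stand-in for a trained perception network: maps a raw noisy
    # scalar to a latent symbolic concept ("even" or "odd").
    return "even" if int(round(float(x))) % 2 == 0 else "odd"

def symbolic_learner(examples):
    # Stand-in for the symbolic learner: picks the rule consistent with
    # the most labelled examples (real systems search ASP programs).
    rules = {"label_if_even": lambda c: c == "even",
             "label_if_odd":  lambda c: c == "odd"}
    return max(rules, key=lambda r: sum(rules[r](c) == y for c, y in examples))

raw = rng.normal(loc=[2.0, 3.0, 4.0, 7.0], scale=0.1)  # raw unstructured data
labels = [True, False, True, False]
examples = [(neural_concept(x), y) for x, y in zip(raw, labels)]
print(symbolic_learner(examples))  # -> label_if_even
```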
arXiv Detail & Related papers (2023-10-08T12:33:42Z) - Interpretable Fault Diagnosis of Rolling Element Bearings with Temporal Logic Neural Network [11.830457329372283]
This paper proposes a novel neural network structure called the temporal logic neural network (TLNN).
TLNN retains the desirable properties of traditional neural networks while also providing a formal interpretation of itself in a formal language.
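TLNN's exact construction is not reproduced here, but the underlying idea, evaluating a temporal logic formula with network-style operations, can be sketched with standard signal temporal logic robustness, where "eventually" and "always" become max and min neurons (trainable variants use soft-min/soft-max instead).

```python
import numpy as np

def eventually(r):  # max-neuron over the time axis
    return np.max(r)

def always(r):      # min-neuron over the time axis
    return np.min(r)

# Robustness of "eventually (x > 0.5) AND always (x < 2.0)" on a signal:
# positive robustness means the signal satisfies the formula.
signal = np.array([0.1, 0.4, 0.9, 1.2, 0.3])
rho = min(eventually(signal - 0.5), always(2.0 - signal))
print(round(float(rho), 3))  # 0.7 > 0, so the formula holds with margin 0.7
```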
arXiv Detail & Related papers (2022-04-15T11:54:30Z) - Optimal Approximation with Sparse Neural Networks and Applications [0.0]
We use deep sparsely connected neural networks to measure the complexity of a function class in $L^2(\mathbb{R}^d)$.
We also introduce a representation system, a countable collection of functions used to guide the construction of neural networks.
We then analyse the complexity of the class of $\beta$ cartoon-like functions using rate-distortion theory and the wedgelet construction.
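Optimal-approximation statements in this line of work typically take the following shape; the rendering below is a generic template under assumed notation, not the paper's theorem.

```latex
% Best L^2 error achievable over a class C by networks Phi with at
% most M nonzero weights, decaying at the optimal exponent gamma:
\[
  \sup_{f \in \mathcal{C}} \;
  \inf_{\Phi : \|\Phi\|_0 \le M} \;
  \| f - \Phi \|_{L^2(\mathbb{R}^d)}
  \;\lesssim\; M^{-\gamma}
\]
```

Rate-distortion arguments supply the matching lower bound: no encoder with M-bit descriptions, and hence no suitably quantized network with M weights, can beat the exponent dictated by the class.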
arXiv Detail & Related papers (2021-08-14T05:14:13Z) - FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called the Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
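A toy sketch of the feed-forward shape of this pipeline follows: a pretrained network emits symbols with confidences, and the symbolic step weighs examples by confidence when choosing a hypothesis. All names and the two-rule hypothesis space are hypothetical; FF-NSL itself searches ASP hypotheses with an ILP system.

```python
def perception(x):
    # Stand-in for a pretrained classifier returning (symbol, confidence).
    return ("pos", 0.9) if x > 0 else ("neg", 0.8)

def weighted_ilp(examples):
    # Stand-in for the ILP step: score each candidate rule by the total
    # confidence of the labelled examples it explains, keep the best.
    hyps = {"class_if_pos": lambda s: s == "pos",
            "class_if_neg": lambda s: s == "neg"}
    def score(h):
        return sum(conf for (s, conf), y in examples if hyps[h](s) == y)
    return max(hyps, key=score)

data = [(1.2, True), (-0.7, False), (2.5, True)]  # raw inputs with task labels
examples = [(perception(x), y) for x, y in data]
print(weighted_ilp(examples))  # -> class_if_pos
```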
arXiv Detail & Related papers (2021-06-24T15:38:34Z) - Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
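A minimal sketch of a weighted real-valued conjunction in the LNN style is given below; this is one common parameterisation and the framework's operator family is more general, but it shows the kind of logical feature an RL agent could consume as external knowledge.

```python
import numpy as np

def lnn_and(truths, weights, beta=1.0):
    # Weighted real-valued AND: behaves classically at {0, 1} and stays
    # differentiable in between, so weights and beta can be trained.
    truths, weights = np.asarray(truths), np.asarray(weights)
    return float(np.clip(beta - weights @ (1.0 - truths), 0.0, 1.0))

# "safe AND goal_visible" as a feature for a policy:
print(lnn_and([1.0, 1.0], [1.0, 1.0]))  # 1.0 -> both facts fully true
print(lnn_and([1.0, 0.0], [1.0, 1.0]))  # 0.0 -> one false fact kills the AND
print(lnn_and([0.9, 0.8], [1.0, 1.0]))  # 0.7 -> graded truth in between
```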
arXiv Detail & Related papers (2021-03-03T12:34:59Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, reflecting the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
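A minimal sketch of this idea: give every edge of a complete DAG over computational nodes a learnable parameter, and let each node aggregate its predecessors through sigmoid-squashed edge strengths so connectivity itself is differentiable. Sizes and the per-node operations are assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(6)
n = 5                               # computational nodes
alpha = rng.normal(size=(n, n))     # learnable edge parameters (one per edge)
ops = [rng.normal(size=(3, 3)) for _ in range(n)]  # fixed per-node transforms

def forward(x):
    outs = [relu(ops[0] @ x)]
    for j in range(1, n):
        gate = 1.0 / (1.0 + np.exp(-alpha[j, :j]))  # edge strengths into node j
        agg = sum(g * o for g, o in zip(gate, outs))
        outs.append(relu(ops[j] @ agg))
    return outs[-1]

print(forward(rng.normal(size=3)))
```

Training alpha by gradient descent then prunes or strengthens connections, effectively searching over wiring patterns without a discrete search.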
arXiv Detail & Related papers (2020-08-19T04:53:31Z)