Neural logic programs and neural nets
- URL: http://arxiv.org/abs/2406.11888v1
- Date: Thu, 13 Jun 2024 19:22:04 GMT
- Title: Neural logic programs and neural nets
- Authors: Christian Antić
- Abstract summary: We first define the answer set semantics of (boolean) neural nets and then introduce from first principles a class of neural logic programs and show that nets and programs are equivalent.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural-symbolic integration aims to combine the connectionist subsymbolic with the logical symbolic approach to artificial intelligence. In this paper, we first define the answer set semantics of (boolean) neural nets and then introduce from first principles a class of neural logic programs and show that nets and programs are equivalent.
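To make the abstract's correspondence concrete, here is a minimal illustrative sketch (not the paper's formal construction) of how a boolean threshold neuron can behave like a propositional logic program rule. The function names are my own, chosen for illustration only.

```python
# Illustrative sketch: a boolean threshold (McCulloch-Pitts style) unit
# and the propositional rules it mimics. This is NOT the paper's formal
# answer set semantics, only an intuition for the net/program equivalence.

def threshold_neuron(inputs, weights, threshold):
    """Boolean unit: fires iff the weighted sum of inputs reaches the threshold."""
    return sum(w * x for w, x in zip(weights, inputs)) >= threshold

# With weights (1, 1) and threshold 2 the unit behaves like the rule
#   head :- a, b.               (conjunction)
def head_and(a, b):
    return threshold_neuron([a, b], [1, 1], 2)

# With threshold 1 it behaves like the pair of rules
#   head :- a.   head :- b.     (disjunction)
def head_or(a, b):
    return threshold_neuron([a, b], [1, 1], 1)

print(head_and(1, 1), head_and(1, 0))  # True False
print(head_or(1, 0), head_or(0, 0))    # True False
```

Under this reading, evaluating the net on a boolean input assignment corresponds to firing the matching rules of the program.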
Related papers
- Lecture Notes on Verifying Graph Neural Networks [10.812772606528172]
We first recall the connection between graph neural networks and logics such as first-order logic and graded modal logic.
We then present a modal logic in which counting modalities appear in linear inequalities in order to solve verification tasks on graph neural networks.
We describe an algorithm for the satisfiability problem of that logic.
arXiv Detail & Related papers (2025-10-13T16:57:20Z) - From Neural Networks to Logical Theories: The Correspondence between Fibring Modal Logics and Fibring Neural Networks [17.474679381815026]
Fibring of modal logics is a well-established formalism for combining countable families of modal logics into a single fibred language.
Fibring of neural networks was introduced as a neurosymbolic framework for combining learning and reasoning in neural networks.
arXiv Detail & Related papers (2025-09-28T14:32:42Z) - Decoding Interpretable Logic Rules from Neural Networks [8.571176778812038]
We introduce NeuroLogic, a novel approach for decoding interpretable logic rules from neural networks.
NeuroLogic can be adapted to a wide range of neural networks.
We believe NeuroLogic can help pave the way for understanding the black-box nature of neural networks.
arXiv Detail & Related papers (2025-01-14T17:57:26Z) - Compositional Generalization Across Distributional Shifts with Sparse Tree Operations [77.5742801509364]
We introduce a unified neurosymbolic architecture called the Differentiable Tree Machine.
We significantly increase the model's efficiency through the use of sparse vector representations of symbolic structures.
We enable its application beyond the restricted set of tree2tree problems to the more general class of seq2seq problems.
arXiv Detail & Related papers (2024-12-18T17:20:19Z) - Exploring knowledge graph-based neural-symbolic system from application perspective [0.0]
Achieving human-like reasoning and interpretability in AI systems remains a substantial challenge.
The Neural-Symbolic paradigm, which integrates neural networks with symbolic systems, presents a promising pathway toward more interpretable AI.
This paper explores recent advancements in neural-symbolic integration based on Knowledge Graphs.
arXiv Detail & Related papers (2024-05-06T14:40:50Z) - Neural Markov Prolog [57.13568543360899]
We propose the language Neural Markov Prolog (NMP) as a means to bridge first order logic and neural network design.
NMP allows for the easy generation and presentation of architectures for images, text, relational databases, or other target data types.
arXiv Detail & Related papers (2023-11-27T21:41:47Z) - NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions, however they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
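The fuzzy logic-based continuous relaxation mentioned above can be sketched in a few lines. This is a generic product t-norm relaxation, assumed for illustration rather than taken from the LOGICSEG paper; all function names and the example formula are mine.

```python
# Minimal sketch of a fuzzy-logic continuous relaxation: product t-norm
# semantics make a logical formula differentiable, so its truth value can
# serve as a logic-induced training signal. Names and formula are
# illustrative, not LOGICSEG's actual operators.

def fuzzy_and(a, b):
    return a * b              # product t-norm

def fuzzy_or(a, b):
    return a + b - a * b      # probabilistic sum (dual t-conorm)

def fuzzy_not(a):
    return 1.0 - a

# Grounding the formula  (cat -> animal) == (not cat) or animal
# on soft network outputs yields a truth value in [0, 1]; its distance
# from 1 can be used as a differentiable loss term.
cat, animal = 0.9, 0.7
truth = fuzzy_or(fuzzy_not(cat), animal)
loss = 1.0 - truth
print(round(truth, 2), round(loss, 2))  # 0.73 0.27
```

Because every operator is smooth in its arguments, gradients of the logic loss flow back into the network outputs being grounded.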
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - Extensions to Generalized Annotated Logic and an Equivalent Neural Architecture [4.855957436171202]
We propose a list of desirable criteria for neuro-symbolic systems and examine how some of the existing approaches address these criteria.
We then propose an extension to annotated generalized logic that allows for the creation of an equivalent neural architecture.
Unlike previous approaches that rely on continuous optimization for the training process, our framework is designed as a binarized neural network that uses discrete optimization.
arXiv Detail & Related papers (2023-02-23T17:39:46Z) - Knowledge-based Analogical Reasoning in Neuro-symbolic Latent Spaces [20.260546238369205]
We propose a framework that combines the pattern recognition abilities of neural networks with symbolic reasoning and background knowledge.
We take inspiration from the 'neural algorithmic reasoning' approach [DeepMind 2020] and use problem-specific background knowledge.
We test this on visual analogy problems in Raven's Progressive Matrices, and achieve accuracy competitive with human performance.
arXiv Detail & Related papers (2022-09-19T04:03:20Z) - Neural-Symbolic Integration for Interactive Learning and Conceptual Grounding [1.14219428942199]
We propose neural-symbolic integration for abstract concept explanation and interactive learning.
Interaction with the user confirms or rejects a revision of the neural model.
The approach is illustrated using the Logic Tensor Network framework alongside Concept Activation Vectors and applied to a Convolutional Neural Network.
arXiv Detail & Related papers (2021-12-22T11:24:48Z) - Improving Coherence and Consistency in Neural Sequence Models with
Dual-System, Neuro-Symbolic Reasoning [49.6928533575956]
We use neural inference to mediate between the neural System 1 and the logical System 2.
Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
arXiv Detail & Related papers (2021-07-06T17:59:49Z) - pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
arXiv Detail & Related papers (2021-06-14T15:19:06Z) - Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable, disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
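The "neuron as a weighted real-valued logic formula" idea from the Logical Neural Networks entry above can be sketched with a Łukasiewicz-style weighted conjunction. This is a simplification assumed for illustration, not the paper's exact activation; the function name is mine.

```python
# Sketch of a weighted real-valued logic neuron: a Lukasiewicz-style AND
# whose output stays in [0, 1]. A simplified illustration, not the exact
# Logical Neural Networks activation.

def weighted_and(inputs, weights, bias):
    """Truth value of a weighted real-valued conjunction, clamped to [0, 1].

    Each weight scales how much an input's falsehood (1 - x) pulls the
    conjunction's truth value down from the bias.
    """
    s = bias - sum(w * (1.0 - x) for w, x in zip(weights, inputs))
    return max(0.0, min(1.0, s))

# Unit weights and bias 1 recover classical Lukasiewicz conjunction
# max(0, a + b - 1):
print(weighted_and([1.0, 1.0], [1.0, 1.0], 1.0))  # 1.0
print(weighted_and([1.0, 0.4], [1.0, 1.0], 1.0))  # 0.4
```

Because the weights and bias are real-valued parameters, such a neuron can be trained by gradient descent while retaining its reading as a logical formula.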
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.