Bridging Logic and Learning: A Neural-Symbolic Approach for Enhanced
Reasoning in Neural Models (ASPER)
- URL: http://arxiv.org/abs/2312.11651v1
- Date: Mon, 18 Dec 2023 19:06:00 GMT
- Title: Bridging Logic and Learning: A Neural-Symbolic Approach for Enhanced
Reasoning in Neural Models (ASPER)
- Authors: Fadi Al Machot
- Abstract summary: This paper introduces an approach designed to improve the performance of neural models in learning reasoning tasks.
It achieves this by integrating Answer Set Programming solvers and domain-specific expertise.
The model shows a significant improvement in solving Sudoku puzzles using only 12 puzzles for training and testing.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural-symbolic learning, an intersection of neural networks and symbolic
reasoning, aims to blend neural networks' learning capabilities with symbolic
AI's interpretability and reasoning. This paper introduces an approach designed
to improve the performance of neural models in learning reasoning tasks. It
achieves this by integrating Answer Set Programming (ASP) solvers and
domain-specific expertise, an approach that diverges from traditional, more
complex neural-symbolic models. In this paper, a shallow artificial neural
network (ANN) is specifically trained to solve Sudoku puzzles with minimal
training data. The model has a unique loss function that integrates losses
calculated using the ASP solver outputs, effectively enhancing its training
efficiency. Most notably, the model shows a significant improvement in solving
Sudoku puzzles using only 12 puzzles for training and testing without
hyperparameter tuning. This advancement indicates that the model's enhanced
reasoning capabilities have practical applications, extending well beyond
Sudoku puzzles to potentially include a variety of other domains. The code can
be found on GitHub: https://github.com/Fadi2200/ASPEN.
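To make the abstract's loss idea concrete, here is a minimal sketch of a shallow Sudoku network with a combined objective. This is an interpretation, not the ASPER implementation: in ASPER the extra loss term is calculated from ASP solver outputs, whereas the sketch substitutes a differentiable relaxation of the Sudoku constraints so the example stays self-contained.
```python
# Illustrative sketch only: a shallow ANN for Sudoku with a combined loss.
# ASPER derives its symbolic term from ASP solver outputs; a differentiable
# relaxation of the Sudoku constraints stands in for that signal here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowSudokuNet(nn.Module):
    """One hidden layer mapping a one-hot 9x9 grid (0 = blank) to digit logits."""
    def __init__(self, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(81 * 10, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 81 * 9),
        )

    def forward(self, x):
        return self.net(x).view(-1, 81, 9)   # per-cell logits for digits 1-9

def constraint_penalty(logits):
    """Differentiable stand-in for solver feedback: the probability mass of
    each digit in every row and column should not exceed 1."""
    p = logits.softmax(dim=-1).view(-1, 9, 9, 9)    # (batch, row, col, digit)
    penalty = 0.0
    for counts in (p.sum(dim=1), p.sum(dim=2)):     # column-wise, row-wise mass
        penalty = penalty + F.relu(counts - 1.0).sum()
    return penalty / p.shape[0]

def combined_loss(logits, target, lam=0.1):
    """Supervised cross-entropy (target: long tensor of digit indices 0-8)
    plus the stand-in symbolic term."""
    ce = F.cross_entropy(logits.reshape(-1, 9), target.reshape(-1))
    return ce + lam * constraint_penalty(logits)
```
The 3x3-box constraint and ASPER's actual weighting are omitted; the point is only the shape of a loss that mixes data fit with logic-derived feedback.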
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer, based on an extension of the classic Sinkhorn algorithm, for jointly encoding multiple sets of marginal distributions (a sketch of the underlying Sinkhorn step follows this entry).
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
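For context on the LinSATNet entry above, the classic Sinkhorn step it builds on can be written in a few lines. This is the textbook building block, not LinSATNet's multi-marginal extension.
```python
# Classic Sinkhorn normalization: alternately rescale rows and columns so a
# positive matrix approaches doubly stochastic form. It is differentiable,
# so it can sit inside a network as a layer.
import torch

def sinkhorn(scores, n_iters=20, tau=1.0):
    s = torch.exp(scores / tau)               # ensure strictly positive entries
    for _ in range(n_iters):
        s = s / s.sum(dim=-1, keepdim=True)   # rows sum to 1
        s = s / s.sum(dim=-2, keepdim=True)   # columns sum to 1
    return s

x = torch.randn(4, 4, requires_grad=True)
p = sinkhorn(x)
print(p.sum(dim=0), p.sum(dim=1))             # both approach vectors of ones
```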
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to the limitations of purely neural learning is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods use a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task (a schematic of this pipeline follows this entry).
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
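The perception-to-symbols-to-reasoner pipeline described in the entry above can be paraphrased as follows. The names and the toy addition rule are illustrative (borrowed from the common MNIST-addition NeSy benchmark), not the paper's API.
```python
# Schematic NeSy pipeline: a network maps raw inputs to symbols, and a
# symbolic stage computes the task output. Illustrative names only.
import torch
import torch.nn as nn

perception = nn.Sequential(                   # e.g. a digit classifier
    nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
)

def reasoner(a: int, b: int) -> int:
    """Symbolic stage: a toy rule (sum of two recognized digits)."""
    return a + b

def nesy_forward(img_a, img_b):
    sym_a = perception(img_a).argmax(dim=-1)  # perception -> symbol
    sym_b = perception(img_b).argmax(dim=-1)
    return [reasoner(int(x), int(y)) for x, y in zip(sym_a, sym_b)]
```
Note the argmax severs gradients between the downstream label and the perception network; practical NeSy methods replace it with a probabilistic or relaxed interface, and the slow convergence and local minima mentioned above stem from exactly this indirect supervision.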
- NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z)
- Are Deep Neural Networks SMARTer than Second Graders? [85.60342335636341]
We evaluate the abstraction, deduction, and generalization abilities of neural networks in solving visuo-linguistic puzzles designed for children in the 6-8 age group.
Our dataset consists of 101 unique puzzles; each puzzle comprises a picture and a question, and solving it requires a mix of several elementary skills, including arithmetic, algebra, and spatial reasoning.
Experiments reveal that while powerful deep models offer reasonable performance on puzzles in a supervised setting, they perform no better than random when evaluated for generalization.
arXiv Detail & Related papers (2022-12-20T04:33:32Z)
- Experimental study of Neural ODE training with adaptive solver for dynamical systems modeling [72.84259710412293]
So-called adaptive ODE solvers can adapt their evaluation strategy to the complexity of the problem at hand.
This paper describes a simple set of experiments showing why adaptive solvers cannot be seamlessly leveraged as a black box for dynamical systems modelling (a minimal usage example follows this entry).
arXiv Detail & Related papers (2022-11-13T17:48:04Z)
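As a concrete reference point for the entry above, here is what "adaptive" means operationally in an off-the-shelf solver; the test system is arbitrary.
```python
# An adaptive ODE solver in practice: SciPy's RK45 chooses its own step
# sizes to meet the requested error tolerances.
from scipy.integrate import solve_ivp

def f(t, y):
    # arbitrary test system: a damped oscillator
    return [y[1], -50.0 * y[0] - 0.5 * y[1]]

sol = solve_ivp(f, (0.0, 10.0), [1.0, 0.0], method="RK45",
                rtol=1e-6, atol=1e-9)   # tolerances drive step-size control
print(sol.t.size)                       # steps the solver selected on its own
```
Inside Neural ODE training the learned dynamics change every iteration, so the solver's step choices (and hence cost and gradients) shift as well, which is the black-box pitfall the paper examines.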
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine the interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- End-to-End Neuro-Symbolic Architecture for Image-to-Image Reasoning Tasks [15.649929244635269]
We study neural-symbolic-neural models for reasoning tasks that require a conversion from an image input to an image output.
We propose NSNnet, an architecture that combines an image reconstruction loss with a novel output encoder to generate a supervisory signal (a generic sketch of this loss shape follows this entry).
arXiv Detail & Related papers (2021-06-06T13:27:33Z)
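The loss combination described for NSNnet can be sketched generically; the encoder, weighting, and signal construction here are placeholders, not the paper's design.
```python
# Generic shape of a combined objective like the one the NSNnet entry
# describes: a reconstruction term plus an encoder-derived supervisory term.
# All names and the weighting are placeholders.
import torch.nn.functional as F

def combined_objective(decoded_img, target_img, out_code, sup_code, alpha=0.5):
    recon = F.mse_loss(decoded_img, target_img)    # image reconstruction loss
    supervision = F.mse_loss(out_code, sup_code)   # output-encoder signal
    return recon + alpha * supervision
```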
- Extending Answer Set Programs with Neural Networks [2.512827436728378]
We propose NeurASP, a simple extension of answer set programs that embraces neural networks.
We show that NeurASP can not only improve the perception accuracy of a pre-trained neural network, but also help train a neural network better by imposing restrictions through logic rules (a conceptual Python paraphrase follows this entry).
arXiv Detail & Related papers (2020-09-22T00:52:30Z)
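The way logic rules shape a network's training in NeurASP can be paraphrased in plain Python for the classic MNIST-addition example; NeurASP itself has its own program syntax and solver integration, so this is a conceptual sketch only.
```python
# Conceptual paraphrase of NeurASP-style training (MNIST addition): neural
# outputs are read as probabilities of ASP facts digit(img, 0..9), and a rule
#   addition(A,B,N) :- digit(A,NA), digit(B,NB), N = NA + NB.
# induces a probability for the observed sum, which training maximizes.
import torch

def rule_probability(p_a, p_b, target_sum):
    """P(addition | neural atoms): marginalize over the digit pairs that
    satisfy the rule. p_a, p_b are softmax outputs of the perception net."""
    prob = p_a.new_zeros(())
    for na in range(10):
        nb = target_sum - na
        if 0 <= nb <= 9:
            prob = prob + p_a[na] * p_b[nb]
    return prob

# Minimizing -log rule_probability(...) backpropagates through the logic,
# which is how rules "give restrictions" that improve the network.
logits_a = torch.randn(10, requires_grad=True)
logits_b = torch.randn(10)
loss = -torch.log(rule_probability(logits_a.softmax(-1),
                                   logits_b.softmax(-1), 7))
loss.backward()
```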
- ODEN: A Framework to Solve Ordinary Differential Equations using Artificial Neural Networks [0.0]
We prove that a specific loss function, which does not require knowledge of the exact solution, is a suitable metric for evaluating neural networks' performance.
Neural networks are shown to be proficient at approximating continuous solutions within their training domains.
A user-friendly and adaptable open-source code (ODE$\mathcal{N}$) is provided on GitHub (a minimal residual-loss sketch follows this entry).
arXiv Detail & Related papers (2020-05-28T15:34:10Z)
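The "loss that does not require the exact solution" in the ODEN entry is, in spirit, the ODE residual of a trial network. A minimal autograd version, my paraphrase rather than the released ODE$\mathcal{N}$ code, looks like this.
```python
# Solution-free training signal in the spirit of the ODEN entry: penalize
# the residual of y' = f(x, y) for a trial network y = net(x). Sketch only.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def residual_loss(x, f):
    x = x.requires_grad_(True)
    y = net(x)
    dy_dx, = torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)
    return ((dy_dx - f(x, y)) ** 2).mean()   # no exact solution needed

# Example: y' = -y on [0, 2]. Boundary/initial conditions are usually handled
# by a reparameterized trial function or an extra penalty (omitted here).
x = torch.linspace(0.0, 2.0, 64).unsqueeze(-1)
loss = residual_loss(x, lambda x, y: -y)
loss.backward()
```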