Extending Answer Set Programs with Neural Networks
- URL: http://arxiv.org/abs/2009.10256v1
- Date: Tue, 22 Sep 2020 00:52:30 GMT
- Title: Extending Answer Set Programs with Neural Networks
- Authors: Zhun Yang
- Abstract summary: We propose NeurASP -- a simple extension of answer set programs by embracing neural networks.
We show that NeurASP can not only improve the perception accuracy of a pre-trained neural network but also help to train a neural network better by imposing restrictions through logic rules.
- Score: 2.512827436728378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The integration of low-level perception with high-level reasoning is one of
the oldest problems in Artificial Intelligence. Recently, several proposals
were made to implement the reasoning process in complex neural network
architectures. While these works aim at extending neural networks with the
capability of reasoning, a natural question that we consider is: can we extend
answer set programs with neural networks to allow complex and high-level
reasoning on neural network outputs? As a preliminary result, we propose
NeurASP -- a simple extension of answer set programs by embracing neural
networks where neural network outputs are treated as probability distributions
over atomic facts in answer set programs. We show that NeurASP can not only
improve the perception accuracy of a pre-trained neural network but also help
to train a neural network better by imposing restrictions through logic rules.
However, training with NeurASP would take much more time than pure neural
network training due to the internal use of a symbolic reasoning engine. For
future work, we plan to investigate potential ways to address the scalability
issue of NeurASP. One potential direction is to embed logic programs directly
in neural networks. Along this route, we plan to first design a SAT solver
using neural networks and then extend such a solver to support logic programs.
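
To make the core idea concrete, here is a minimal, self-contained Python sketch of treating a classifier's softmax output as a probability distribution over atomic facts and reasoning over those facts with a rule, in the digit-addition setting commonly used to illustrate neuro-symbolic systems. This is not the NeurASP implementation: the probability vectors, function names, and the brute-force enumeration are illustrative assumptions only.

```python
# Minimal sketch (not the NeurASP system): read a classifier's softmax output
# as a probability distribution over atomic facts digit(img, D), then reason
# about a derived atom with a rule roughly in the spirit of
#   addition(A, B, N) :- digit(A, D1), digit(B, D2), N = D1 + D2.
# The probability vectors below are made-up illustrative values.

from itertools import product

# Hypothetical softmax outputs of a digit classifier on two images:
# p_img1[d] = P(digit(img1, d)), p_img2[d] = P(digit(img2, d)).
p_img1 = [0.05, 0.70, 0.05, 0.05, 0.05, 0.02, 0.02, 0.02, 0.02, 0.02]
p_img2 = [0.02, 0.02, 0.60, 0.20, 0.05, 0.03, 0.03, 0.02, 0.02, 0.01]

def prob_addition(p1, p2, n):
    """P(addition(img1, img2, n)): sum the joint probability of every pair of
    digit atoms whose values add up to n, assuming the two network outputs are
    independent (each pair is one way of satisfying the rule)."""
    return sum(p1[d1] * p2[d2]
               for d1, d2 in product(range(10), repeat=2)
               if d1 + d2 == n)

if __name__ == "__main__":
    print("P(addition(img1, img2, 3)) =", round(prob_addition(p_img1, p_img2, 3), 4))
    most_probable = max(range(19), key=lambda n: prob_addition(p_img1, p_img2, n))
    print("most probable sum:", most_probable)
```

In NeurASP itself, the rule would be written as an ASP rule and the enumeration over joint choices is delegated to an answer set solver over stable models; that symbolic reasoning step is the source of the training-time overhead mentioned above.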
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions; a sketch of the classic Sinkhorn step follows this entry.
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
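
As a point of reference for the LinSATNet entry above, the following is a minimal sketch of the classic single-constraint Sinkhorn iteration (alternating row and column rescaling toward prescribed marginals). The paper's extension to multiple sets of marginals is not reproduced here, and the function name, temperature, and toy values are assumptions.

```python
import numpy as np

def sinkhorn(scores, row_marginals, col_marginals, n_iters=50, tau=0.1):
    """Classic Sinkhorn normalization (illustrative sketch): rescale a positive
    matrix so that its row and column sums approach the given marginals."""
    K = np.exp(scores / tau)                  # positive kernel from raw scores
    u = np.ones_like(row_marginals, dtype=float)
    v = np.ones_like(col_marginals, dtype=float)
    for _ in range(n_iters):
        u = row_marginals / (K @ v)           # match row sums
        v = col_marginals / (K.T @ u)         # match column sums
    return np.diag(u) @ K @ np.diag(v)

# Toy usage: push a random 3x3 score matrix toward a doubly stochastic one.
P = sinkhorn(np.random.randn(3, 3), np.ones(3), np.ones(3))
print(P.sum(axis=1), P.sum(axis=0))           # both approach the target marginals
```

Because the iteration consists only of elementwise and matrix operations, it is differentiable end to end, which is what makes Sinkhorn-style layers usable inside neural networks.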
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to this issue is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- Bridging Logic and Learning: A Neural-Symbolic Approach for Enhanced Reasoning in Neural Models (ASPER) [0.13053649021965597]
This paper introduces an approach designed to improve the performance of neural models in learning reasoning tasks.
It achieves this by integrating Answer Set Programming solvers and domain-specific expertise.
The model shows a significant improvement in solving Sudoku puzzles using only 12 puzzles for training and testing.
arXiv Detail & Related papers (2023-12-18T19:06:00Z)
- NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z)
- Taming Binarized Neural Networks and Mixed-Integer Programs [2.7624021966289596]
We show that binarized neural networks admit a tame representation.
This makes it possible to use the framework of Bolte et al. for implicit differentiation.
This approach could also be used for a broader class of mixed-integer programs.
arXiv Detail & Related papers (2023-10-05T21:04:16Z)
- NeurASP: Embracing Neural Networks into Answer Set Programming [5.532477732693001]
NeurASP is a simple extension of answer set programs by embracing neural networks.
By treating the neural network output as the probability distribution over atomic facts in answer set programs, NeurASP provides a simple and effective way to integrate sub-symbolic and symbolic computation.
arXiv Detail & Related papers (2023-07-15T04:03:17Z)
- A Neural Lambda Calculus: Neurosymbolic AI meets the foundations of computing and functional programming [0.0]
We will analyze the ability of neural networks to learn how to execute programs as a whole.
We will introduce the use of integrated neural learning and calculi formalization.
arXiv Detail & Related papers (2023-04-18T20:30:16Z)
- The mathematics of adversarial attacks in AI -- Why deep learning is unstable despite the existence of stable neural networks [69.33657875725747]
We prove that any training procedure based on training neural networks for classification problems with a fixed architecture will yield neural networks that are either inaccurate or unstable (if accurate).
The key is that stable and accurate neural networks must have variable dimensions depending on the input; in particular, variable dimensions are a necessary condition for stability.
Our result points towards the paradox that accurate and stable neural networks exist, yet modern algorithms do not compute them.
arXiv Detail & Related papers (2021-09-13T16:19:25Z)
- Towards Understanding Hierarchical Learning: Benefits of Neural Representations [160.33479656108926]
In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks.
We show that neural representation can achieve improved sample complexities compared with the raw input.
Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
arXiv Detail & Related papers (2020-06-24T02:44:54Z)
- A neural network model of perception and reasoning [0.0]
We show that a simple set of biologically consistent organizing principles confers these capabilities on neuronal networks.
We implement these principles in a novel machine learning algorithm, based on concept construction instead of optimization, to design deep neural networks that reason with explainable neuron activity.
arXiv Detail & Related papers (2020-02-26T06:26:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.