Neural Logic Reasoning
- URL: http://arxiv.org/abs/2008.09514v1
- Date: Thu, 20 Aug 2020 14:53:23 GMT
- Title: Neural Logic Reasoning
- Authors: Shaoyun Shi, Hanxiong Chen, Weizhi Ma, Jiaxin Mao, Min Zhang, Yongfeng
Zhang
- Abstract summary: We propose Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, NOT as neural modules, and conducts propositional logical reasoning through the network for inference.
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
- Score: 47.622957656745356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed the success of deep neural networks in many
research areas. The fundamental idea behind the design of most neural networks
is to learn similarity patterns from data for prediction and inference, which
lacks the ability of cognitive reasoning. However, concrete reasoning ability
is critical to many theoretical and practical problems. On the other hand,
traditional symbolic reasoning methods are good at making logical inferences,
but they mostly rely on hard, rule-based reasoning, which limits their ability
to generalize across tasks, since different tasks may require different rules.
Both reasoning and generalization ability are important for prediction tasks
such as recommender systems, where reasoning provides a strong connection
between user history and target items for accurate prediction, and
generalization helps the model to draw a robust user portrait over noisy
inputs.
In this paper, we propose Logic-Integrated Neural Network (LINN) to integrate
the power of deep learning and logic reasoning. LINN is a dynamic neural
architecture that builds the computational graph according to input logical
expressions. It learns basic logical operations such as AND, OR, NOT as neural
modules, and conducts propositional logical reasoning through the network for
inference. Experiments on a theoretical task show that LINN performs strongly
at solving logical equations for variable values. Furthermore, we test
our approach on the practical task of recommendation by formulating the task
into a logical inference problem. Experiments show that LINN significantly
outperforms state-of-the-art recommendation models in Top-K recommendation,
which verifies the potential of LINN in practice.
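To make the dynamic architecture concrete, here is a minimal PyTorch sketch of
the general idea: AND, OR, and NOT are learned as neural modules over vectors,
and the computation graph is assembled recursively from the structure of each
input expression. The module shapes, the tuple-based expression encoding, and
all names are illustrative assumptions, not the paper's implementation (which
additionally constrains the modules to behave logically, e.g. via
regularization).

```python
import torch
import torch.nn as nn

class NeuralLogic(nn.Module):
    """AND, OR, NOT learned as neural modules over d-dimensional vectors."""
    def __init__(self, dim=64):
        super().__init__()
        self.AND = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))
        self.OR = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                nn.Linear(dim, dim))
        self.NOT = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, expr, variables):
        # expr is a nested tuple such as ('or', ('not', 'a'), 'b'); the
        # computation graph is built dynamically from its structure.
        if isinstance(expr, str):
            return variables[expr]                 # leaf: a variable embedding
        op, *args = expr
        if op == 'not':
            return self.NOT(self.forward(args[0], variables))
        left = self.forward(args[0], variables)
        right = self.forward(args[1], variables)
        pair = torch.cat([left, right], dim=-1)
        return self.AND(pair) if op == 'and' else self.OR(pair)

dim = 64
net = NeuralLogic(dim)
variables = {'a': torch.randn(1, dim), 'b': torch.randn(1, dim)}
out = net(('or', ('not', 'a'), 'b'), variables)    # vector for (NOT a) OR b
print(out.shape)                                   # torch.Size([1, 64])
```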
Related papers
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and
Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive
learning and logic reasoning with both rich data and symbolic knowledge.
Through fuzzy-logic-based continuous relaxation, logical formulae are grounded
onto data and neural computational graphs, enabling logic-induced network
training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
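As a rough illustration of the relaxation step, the sketch below grounds a
hypothetical hierarchy rule onto predicted probabilities using product-logic
connectives, so that rule violations become a differentiable loss. The rule,
the tensor names, and the choice of connectives are assumptions made for
illustration, not LOGICSEG's actual formulae.

```python
import torch

# Product-logic connectives over truth values in [0, 1].
def f_not(a):        return 1.0 - a
def f_and(a, b):     return a * b
def f_or(a, b):      return a + b - a * b
def f_implies(a, b): return f_or(f_not(a), b)

# Hypothetical hierarchy rule "cat -> animal", grounded on predicted
# class probabilities; violations of the rule become a training loss.
cat_logits = torch.randn(8, requires_grad=True)     # stand-in network outputs
animal_logits = torch.randn(8, requires_grad=True)
truth = f_implies(torch.sigmoid(cat_logits), torch.sigmoid(animal_logits))
logic_loss = (1.0 - truth).mean()   # high when the rule is violated
logic_loss.backward()               # gradients flow back to the logits
print(cat_logits.grad is not None)  # True: the formula trains the network
```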
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Continual Reasoning: Non-Monotonic Reasoning in Neurosymbolic AI using
Continual Learning [2.912595438026074]
We show that by combining a neural-symbolic system with methods from continual
learning, Logic Tensor Networks (LTNs) can obtain a higher level of accuracy.
Continual learning is added to LTNs by adopting a curriculum of learning from knowledge and data with recall.
Results indicate significant improvement on the non-monotonic reasoning problem.
arXiv Detail & Related papers (2023-05-03T15:11:34Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks
[65.23508422635862]
We propose learning rules with the recently introduced Logical Neural Networks
(LNN).
Compared to other approaches, LNNs offer a strong connection to classical
Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Evaluating Relaxations of Logic for Neural Networks: A Comprehensive Study
[17.998891912502092]
We study the question of how best to relax logical expressions that represent labeled examples and knowledge about a problem.
We present theoretical and empirical criteria for characterizing which relaxation would perform best in various scenarios.
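For intuition, the sketch below shows three standard t-norm relaxations of
conjunction over truth values in [0, 1]; the paper's actual criteria and the
full set of relaxations it studies are not reproduced here.

```python
# Three common relaxations of AND; the choice changes both the truth value
# a formula receives and the gradients a neural network sees.
def product_and(a, b):     return a * b                   # product logic
def lukasiewicz_and(a, b): return max(0.0, a + b - 1.0)   # Lukasiewicz logic
def godel_and(a, b):       return min(a, b)               # Godel (min) logic

for relax in (product_and, lukasiewicz_and, godel_and):
    print(relax.__name__, round(relax(0.7, 0.6), 2))      # 0.42, 0.3, 0.6
```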
arXiv Detail & Related papers (2021-07-28T21:16:58Z)
- Reinforcement Learning with External Knowledge by using Logical Neural
Networks [67.46162586940905]
A recent neuro-symbolic framework called the Logical Neural Networks (LNNs) can simultaneously provide key-properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both
neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted
real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target
variables, and corresponds to logical reasoning.
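As a taste of the weighted real-valued logic, the sketch below implements a
single Lukasiewicz-style weighted conjunction neuron with learnable per-operand
weights and a bias. This simplification is our own assumption for illustration;
the full LNN framework additionally tracks lower and upper truth bounds and
constrains the weights.

```python
import torch
import torch.nn as nn

class WeightedAnd(nn.Module):
    """Weighted conjunction: clamp(beta - sum_i w_i * (1 - x_i), 0, 1)."""
    def __init__(self, n_inputs):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))   # per-operand importance
        self.beta = nn.Parameter(torch.tensor(1.0))   # bias / threshold

    def forward(self, x):
        # x holds truth values in [0, 1], shape (..., n_inputs).
        return torch.clamp(self.beta - ((1.0 - x) * self.w).sum(-1), 0.0, 1.0)

neuron = WeightedAnd(2)
print(neuron(torch.tensor([0.9, 0.8])))  # about 0.7: AND of mostly-true inputs
```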
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
- Neural Collaborative Reasoning [31.03627817834551]
We propose to advance Collaborative Filtering (CF) to Collaborative Reasoning
(CR).
CR means that each user knows part of the reasoning space, and they collaborate for reasoning in the space to estimate preferences for each other.
We integrate the power of representation learning and logical reasoning, where representations capture similarity patterns in data from perceptual perspectives.
arXiv Detail & Related papers (2020-05-16T23:29:31Z)
- Evaluating Logical Generalization in Graph Neural Networks
[59.70452462833374]
We study the task of logical generalization using graph neural networks
(GNNs).
Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics.
We find that the ability for models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training.
arXiv Detail & Related papers (2020-03-14T05:45:55Z)
- Relational Neural Machines [19.569025323453257]
This paper presents a novel framework that allows jointly training the
parameters of the learners and of a First-Order Logic based reasoner.
A Relational Neural Machine is able to recover both classical learning results
in the case of pure sub-symbolic learning, and Markov Logic Networks.
Proper algorithmic solutions are devised to make learning and inference tractable in large-scale problems.
arXiv Detail & Related papers (2020-02-06T10:53:57Z)