Logic Tensor Networks
- URL: http://arxiv.org/abs/2012.13635v3
- Date: Sun, 17 Jan 2021 01:28:44 GMT
- Title: Logic Tensor Networks
- Authors: Samy Badreddine and Artur d'Avila Garcez and Luciano Serafini and
Michael Spranger
- Abstract summary: We present Logic Tensor Networks (LTN), a neurosymbolic formalism and computational model that supports learning and reasoning.
We show that LTN provides a uniform language for the specification and the computation of several AI tasks.
- Score: 9.004005678155023
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence agents are required to learn from their surroundings
and to reason about the knowledge that has been learned in order to make
decisions. While state-of-the-art learning from data typically uses
sub-symbolic distributed representations, reasoning is normally useful at a
higher level of abstraction with the use of a first-order logic language for
knowledge representation. As a result, attempts at combining symbolic AI and
neural computation into neural-symbolic systems have been on the increase. In
this paper, we present Logic Tensor Networks (LTN), a neurosymbolic formalism
and computational model that supports learning and reasoning through the
introduction of a many-valued, end-to-end differentiable first-order logic
called Real Logic as a representation language for deep learning. We show that
LTN provides a uniform language for the specification and the computation of
several AI tasks such as data clustering, multi-label classification,
relational learning, query answering, semi-supervised learning, regression and
embedding learning. We implement and illustrate each of the above tasks with a
number of simple explanatory examples using TensorFlow 2.
Keywords: Neurosymbolic AI, Deep Learning and Reasoning, Many-valued Logic.
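As a rough illustration of Real Logic's differentiable semantics, the sketch below grounds a single first-order axiom in TensorFlow 2, using the product t-norm connectives and the pMeanError universal quantifier described in the LTN literature. The predicate architectures and the toy data are illustrative assumptions; this is not the API of the official `ltn` package.

```python
import tensorflow as tf

# Differentiable connectives of Real Logic under the product t-norm family.
def Not(a):
    return 1.0 - a                      # standard negation

def And(a, b):
    return a * b                        # product t-norm

def Or(a, b):
    return a + b - a * b                # probabilistic sum

def Implies(a, b):
    return 1.0 - a + a * b              # Reichenbach implication

def Forall(truths, p=2.0):
    # Smooth universal quantifier (the pMeanError aggregator from the
    # LTN literature): penalizes formulas that are far from true.
    return 1.0 - tf.reduce_mean((1.0 - truths) ** p) ** (1.0 / p)

# Predicates are any differentiable models with outputs in [0, 1];
# these tiny MLPs and the random "individuals" are illustrative only.
Smokes = tf.keras.Sequential([tf.keras.layers.Dense(16, activation="elu"),
                              tf.keras.layers.Dense(1, activation="sigmoid")])
Cancer = tf.keras.Sequential([tf.keras.layers.Dense(16, activation="elu"),
                              tf.keras.layers.Dense(1, activation="sigmoid")])
people = tf.random.normal((8, 4))       # 8 individuals as 4-d embeddings

with tf.GradientTape() as tape:
    # Satisfaction of the axiom:  forall x. Smokes(x) -> Cancer(x)
    sat = Forall(Implies(Smokes(people), Cancer(people)))
    loss = 1.0 - sat                    # learning = maximizing satisfaction

grads = tape.gradient(loss, Smokes.trainable_variables
                            + Cancer.trainable_variables)
```

Training then amounts to ordinary gradient descent on `1 - sat`, so the same machinery covers the classification, regression, and embedding tasks listed in the abstract by swapping in different predicates and axioms.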
Related papers
- Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning [89.89857766491475]
We propose LACT, a complex reasoning scheme over knowledge graphs (KGs) built upon large language models (LLMs).
We augment the arbitrary first-order logical queries via binary tree decomposition to stimulate the reasoning capability of LLMs.
Experiments across widely used datasets demonstrate that LACT yields substantial improvements (an average gain of +5.5% MRR) over advanced methods.
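A hypothetical sketch of the binary tree decomposition idea (names and traversal are illustrative, not the paper's code): a complex first-order query is represented as a binary tree whose internal nodes are logical operators and whose leaves are atomic relations, so a post-order walk turns one hard query into a curriculum of simpler sub-queries for the LLM.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    op: Optional[str] = None            # "AND" / "OR" at internal nodes
    relation: Optional[str] = None      # atomic relation at leaves
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def decompose(node, steps):
    """Emit sub-queries bottom-up: answer the leaves first, then
    combine the intermediate answer sets at each operator node."""
    if node.relation is not None:
        steps.append(f"answer {node.relation}")
        return
    decompose(node.left, steps)
    decompose(node.right, steps)
    steps.append(f"merge the two previous answer sets with {node.op}")

# (located_in(x, Europe) AND capital_of(x, y)) OR sister_city(x, y)
query = Node(op="OR",
             left=Node(op="AND",
                       left=Node(relation="located_in(x, Europe)"),
                       right=Node(relation="capital_of(x, y)")),
             right=Node(relation="sister_city(x, y)"))
steps = []
decompose(query, steps)
print("\n".join(steps))
```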
arXiv Detail & Related papers (2024-05-02T18:12:08Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
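A minimal sketch of what such a fuzzy relaxation can look like: a class-hierarchy rule over per-pixel scores becomes a differentiable penalty added to the usual segmentation loss. The class indices and the Reichenbach relaxation here are assumptions for illustration, not LOGICSEG's exact formulation.

```python
import tensorflow as tf

def implication_penalty(p_child, p_parent):
    # Relax "child(x) -> parent(x)" as truth = 1 - p_c + p_c * p_p and
    # penalize 1 - truth, pushing child mass under its parent class.
    truth = 1.0 - p_child + p_child * p_parent
    return tf.reduce_mean(1.0 - truth)

# Fake per-pixel, per-class scores from a segmentation head: (H, W, C),
# sigmoid rather than softmax since hierarchy labels are not exclusive.
probs = tf.sigmoid(tf.random.normal((64, 64, 5)))
CAT, ANIMAL = 2, 4                      # hypothetical class indices

logic_loss = implication_penalty(probs[..., CAT], probs[..., ANIMAL])
# total = seg_loss + lambda_logic * logic_loss   (lambda_logic a weight)
```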
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Scallop: A Language for Neurosymbolic Programming [14.148819428748597]
Scallop is a language that combines the benefits of deep learning and logical reasoning.
It is capable of expressing algorithmic reasoning in diverse and challenging AI tasks.
It provides a succinct interface for machine learning programmers to integrate logical domain knowledge.
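To convey the flavor of running logic rules over neural outputs, here is a toy weighted-Datalog step in plain Python. This is NOT Scallop syntax or its scallopy API; the relation names and the (product, max) semiring choice are illustrative only.

```python
# Facts with probabilities, e.g. produced by a neural digit classifier.
digit = {("img0", 3): 0.9, ("img1", 5): 0.8}

# Rule:  sum(S) :- digit(img0, A), digit(img1, B), S = A + B.
sums = {}
for (i0, a), p0 in digit.items():
    for (i1, b), p1 in digit.items():
        if i0 == "img0" and i1 == "img1":
            s = a + b
            # product for conjunction, max over alternative derivations
            sums[s] = max(sums.get(s, 0.0), p0 * p1)

print(sums)   # {8: 0.72...} -- a weighted set of derived facts
```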
arXiv Detail & Related papers (2023-04-10T18:46:53Z)
- Join-Chain Network: A Logical Reasoning View of the Multi-head Attention in Transformer [59.73454783958702]
We propose a symbolic reasoning architecture that chains many join operators together to model output logical expressions.
In particular, we demonstrate that such an ensemble of join-chains can express a broad subset of "tree-structured" first-order logical expressions, named FOET.
We find that the widely used multi-head self-attention module in transformer can be understood as a special neural operator that implements the union bound of the join operator in probabilistic predicate space.
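One way to read that claim concretely, under illustrative shapes and names that are not the paper's code: each head's attention matrix acts as a soft join coefficient over the middle variable, and summing (and clipping) the heads' outputs gives a union bound over the per-head joins.

```python
import tensorflow as tf

num_heads, n, d = 4, 6, 8
queries = tf.random.normal((num_heads, n, d))
keys    = tf.random.normal((num_heads, n, d))
B       = tf.sigmoid(tf.random.normal((n, 1)))   # predicate B(y) in [0, 1]

scores = tf.einsum("hqd,hkd->hqk", queries, keys) / tf.sqrt(float(d))
attn   = tf.nn.softmax(scores, axis=-1)          # rows sum to 1: A_h(x, y)

# Soft join per head:  P_h(x) = sum_y A_h(x, y) * B(y)
per_head = tf.einsum("hxy,yo->hxo", attn, B)
# Union bound over heads:  P(U_h join_h) <= sum_h P_h, clipped to [0, 1]
union_bound = tf.minimum(tf.reduce_sum(per_head, axis=0), 1.0)
```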
arXiv Detail & Related papers (2022-10-06T07:39:58Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine the principles of symbolism and connectionism by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrate that machines can generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Abductive Knowledge Induction From Raw Data [12.868722327487752]
We present Abductive Meta-Interpretive Learning (Meta_Abd), which unites abduction and induction to jointly learn neural networks and induce logic theories from raw data.
Experimental results demonstrate that Meta_Abd outperforms the compared systems in both predictive accuracy and data efficiency.
arXiv Detail & Related papers (2020-10-07T16:33:28Z)
- Neural Logic Reasoning [47.622957656745356]
We propose Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, NOT as neural modules, and conducts propositional logical reasoning through the network for inference.
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
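A rough sketch of the module idea, with layer sizes and the particular regularizers chosen for illustration rather than taken from the paper: logical operations are small trainable networks over event embeddings, nudged toward Boolean laws by regularization.

```python
import tensorflow as tf

d = 16

def module(out_dim):
    return tf.keras.Sequential([tf.keras.layers.Dense(32, activation="relu"),
                                tf.keras.layers.Dense(out_dim)])

NOT = module(d)                  # NOT: R^d -> R^d
AND = module(d)                  # AND: R^{2d} -> R^d (inputs concatenated)

x = tf.random.normal((32, d))
y = tf.random.normal((32, d))

x_and_y   = AND(tf.concat([x, y], axis=-1))
y_and_x   = AND(tf.concat([y, x], axis=-1))
not_not_x = NOT(NOT(x))

# Regularizers push the modules toward Boolean laws, e.g. double
# negation NOT(NOT(x)) = x and commutativity AND(x, y) = AND(y, x).
logic_reg = (tf.reduce_mean(tf.square(not_not_x - x))
             + tf.reduce_mean(tf.square(x_and_y - y_and_x)))
```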
arXiv Detail & Related papers (2020-08-20T14:53:23Z)
- Evaluating Logical Generalization in Graph Neural Networks [59.70452462833374]
We study the task of logical generalization using graph neural networks (GNNs).
Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics.
We find that the ability for models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training.
arXiv Detail & Related papers (2020-03-14T05:45:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.