Learning Syllogism with Euler Neural-Networks
- URL: http://arxiv.org/abs/2007.07320v2
- Date: Mon, 20 Jul 2020 09:58:24 GMT
- Title: Learning Syllogism with Euler Neural-Networks
- Authors: Tiansi Dong, Chengjiang Li, Christian Bauckhage, Juanzi Li, Stefan
Wrobel, Armin B. Cremers
- Abstract summary: The central vector of a ball inherits the representation power of a traditional neural network.
A novel back-propagation algorithm with six Rectified Spatial Units (ReSU) optimizes an Euler diagram representing logical premises.
In contrast to traditional neural networks, ENN can precisely represent all 24 different structures of Syllogism.
- Score: 20.47827965932698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional neural networks represent everything as a vector and can
approximate a subset of logical reasoning to a certain degree. Since basic logical
relations are better represented by topological relations between regions, we
propose a novel neural network that represents everything as a ball and is able
to learn topological configurations as an Euler diagram; hence the name Euler
Neural-Network (ENN). The central vector of a ball inherits the representation
power of a traditional neural network. ENN distinguishes four spatial statuses
between balls, namely, being disconnected, being partially overlapped, being part
of, and being inverse part of. Within each status, ideal values are defined for
efficient reasoning. A novel back-propagation algorithm with six Rectified
Spatial Units (ReSU) can optimize an Euler diagram representing logical premises,
from which a logical conclusion can be deduced. In contrast to traditional neural
networks, ENN can precisely represent all 24 different structures of Syllogism.
Two large datasets are created: one, extracted from WordNet-3.0, covers all types
of Syllogism reasoning; the other contains all family relations extracted from
DBpedia. Experimental results confirm the superior power of ENN in logical
representation and reasoning. Datasets and source code are available upon request.
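A minimal sketch of the ball representation described in the abstract, assuming only what it states: each concept is a ball with a central vector and a radius, and the four spatial statuses are read off from the distance between centres and the two radii. The function name, thresholds, and the transitivity check at the end are illustrative (my own toy construction), not the paper's implementation or its ReSU-based optimization.

```python
import numpy as np

def status(c1, r1, c2, r2, eps=1e-9):
    """Classify the spatial status of ball 1 relative to ball 2."""
    d = np.linalg.norm(c1 - c2)          # distance between central vectors
    if d >= r1 + r2 - eps:
        return "disconnected"            # no common region
    if d + r1 <= r2 + eps:
        return "part_of"                 # ball 1 lies inside ball 2
    if d + r2 <= r1 + eps:
        return "inverse_part_of"         # ball 2 lies inside ball 1
    return "partially_overlapped"

# Premises of the syllogism "all A are B" and "all B are C",
# encoded as part-of relations between balls (toy 2-D example).
A = (np.array([0.0, 0.0]), 1.0)
B = (np.array([0.5, 0.0]), 2.0)
C = (np.array([1.0, 0.0]), 4.0)

print(status(*A, *B))   # part_of  -> all A are B
print(status(*B, *C))   # part_of  -> all B are C
print(status(*A, *C))   # part_of  -> conclusion: all A are C
```

Because "part of" between balls is transitive by construction, the conclusion of the syllogism Barbara can be read directly off the optimized diagram; the other syllogistic structures correspond to other combinations of the four statuses.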
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability to neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
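For context, the classic Sinkhorn algorithm that LinSATNet extends alternately normalises the rows and columns of a non-negative matrix until they (approximately) match prescribed marginals. A minimal numpy sketch of that vanilla algorithm (not LinSATNet's multi-set extension or its differentiable layer) might look like this:

```python
import numpy as np

def sinkhorn(M, row_sums, col_sums, n_iter=100):
    """Scale a non-negative matrix so its rows/columns match the given marginals."""
    P = M.astype(float).copy()
    for _ in range(n_iter):
        P *= (row_sums / P.sum(axis=1))[:, None]   # enforce row marginals
        P *= (col_sums / P.sum(axis=0))[None, :]   # enforce column marginals
    return P

M = np.random.rand(3, 3) + 1e-6
P = sinkhorn(M, row_sums=np.ones(3), col_sums=np.ones(3))
print(P.sum(axis=1), P.sum(axis=0))   # both close to [1, 1, 1]
```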
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- Nearest Neighbor Representations of Neural Circuits [12.221087476416056]
Nearest Neighbor (NN) representation is a novel approach to computation.
We provide explicit constructions for their NN representation with an explicit bound on the number of bits.
Example functions include NN representations of convex polytopes (AND of threshold gates), IP2, OR of threshold gates, and linear or exact decision lists.
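As an illustration of the idea (my own toy construction, not one of the paper's explicit constructions or bounds), the n-bit OR function has a nearest-neighbor representation with just two anchors: the all-zeros point labeled 0 and the point (1/n, ..., 1/n) labeled 1; every input with at least one 1 is strictly closer to the second anchor.

```python
import itertools
import numpy as np

n = 4
anchors = np.array([np.zeros(n), np.full(n, 1.0 / n)])  # labels: 0, 1
labels = np.array([0, 1])

def nn_predict(x):
    """Return the label of the nearest anchor (the value of the NN representation)."""
    d = np.linalg.norm(anchors - x, axis=1)
    return labels[np.argmin(d)]

# Verify that the two anchors compute OR on all 2^n Boolean inputs.
for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits, dtype=float)
    assert nn_predict(x) == int(any(bits))
print("two anchors realize OR on", 2 ** n, "inputs")
```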
arXiv Detail & Related papers (2024-02-13T19:38:01Z)
- Closed-Form Interpretation of Neural Network Classifiers with Symbolic Gradients [0.7832189413179361]
I introduce a unified framework for finding a closed-form interpretation of any single neuron in an artificial neural network.
I demonstrate how to interpret neural network classifiers to reveal closed-form expressions of the concepts encoded in their decision boundaries.
arXiv Detail & Related papers (2024-01-10T07:47:42Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
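A hedged sketch of the fuzzy-relaxation step mentioned above: a hard rule such as "dog implies animal" (a pixel scored as dog must also score high for animal) can be grounded as a differentiable penalty using product-style fuzzy operators. This shows only the generic idea; the rule, tensors, and loss are my own toy example, not LOGICSEG's actual grounding.

```python
import torch

def implies(p, q):
    """Fuzzy relaxation of p -> q (i.e. NOT p OR q) with the probabilistic-sum OR."""
    return (1 - p) + p * q          # equals 1 when the rule is fully satisfied

# Toy per-pixel class probabilities for 'dog' and its superclass 'animal'.
p_dog = torch.tensor([0.9, 0.2, 0.7], requires_grad=True)
p_animal = torch.tensor([0.95, 0.9, 0.3], requires_grad=True)

# Logic-induced loss: penalize violations of "dog -> animal" on every pixel.
rule_satisfaction = implies(p_dog, p_animal)
logic_loss = (1 - rule_satisfaction).mean()
logic_loss.backward()               # gradients flow into both score tensors
print(float(logic_loss))
```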
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Binary Multi Channel Morphological Neural Network [5.551756485554158]
We introduce a Binary Morphological Neural Network (BiMoNN) built upon the convolutional neural network.
We demonstrate an equivalence between BiMoNNs and morphological operators that we can use to binarize entire networks.
These can learn classical morphological operators and show promising results on a medical imaging application.
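For reference, the classical binary morphological operators that such layers learn are dilation and erosion with a structuring element. A plain numpy/scipy sketch of the operators themselves (unrelated to the BiMoNN code) is:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

image = np.array([[0, 0, 0, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 0, 0, 0]], dtype=bool)

selem = np.array([[0, 1, 0],     # cross-shaped structuring element
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)

print(binary_dilation(image, selem).astype(int))  # grows the foreground
print(binary_erosion(image, selem).astype(int))   # shrinks the foreground
```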
arXiv Detail & Related papers (2022-04-19T09:26:11Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
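A very rough sketch of the routing idea (toy code I wrote, not the Neural Interpreters architecture): each input token is compared against learned function signatures, and its representation is updated by a softmax-weighted mixture of the function modules' outputs.

```python
import torch
import torch.nn as nn

class ToyRouter(nn.Module):
    def __init__(self, dim=16, n_functions=4):
        super().__init__()
        self.signatures = nn.Parameter(torch.randn(n_functions, dim))
        self.functions = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU()) for _ in range(n_functions)
        )

    def forward(self, tokens):                      # tokens: (batch, seq, dim)
        scores = tokens @ self.signatures.t()       # compatibility with each function
        weights = scores.softmax(dim=-1)            # soft routing decision
        outputs = torch.stack([f(tokens) for f in self.functions], dim=-1)
        return (outputs * weights.unsqueeze(-2)).sum(dim=-1)

x = torch.randn(2, 5, 16)
print(ToyRouter()(x).shape)                         # torch.Size([2, 5, 16])
```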
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can make the data linearly separable with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
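A quick empirical illustration of that claim (a toy experiment of mine, not the paper's analysis): two concentric circles are not linearly separable in the plane, but after a wide random ReLU layer with Gaussian weights and uniform biases, a linear classifier separates them.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression

X, y = make_circles(n_samples=400, factor=0.4, noise=0.05, random_state=0)

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 500))              # standard Gaussian weights
b = rng.uniform(-1, 1, size=500)               # uniformly distributed biases
H = np.maximum(X @ W + b, 0.0)                 # random ReLU features

print(LogisticRegression(max_iter=2000).fit(X, y).score(X, y))   # well below 1.0: not separable
print(LogisticRegression(max_iter=2000).fit(H, y).score(H, y))   # near 1.0: linearly separable
```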
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- NeuralLog: a Neural Logic Language [0.0]
We propose NeuralLog, a first-order logic language that is compiled to a neural network.
The main goal of NeuralLog is to bridge logic programming and deep learning.
We have shown that NeuralLog can learn link prediction and classification tasks, using the same theory as the compared systems.
arXiv Detail & Related papers (2021-05-04T12:09:35Z)
- Logic Tensor Networks [9.004005678155023]
We present Logic Tensor Networks (LTN), a neurosymbolic formalism and computational model that supports learning and reasoning.
We show that LTN provides a uniform language for the specification and the computation of several AI tasks.
arXiv Detail & Related papers (2020-12-25T22:30:18Z)
- Neural Logic Reasoning [47.622957656745356]
We propose Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, NOT as neural modules, and conducts propositional logical reasoning through the network for inference.
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
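A toy sketch of the idea of learning logical operations as neural modules (a minimal version of my own, not LINN's architecture or its logical regularizers): AND, OR, and NOT are small MLPs acting on embedding vectors and are composed to evaluate an expression.

```python
import torch
import torch.nn as nn

dim = 8

class BinaryOp(nn.Module):
    """A learnable logical operation on pairs of variable embeddings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=-1))

AND, OR = BinaryOp(), BinaryOp()
NOT = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

a, b, c = (torch.randn(dim) for _ in range(3))   # embeddings of propositions
expr = OR(AND(a, b), NOT(c))                     # embedding of (a AND b) OR (NOT c)
print(expr.shape)                                # torch.Size([8])
```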
arXiv Detail & Related papers (2020-08-20T14:53:23Z)
- Towards Understanding Hierarchical Learning: Benefits of Neural Representations [160.33479656108926]
In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks.
We show that neural representation can achieve improved sample complexities compared with the raw input.
Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
arXiv Detail & Related papers (2020-06-24T02:44:54Z)