Extensions to Generalized Annotated Logic and an Equivalent Neural
Architecture
- URL: http://arxiv.org/abs/2302.12195v1
- Date: Thu, 23 Feb 2023 17:39:46 GMT
- Title: Extensions to Generalized Annotated Logic and an Equivalent Neural
Architecture
- Authors: Paulo Shakarian, Gerardo I. Simari
- Abstract summary: We propose a list of desirable criteria for neuro symbolic systems and examine how some of the existing approaches address these criteria.
We then propose an extension to generalized annotated logic that allows for the creation of an equivalent neural architecture.
Unlike previous approaches that rely on continuous optimization for the training process, our framework is designed as a binarized neural network that uses discrete optimization.
- Score: 4.855957436171202
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While deep neural networks have led to major advances in image recognition,
language translation, data mining, and game playing, there are well-known
limits to the paradigm such as lack of explainability, difficulty of
incorporating prior knowledge, and modularity. Neuro symbolic hybrid systems
have recently emerged as a straightforward way to extend deep neural networks
by incorporating ideas from symbolic reasoning such as computational logic. In
this paper, we propose a list of desirable criteria for neuro symbolic systems and
examine how some of the existing approaches address these criteria. We then
propose an extension to generalized annotated logic that allows for the
creation of an equivalent neural architecture comprising an alternate neuro
symbolic hybrid. However, unlike previous approaches that rely on continuous
optimization for the training process, our framework is designed as a binarized
neural network that uses discrete optimization. We provide proofs of
correctness and discuss several of the challenges that must be overcome to
realize this framework in an implemented system.
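The correspondence the abstract describes, rules of an annotated logic realized as a binarized network, can be sketched roughly as follows. This is an illustrative sketch only: the function names, the min-based annotation function, and the threshold mapping are assumptions for exposition, not the paper's actual construction.

```python
# Illustrative sketch (not the paper's implementation): in generalized
# annotated logic, each atom carries an annotation (e.g. a truth value in
# [0, 1]) and a rule's head annotation is computed by an annotation
# function over the body annotations. A binarized-network analogue uses
# {0, 1} weights and a threshold instead of continuous activations.

def rule_head_annotation(body_annotations, annotation_fn=min):
    """Annotated rule: the head's annotation is annotation_fn applied to
    the body atoms' annotations. Using min corresponds to a conjunctive
    (t-norm style) combination."""
    return annotation_fn(body_annotations)

def binarized_neuron(inputs, weights, threshold):
    """Binarized unit: {0, 1} inputs and weights; fires iff the weighted
    count of active inputs reaches the threshold."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

# A rule "h :- a, b" requiring every body atom maps to a unit whose
# threshold equals the number of body atoms:
assert binarized_neuron([1, 1], [1, 1], threshold=2) == 1
assert binarized_neuron([1, 0], [1, 1], threshold=2) == 0
```

Because the weights and activations are discrete, training such a unit is a combinatorial (discrete optimization) problem rather than a gradient-descent one, which is the design trade-off the abstract highlights.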
Related papers
- Neuro-symbolic Learning Yielding Logical Constraints [22.649543443988712]
End-to-end learning of neuro-symbolic systems is still an unsolved challenge.
We propose a framework that fuses network training, symbol grounding, and logical constraint synthesis into an end-to-end learning process.
arXiv Detail & Related papers (2024-10-28T12:18:25Z)
- NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions, however they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parsing framework that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
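The fuzzy-logic continuous relaxation mentioned above can be sketched in a few lines. This is a generic illustration of the technique, not LOGICSEG's actual formulation: the product t-norm and the implication rewriting are common choices assumed here for exposition.

```python
# Illustrative sketch of fuzzy-logic continuous relaxation: logical
# connectives are replaced by differentiable operations on truth values
# in [0, 1], so a formula grounded on network outputs yields a
# penalty that can be added to the training loss.

def fuzzy_and(a, b):
    """Product t-norm relaxation of conjunction."""
    return a * b

def fuzzy_or(a, b):
    """Probabilistic sum (the dual of the product t-norm)."""
    return a + b - a * b

def fuzzy_implies(a, b):
    """Relaxation of a -> b via (not a) or b."""
    return fuzzy_or(1.0 - a, b)

# Example constraint "pixel is 'cat' implies pixel is 'animal'": its
# fuzzy truth value is differentiable in the predicted scores, so
# (1 - truth) serves as a logic-induced loss term during training.
p_cat, p_animal = 0.9, 0.7
truth = fuzzy_implies(p_cat, p_animal)
loss_penalty = 1.0 - truth
```

Because every connective is smooth in the class scores, gradients of the penalty flow back into the segmentation network, which is what makes logic-induced training possible.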
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- A Semantic Framework for Neuro-Symbolic Computing [0.36832029288386137]
We provide a formal definition of semantic encoding, specifying the components and conditions under which a knowledge-base can be encoded.
We show that many neuro-symbolic approaches are accounted for by this definition.
This is expected to provide guidance for future neuro-symbolic encodings by placing them in the broader context of the semantic encoding of entire families of existing neuro-symbolic systems.
arXiv Detail & Related papers (2022-12-22T22:00:58Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Knowledge-based Analogical Reasoning in Neuro-symbolic Latent Spaces [20.260546238369205]
We propose a framework that combines the pattern recognition abilities of neural networks with symbolic reasoning and background knowledge.
We take inspiration from the 'neural algorithmic reasoning' approach [DeepMind 2020] and use problem-specific background knowledge.
We test this on visual analogy problems in Raven's Progressive Matrices and achieve accuracy competitive with human performance.
arXiv Detail & Related papers (2022-09-19T04:03:20Z)
- Gaussian Process Surrogate Models for Neural Networks [6.8304779077042515]
In science and engineering, modeling is a methodology used to understand complex systems whose internal processes are opaque.
We construct a class of surrogate models for neural networks using Gaussian processes.
We demonstrate that our approach captures existing phenomena related to the spectral bias of neural networks, and then show that our surrogate models can be used to solve practical problems.
arXiv Detail & Related papers (2022-08-11T20:17:02Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine the interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning [49.6928533575956]
We use neural inference to mediate between the neural System 1 and the logical System 2.
Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
arXiv Detail & Related papers (2021-07-06T17:59:49Z)
- Learn Like The Pro: Norms from Theory to Size Neural Computation [3.848947060636351]
We investigate how dynamical systems with nonlinearities can inform the design of neural systems that seek to emulate them.
We propose a Learnability metric and relate its associated features to the near-equilibrium behavior of learning dynamics.
It reveals exact sizing for a class of neural networks with multiplicative nodes that mimic continuous- or discrete-time dynamics.
arXiv Detail & Related papers (2021-06-21T20:58:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.