Knowledge-based Analogical Reasoning in Neuro-symbolic Latent Spaces
- URL: http://arxiv.org/abs/2209.08750v1
- Date: Mon, 19 Sep 2022 04:03:20 GMT
- Title: Knowledge-based Analogical Reasoning in Neuro-symbolic Latent Spaces
- Authors: Vishwa Shah, Aditya Sharma, Gautam Shroff, Lovekesh Vig, Tirtharaj
Dash, Ashwin Srinivasan
- Abstract summary: We propose a framework that combines the pattern recognition abilities of neural networks with symbolic reasoning and background knowledge.
We take inspiration from the 'neural algorithmic reasoning' approach [DeepMind 2020] and use problem-specific background knowledge.
We test this on visual analogy problems in RAVEN's Progressive Matrices, and achieve accuracy competitive with human performance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Analogical Reasoning problems challenge both connectionist and symbolic AI
systems, as these entail a combination of background knowledge, reasoning and
pattern recognition. While symbolic systems ingest explicit domain knowledge
and perform deductive reasoning, they are sensitive to noise and require inputs
to be mapped to preset symbolic features. Connectionist systems, on the other hand,
can directly ingest rich input spaces such as images, text or speech and
recognize patterns even with noisy inputs. However, connectionist models
struggle to include explicit domain knowledge for deductive reasoning. In this
paper, we propose a framework that combines the pattern recognition abilities
of neural networks with symbolic reasoning and background knowledge for solving
a class of Analogical Reasoning problems where the set of attributes and
possible relations across them are known a priori. We take inspiration from the
'neural algorithmic reasoning' approach [DeepMind 2020] and use
problem-specific background knowledge by (i) learning a distributed
representation based on a symbolic model of the problem (ii) training
neural-network transformations reflective of the relations involved in the
problem and finally (iii) training a neural network encoder from images to the
distributed representation in (i). These three elements enable us to perform
search-based reasoning using neural networks as elementary functions
manipulating distributed representations. We test this on visual analogy
problems in RAVEN's Progressive Matrices, and achieve accuracy competitive with
human performance and, in certain cases, superior to initial end-to-end
neural-network-based approaches. While recent neural models trained at scale
yield SOTA, our novel neuro-symbolic reasoning approach is a promising
direction for this problem, and is arguably more general, especially for
problems where domain knowledge is available.
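The three elements above amount to a search over relations carried out entirely in latent space: learned transformations stand in for symbolic rules, and candidate answers are scored by distance to the extrapolated panel. The sketch below illustrates that pipeline shape only; the relation maps, dimensionality, and relation names are placeholder assumptions, not the paper's trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # assumed dimensionality of the distributed representation

# (ii) One transformation per known relation. In the paper these are
# trained neural networks; here a fixed orthogonal map stands in.
def make_relation():
    q, _ = np.linalg.qr(rng.normal(size=(DIM, DIM)))
    return lambda z, q=q: q @ z

relations = {"progression": make_relation(), "constant": lambda z: z}

def apply_rule(name, z):
    """Apply a relation's transformation in latent space."""
    return relations[name](z)

def solve_by_search(row_latents, candidates):
    """Search-based reasoning: find the relation that best explains the
    first two panels of a row, extrapolate it to the third panel, and
    return the index of the nearest answer candidate."""
    z1, z2 = row_latents
    best_rel = min(relations,
                   key=lambda r: np.linalg.norm(apply_rule(r, z1) - z2))
    predicted = apply_rule(best_rel, z2)  # extrapolated third panel
    dists = [np.linalg.norm(predicted - c) for c in candidates]
    return int(np.argmin(dists)), best_rel

# Toy check: a row governed by "progression" (latents stand in for the
# image encoder's output in step (iii)) with the answer among distractors.
z1 = rng.normal(size=DIM)
z2 = apply_rule("progression", z1)
answer = apply_rule("progression", z2)
candidates = [rng.normal(size=DIM), answer, rng.normal(size=DIM)]
idx, rel = solve_by_search((z1, z2), candidates)
```

In this toy setting the search recovers both the governing relation and the correct candidate exactly, since the extrapolated latent coincides with the answer; with a trained encoder and noisy inputs, the nearest-candidate step is what makes the scheme robust.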
Related papers
- Exploring knowledge graph-based neural-symbolic system from application perspective [0.0]
Achieving human-like reasoning and interpretability in AI systems remains a substantial challenge.
The Neural-Symbolic paradigm, which integrates neural networks with symbolic systems, presents a promising pathway toward more interpretable AI.
This paper explores recent advancements in neural-symbolic integration based on Knowledge Graphs.
arXiv Detail & Related papers (2024-05-06T14:40:50Z) - Aligning Knowledge Graphs Provided by Humans and Generated from Neural Networks in Specific Tasks [5.791414814676125]
This paper develops an innovative method that enables neural networks to generate and utilize knowledge graphs.
Our approach eschews traditional dependencies on word embedding models, instead mining concepts from neural networks and directly aligning them with human knowledge.
Experiments show that our method consistently captures network-generated concepts that align closely with human knowledge and can even uncover new, useful concepts not previously identified by humans.
arXiv Detail & Related papers (2024-04-23T20:33:17Z) - Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to this issue is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z) - Reasoning Algorithmically in Graph Neural Networks [1.8130068086063336]
We aim to integrate the structured and rule-based reasoning of algorithms with adaptive learning capabilities of neural networks.
This dissertation provides theoretical and practical contributions to this area of research.
arXiv Detail & Related papers (2024-02-21T12:16:51Z) - NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions, however they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z) - Extensions to Generalized Annotated Logic and an Equivalent Neural Architecture [4.855957436171202]
We propose a list of desirable criteria for neuro-symbolic systems and examine how some of the existing approaches address these criteria.
We then propose an extension to annotated generalized logic that allows for the creation of an equivalent neural architecture.
Unlike previous approaches that rely on continuous optimization for the training process, our framework is designed as a binarized neural network that uses discrete optimization.
arXiv Detail & Related papers (2023-02-23T17:39:46Z) - Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z) - A neural network model of perception and reasoning [0.0]
We show that a simple set of biologically consistent organizing principles confer these capabilities to neuronal networks.
We implement these principles in a novel machine learning algorithm, based on concept construction instead of optimization, to design deep neural networks that reason with explainable neuron activity.
arXiv Detail & Related papers (2020-02-26T06:26:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.