A Neuro-vector-symbolic Architecture for Solving Raven's Progressive
Matrices
- URL: http://arxiv.org/abs/2203.04571v1
- Date: Wed, 9 Mar 2022 08:29:21 GMT
- Title: A Neuro-vector-symbolic Architecture for Solving Raven's Progressive
Matrices
- Authors: Michael Hersche, Mustafa Zeqiri, Luca Benini, Abu Sebastian, Abbas
Rahimi
- Abstract summary: A neuro-vector-symbolic architecture (NVSA) is proposed to combine the best of deep neural networks and symbolic logical reasoning.
We show that NVSA achieves a new record of 97.7% average accuracy on the RAVEN dataset and 98.8% on I-RAVEN, with execution two orders of magnitude faster than symbolic logical reasoning on CPUs.
- Score: 15.686742809374024
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neither deep neural networks nor symbolic AI alone has approached the kind
of intelligence expressed in humans. This is mainly because neural networks are
not able to decompose distinct objects from their joint representation (the
so-called binding problem), while symbolic AI suffers from exhaustive rule
searches, among other problems. These two problems remain pronounced in
neuro-symbolic AI, which aims to combine the best of the two paradigms. Here, we
show that the two problems can be addressed with our proposed
neuro-vector-symbolic architecture (NVSA) by exploiting its powerful operators
on fixed-width holographic vectorized representations that serve as a common
language between neural networks and symbolic logical reasoning. The efficacy
of NVSA is demonstrated by solving Raven's progressive matrices. NVSA achieves
a new record of 97.7% average accuracy on the RAVEN dataset and 98.8% on
I-RAVEN, with execution two orders of magnitude faster than symbolic logical
reasoning on CPUs.
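To make the abstract's "powerful operators on fixed-width holographic vectorized representations" concrete, here is a minimal sketch of the two core VSA operations, binding and bundling, using circular convolution as in Holographic Reduced Representations. This is an illustrative toy, not the authors' implementation; the vector width, the attribute names, and the similarity test are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # vector width; fixed for all symbols (an illustrative choice)

def random_hv():
    """Random holographic vector with unit expected norm."""
    return rng.normal(0.0, 1.0 / np.sqrt(D), D)

def bind(a, b):
    """Binding via circular convolution (Holographic Reduced Representations)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    """Approximate inverse of binding: correlate c with a to recover b."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(a))))

def bundle(*vs):
    """Bundling via elementwise addition (superposition of vectors)."""
    return np.sum(vs, axis=0)

# Encode attribute-value pairs of one hypothetical Raven's panel:
# shape=triangle, color=blue (names are illustrative).
shape, color = random_hv(), random_hv()
triangle, blue = random_hv(), random_hv()
panel = bundle(bind(shape, triangle), bind(color, blue))

# Query the joint representation: unbinding 'shape' is most similar to 'triangle'.
probe = unbind(panel, shape)
cos = lambda x, y: x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print(cos(probe, triangle), cos(probe, blue))  # high vs. near-zero similarity
```

Because a bound pair is nearly orthogonal to both of its constituents, several attribute-value pairs can share one fixed-width vector and still be queried individually, which is what allows a neural frontend and a symbolic backend to exchange structured information.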
Related papers
- Compositional Generalization Across Distributional Shifts with Sparse Tree Operations [77.5742801509364]
We introduce a unified neurosymbolic architecture called the Differentiable Tree Machine.
We significantly increase the model's efficiency through the use of sparse vector representations of symbolic structures.
We enable its application beyond the restricted set of tree2tree problems to the more general class of seq2seq problems.
arXiv Detail & Related papers (2024-12-18T17:20:19Z)
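The Differentiable Tree Machine above operates on differentiable tensor product representations, which are not reproduced here. As a rough stand-in, the sketch below shows one common way to pack a symbolic tree into a single fixed-width vector by binding each node's symbol to its path of positional roles; the bipolar encoding and all names are illustrative assumptions, not the DTM's scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4096  # hypervector width (an illustrative choice)

def hv():
    """Random bipolar hypervector; binding below is elementwise multiply."""
    return rng.choice([-1.0, 1.0], D)

LEFT, RIGHT = hv(), hv()               # positional roles for child slots
symbols = {s: hv() for s in ["+", "2", "3"]}

def encode(tree, path=None):
    """Encode a leaf symbol or a (symbol, left, right) tuple as a superposition
    of fillers, each bound to the product of roles on its root-to-node path."""
    path = np.ones(D) if path is None else path
    if isinstance(tree, str):
        return path * symbols[tree]
    sym, left, right = tree
    return (path * symbols[sym]
            + encode(left, path * LEFT)
            + encode(right, path * RIGHT))

v = encode(("+", "2", "3"))            # the tree for 2 + 3
probe = v * LEFT                       # unbind LEFT (it is its own inverse)
print(max(symbols, key=lambda s: probe @ symbols[s]))  # '2'
```

Binding with bipolar vectors is self-inverse, so querying a tree position costs one multiply; a sparse tensor-product scheme plays a similar role while remaining differentiable end to end.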
- Formal Explanations for Neuro-Symbolic AI [28.358183683756028]
This paper proposes a formal approach to explaining the decisions of neuro-symbolic systems.
It first computes a formal explanation for the symbolic component of the system, which serves to identify a subset of the individual parts of neural information that needs to be explained.
This is followed by explaining only those individual neural inputs, independently of each other, which facilitates succinctness of hierarchical formal explanations.
arXiv Detail & Related papers (2024-10-18T07:08:31Z)
- Towards Efficient Neuro-Symbolic AI: From Workload Characterization to Hardware Architecture [22.274696991107206]
Neuro-symbolic AI emerges as a promising paradigm, fusing neural and symbolic approaches to enhance interpretability, robustness, and trustworthiness.
Recent neuro-symbolic systems have demonstrated great potential in collaborative human-AI scenarios with reasoning and cognitive capabilities.
We first systematically categorize neuro-symbolic AI algorithms, and then experimentally evaluate and analyze them in terms of runtime, memory, computational operators, sparsity, and system characteristics.
arXiv Detail & Related papers (2024-09-20T01:32:14Z)
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability to neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
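LinSATNet's layer is described as an extension of the classic Sinkhorn algorithm, and the classic version is compact enough to sketch. Below is the standard alternating row/column normalization, which pushes a positive matrix toward a doubly stochastic one; the paper's joint encoding of multiple marginal sets is not reproduced here, and the example matrix is an assumption.

```python
import numpy as np

def sinkhorn(S, iters=50, eps=1e-9):
    """Classic Sinkhorn normalization: alternately rescale rows and columns
    of a positive matrix until both sets of marginals are (near-)uniform."""
    P = np.asarray(S, dtype=float)
    for _ in range(iters):
        P = P / (P.sum(axis=1, keepdims=True) + eps)  # rows sum to 1
        P = P / (P.sum(axis=0, keepdims=True) + eps)  # columns sum to 1
    return P

# A positive score matrix, e.g. exp(logits) from a network head.
logits = np.array([[2.0, 0.1, 0.3], [0.2, 1.5, 0.4], [0.3, 0.2, 1.8]])
P = sinkhorn(np.exp(logits))
print(P.round(3))          # close to a doubly stochastic (soft permutation) matrix
print(P.sum(0), P.sum(1))  # both marginals ~ 1
```

Because every step is a differentiable rescaling, the iteration can sit inside a network and be trained end to end, which is the property a satisfiability layer builds on.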
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
This article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- Hyperdimensional Computing with Spiking-Phasor Neurons [0.9594432031144714]
Vector Symbolic Architectures (VSAs) are a powerful framework for representing compositional reasoning.
We run VSA algorithms on a substrate of spiking neurons that could be implemented efficiently on neuromorphic hardware.
arXiv Detail & Related papers (2023-02-28T20:09:12Z)
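The spiking-phasor entry above builds on a phasor flavor of VSA in which every vector component is a phase, so binding reduces to phase addition, something a phase-coded spiking neuron can realize as a spike-time offset. Below is a minimal non-spiking sketch of that phasor algebra; the paper's neuron model is not reproduced, and the sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 1024  # number of phasor components (illustrative)

def phasor():
    """Random phasor hypervector: unit-magnitude complex entries."""
    return np.exp(1j * rng.uniform(0, 2 * np.pi, D))

def bind(a, b):
    """Binding = elementwise complex multiplication, i.e. phase addition."""
    return a * b

def unbind(c, a):
    """Exact inverse: multiply by the complex conjugate (phase subtraction)."""
    return c * np.conj(a)

sim = lambda x, y: np.real(np.vdot(x, y)) / len(x)  # 1 if identical, ~0 if random

a, b = phasor(), phasor()
c = bind(a, b)
print(sim(unbind(c, a), b))  # ~1.0: b is recovered exactly
print(sim(c, a), sim(c, b))  # ~0: the binding resembles neither input
```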
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is challenging to train SNNs efficiently because the spiking mechanism is non-differentiable.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
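The non-differentiability mentioned above comes from the hard spike threshold, a Heaviside step whose true derivative is zero almost everywhere. DSR's own remedy is not reproduced here; the sketch below instead shows the widely used surrogate-gradient workaround, just to make the obstacle concrete: the forward pass keeps the hard step while the backward pass substitutes a smooth derivative.

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth surrogate in the backward
    pass. This is the common surrogate-gradient trick, not the paper's DSR."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0).float()  # spike if membrane potential crosses threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Derivative of a fast sigmoid as a stand-in for the Dirac delta.
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2

v = torch.randn(5, requires_grad=True)
spikes = SpikeSurrogate.apply(v - 0.5)  # threshold at 0.5 (illustrative)
spikes.sum().backward()
print(spikes, v.grad)  # gradients flow despite the hard threshold
```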
- Neural-Symbolic Solver for Math Word Problems with Auxiliary Tasks [130.70449023574537]
Our NS-Solver consists of a problem reader to encode problems, a programmer to generate symbolic equations, and a symbolic executor to obtain answers.
Along with target expression supervision, our solver is also optimized via four new auxiliary objectives that enforce different aspects of symbolic reasoning.
arXiv Detail & Related papers (2021-07-03T13:14:58Z)
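The reader-programmer-executor split above is easy to picture: the neural components emit an equation as a token sequence, and a deterministic executor evaluates it. Below is a toy stand-in for the executor stage, with the "programmer" faked as a fixed prefix-notation token list; the problem text, tokens, and notation are illustrative assumptions, not the paper's interface.

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def execute(tokens):
    """Evaluate a prefix-notation equation emitted by the 'programmer'.
    Returns the answer the solver would report."""
    it = iter(tokens)

    def parse():
        tok = next(it)
        if tok in OPS:
            return OPS[tok](parse(), parse())
        return float(tok)

    return parse()

# 'Programmer' output for: "Tom has 3 bags of 5 apples and eats 2. How many are left?"
tokens = ["-", "*", "3", "5", "2"]
print(execute(tokens))  # 13.0
```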
- Text Classification based on Multi-granularity Attention Hybrid Neural Network [4.718408602093766]
We propose a hybrid architecture based on a novel hierarchical multi-granularity attention mechanism, named Multi-granularity Attention-based Hybrid Neural Network (MahNN).
The attention mechanism assigns different weights to different parts of the input sequence to increase the computational efficiency and performance of neural models.
arXiv Detail & Related papers (2020-08-12T13:02:48Z)
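"Assigning different weights to different parts of the input sequence" is exactly what standard scaled dot-product attention computes; a minimal version follows. MahNN's hierarchical, multi-granularity variant is built from pieces like this, though its exact form is not reproduced here and the shapes are assumptions.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: weight each value by how well its key
    matches the query, with weights normalized by a softmax."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)  # attention weights, rows sum to 1
    return w @ V, w

rng = np.random.default_rng(3)
Q = rng.standard_normal((1, 8))    # one query
K = rng.standard_normal((5, 8))    # five input positions
V = rng.standard_normal((5, 16))
out, w = attention(Q, K, V)
print(w.round(3))  # how much each of the 5 positions contributes
```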
- Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning [134.77207192945053]
Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm which mimics the top-down, human-like learning procedure to propagate the error.
arXiv Detail & Related papers (2020-06-11T17:42:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.