The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning
- URL: http://arxiv.org/abs/2402.01889v1
- Date: Fri, 2 Feb 2024 20:33:14 GMT
- Title: The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning
- Authors: Daniel Cunnington, Mark Law, Jorge Lobo, Alessandra Russo
- Abstract summary: Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
New architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
- Score: 54.56905063752427
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI
systems, as interpretable symbolic techniques provide formal behaviour
guarantees. The challenge is how to effectively integrate neural and symbolic
computation, to enable learning and reasoning from raw data. Existing pipelines
that train the neural and symbolic components sequentially require extensive
labelling, whereas end-to-end approaches are limited in terms of scalability,
due to the combinatorial explosion in the symbol grounding problem. In this
paper, we leverage the implicit knowledge within foundation models to enhance
the performance in NeSy tasks, whilst reducing the amount of data labelling and
manual engineering. We introduce a new architecture, called NeSyGPT, which
fine-tunes a vision-language foundation model to extract symbolic features from
raw data, before learning a highly expressive answer set program to solve a
downstream task. Our comprehensive evaluation demonstrates that NeSyGPT has
superior accuracy over various baselines, and can scale to complex NeSy tasks.
Finally, we highlight the effective use of a large language model to generate
the programmatic interface between the neural and symbolic components,
significantly reducing the amount of manual engineering required.
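The two-stage pipeline described in the abstract can be sketched in miniature. The symbol extractor below stands in for the fine-tuned vision-language model, and a brute-force search over a tiny hypothesis space stands in for the answer set program learner; both are hypothetical simplifications, not the paper's implementation:

```python
def extract_symbols(raw):
    """Stub for the neural component: map raw inputs to symbol labels.
    Here 'raw' is already a string tag; a real system classifies pixels."""
    vocab = {"img_zero": 0, "img_one": 1, "img_two": 2}
    return vocab[raw]

def learn_rule(examples):
    """Stub for the symbolic learner: search a tiny hypothesis space of
    rules relating two extracted symbols to the downstream task label."""
    hypotheses = {
        "sum": lambda a, b: a + b,
        "product": lambda a, b: a * b,
        "max": lambda a, b: max(a, b),
    }
    for name, fn in hypotheses.items():
        if all(fn(extract_symbols(x), extract_symbols(y)) == label
               for (x, y), label in examples):
            return name, fn
    raise ValueError("no consistent rule")

examples = [(("img_one", "img_two"), 3), (("img_two", "img_two"), 4)]
name, rule = learn_rule(examples)
print(name)  # -> sum
```

The key design point the paper argues for is that the two components communicate only through discrete symbols, so the symbolic learner never sees raw data and the neural extractor never needs task-level labels beyond what fine-tuning requires.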
Related papers
- Towards Efficient Neuro-Symbolic AI: From Workload Characterization to Hardware Architecture [22.274696991107206]
Neuro-symbolic AI emerges as a promising paradigm, fusing neural and symbolic approaches to enhance interpretability, robustness, and trustworthiness.
Recent neuro-symbolic systems have demonstrated great potential in collaborative human-AI scenarios with reasoning and cognitive capabilities.
We first systematically categorize neuro-symbolic AI algorithms, and then experimentally evaluate and analyze them in terms of runtime, memory, computational operators, sparsity, and system characteristics.
arXiv Detail & Related papers (2024-09-20T01:32:14Z)
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to this issue is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions, however they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z)
- Neuro-Symbolic Approaches for Context-Aware Human Activity Recognition [0.7734726150561088]
We propose a novel approach based on a semantic loss function that infuses knowledge constraints in the Human Activity Recognition model during the training phase.
Our results on scripted and in-the-wild datasets show the impact of different semantic loss functions in outperforming a purely data-driven model.
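One common instantiation of a semantic loss, used here as an illustrative assumption rather than this paper's exact formulation, penalises probability mass assigned to outputs that violate a logical constraint. For a mutual-exclusivity ("exactly one class active") constraint over independent class probabilities, the loss is the negative log of the total probability of satisfying assignments:

```python
import math

def semantic_loss_exactly_one(p):
    """Semantic loss for the 'exactly one class is active' constraint:
    -log sum_i [ p_i * prod_{j != i} (1 - p_j) ],
    where p holds independent Bernoulli probabilities, one per class."""
    sat = 0.0
    for i, pi in enumerate(p):
        term = pi
        for j, pj in enumerate(p):
            if j != i:
                term *= (1.0 - pj)
        sat += term
    return -math.log(sat)

# A distribution concentrated on one class satisfies the constraint with
# high probability, so it incurs a lower loss than a diffuse one.
sharp = semantic_loss_exactly_one([0.97, 0.01, 0.02])
diffuse = semantic_loss_exactly_one([0.5, 0.5, 0.5])
```

Because the loss is differentiable in `p`, it can be added to a standard supervised loss during training, which is how such knowledge constraints are typically injected into an otherwise data-driven model.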
arXiv Detail & Related papers (2023-06-08T09:23:09Z)
- Symbolic Synthesis of Neural Networks [0.0]
I present Graph-based Symbolically Synthesized Neural Networks (GSSNNs).
GSSNNs are a form of neural network whose topology and parameters are informed by the output of a symbolic program.
I demonstrate that by developing symbolic abstractions at a population level, I can elicit reliable patterns of improved generalization with small quantities of data known to contain local and discrete features.
arXiv Detail & Related papers (2023-03-06T18:13:14Z)
- Persistence-based operators in machine learning [62.997667081978825]
We introduce a class of persistence-based neural network layers.
Persistence-based layers allow the users to easily inject knowledge about symmetries respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
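Persistence pairs are the raw material such layers consume. As an illustrative sketch, not the paper's construction, here is 0-dimensional sublevel-set persistence of a 1-D signal; the resulting (birth, death) pairs are the kind of topological summary a persistence-based layer could then weight with learnable parameters:

```python
def sublevel_persistence_pairs(f):
    """0-dimensional persistence pairs of the sublevel-set filtration of a
    1-D signal f on a path graph, using the elder rule. The component of
    the global minimum never dies, so it yields no pair."""
    n = len(f)
    parent = [-1] * n   # -1: vertex not yet in the filtration
    birth = [0.0] * n   # birth value of the component rooted at each index

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for v in sorted(range(n), key=lambda i: f[i]):
        roots = {find(u) for u in (v - 1, v + 1)
                 if 0 <= u < n and parent[u] != -1}
        if not roots:
            parent[v] = v          # local minimum: a new component is born
            birth[v] = f[v]
        else:
            ordered = sorted(roots, key=lambda r: birth[r])
            elder = ordered[0]
            parent[v] = elder
            for r in ordered[1:]:  # younger components die at this merge
                pairs.append((birth[r], f[v]))
                parent[r] = elder
    return sorted(pairs)

print(sublevel_persistence_pairs([0, 2, 1, 3]))  # -> [(1, 2)]
```

The output is invariant under any symmetry of the input that preserves sublevel sets, which is the sense in which such layers let users inject knowledge about symmetries respected by the data.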
arXiv Detail & Related papers (2022-12-28T18:03:41Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- SLASH: Embracing Probabilistic Circuits into Neural Answer Set Programming [15.814914345000574]
We introduce SLASH, a novel deep probabilistic programming language (DPPL).
At its core, SLASH consists of Neural-Probabilistic Predicates (NPPs) and logical programs which are united via answer set programming.
We evaluate SLASH on the benchmark data of MNIST addition as well as novel tasks for DPPLs such as missing data prediction and set prediction with state-of-the-art performance.
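MNIST addition, the benchmark mentioned above, asks for the sum of two digit images. A minimal sketch of the probabilistic-logical core (marginalising the addition program over the two neural digit distributions; a simplification of, not a substitute for, SLASH's NPP machinery):

```python
def sum_distribution(p1, p2):
    """Given per-digit probabilities p1, p2 (length 10 each) from two
    neural classifiers, return P(sum = s) for s in 0..18 by summing the
    probability of every digit pair consistent with each sum."""
    out = [0.0] * 19
    for d1, q1 in enumerate(p1):
        for d2, q2 in enumerate(p2):
            out[d1 + d2] += q1 * q2
    return out

# Two confident classifiers: first image is '3', second is '5'.
p1 = [0.0] * 10; p1[3] = 1.0
p2 = [0.0] * 10; p2[5] = 1.0
dist = sum_distribution(p1, p2)
print(dist[8])  # -> 1.0
```

Training supervises only the sum, so the gradient of the sum's likelihood is what teaches the digit classifiers, without any per-digit labels.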
arXiv Detail & Related papers (2021-10-07T12:35:55Z)
- Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning [134.77207192945053]
Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm which mimics the top-down human-like learning procedure to propagate the error.
arXiv Detail & Related papers (2020-06-11T17:42:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.