Deciphering Raw Data in Neuro-Symbolic Learning with Provable Guarantees
- URL: http://arxiv.org/abs/2308.10487v2
- Date: Tue, 23 Jan 2024 08:18:27 GMT
- Title: Deciphering Raw Data in Neuro-Symbolic Learning with Provable Guarantees
- Authors: Lue Tao, Yu-Xuan Huang, Wang-Zhou Dai, Yuan Jiang
- Abstract summary: Neuro-symbolic hybrid systems are promising for integrating machine learning and symbolic reasoning.
It remains unclear why a hybrid system succeeds for a specific task and when it may fail given a different knowledge base.
We introduce a novel way of characterising supervision signals from a knowledge base, and establish a criterion for determining the knowledge's efficacy in facilitating successful learning.
- Score: 17.58485742162185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neuro-symbolic hybrid systems are promising for integrating machine learning
and symbolic reasoning, where perception models are facilitated with
information inferred from a symbolic knowledge base through logical reasoning.
Despite empirical evidence showing the ability of hybrid systems to learn
accurate perception models, the theoretical understanding of learnability is
still lacking. Hence, it remains unclear why a hybrid system succeeds for a
specific task and when it may fail given a different knowledge base. In this
paper, we introduce a novel way of characterising supervision signals from a
knowledge base, and establish a criterion for determining the knowledge's
efficacy in facilitating successful learning. This, for the first time, allows
us to address the two questions above by inspecting the knowledge base under
investigation. Our analysis suggests that many knowledge bases satisfy the
criterion, thus enabling effective learning, while some fail to satisfy it,
indicating potential failures. Comprehensive experiments confirm the utility of
our criterion on benchmark tasks.
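For intuition, the minimal sketch below illustrates how a symbolic knowledge base can turn a coarse label into a supervision signal for a perception model, using the two-digit addition task that commonly appears as a neuro-symbolic benchmark. The kb_consistent, abduce_labels, and ambiguity helpers are illustrative assumptions for this example only; they are not the characterisation of supervision signals or the efficacy criterion proposed in the paper.

```python
from itertools import product

# Toy symbolic knowledge base for the digit-addition benchmark:
# a pair of digit images is labelled only with the sum of its digits.
def kb_consistent(digits, observed_sum):
    """Knowledge-base rule: the latent digits must add up to the observed label."""
    return sum(digits) == observed_sum

def abduce_labels(observed_sum, num_classes=10, arity=2):
    """Return every latent label assignment consistent with the knowledge base.

    This candidate set is the supervision signal that reasoning hands to the
    perception model: the smaller it is, the more informative the knowledge base.
    """
    return [
        assignment
        for assignment in product(range(num_classes), repeat=arity)
        if kb_consistent(assignment, observed_sum)
    ]

def ambiguity(observed_sum, num_classes=10, arity=2):
    """Crude measure of how ambiguous the abduced supervision is:
    the fraction of all possible assignments the knowledge base leaves consistent."""
    candidates = abduce_labels(observed_sum, num_classes, arity)
    return len(candidates) / (num_classes ** arity)

if __name__ == "__main__":
    for s in (0, 9, 18):
        cands = abduce_labels(s)
        print(f"sum={s:2d}: {len(cands):2d} candidate digit pairs, "
              f"ambiguity={ambiguity(s):.3f}")
```

The point of the sketch is that the knowledge base replaces a missing ground-truth label with a set of candidate labels consistent with its rules; the less ambiguous that set, the stronger the supervision it provides, which is the kind of property a criterion for knowledge efficacy needs to capture.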
Related papers
- Chain-of-Knowledge: Integrating Knowledge Reasoning into Large Language Models by Learning from Knowledge Graphs [55.317267269115845]
Chain-of-Knowledge (CoK) is a comprehensive framework for knowledge reasoning.
CoK includes methodologies for both dataset construction and model learning.
We conduct extensive experiments with KnowReason.
arXiv Detail & Related papers (2024-06-30T10:49:32Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Worth of knowledge in deep learning [3.132595571344153]
We present a framework inspired by interpretable machine learning to evaluate the worth of knowledge.
Our findings elucidate the complex relationship between data and knowledge, including dependence, synergy, and substitution effects.
Our model-agnostic framework can be applied to a variety of common network architectures, providing a comprehensive understanding of the role of prior knowledge in deep learning models.
arXiv Detail & Related papers (2023-07-03T02:25:19Z)
- Thrill-K Architecture: Towards a Solution to the Problem of Knowledge Based Understanding [0.9390008801320021]
We introduce a classification of hybrid systems which, based on an analysis of human knowledge and intelligence, combines neural learning with various types of knowledge and knowledge sources.
We present the Thrill-K architecture as a prototypical solution for integrating instantaneous knowledge, standby knowledge and external knowledge sources in a framework capable of inference, learning and intelligent control.
arXiv Detail & Related papers (2023-02-28T20:39:35Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge in vision-language pre-training models and mine its potential for knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
- Understanding Few-Shot Commonsense Knowledge Models [39.31365020474205]
We investigate training commonsense knowledge models in a few-shot setting.
We find that, by human quality ratings, knowledge produced by a few-shot trained system comes within 6% of knowledge produced by fully supervised systems.
arXiv Detail & Related papers (2021-01-01T19:01:09Z)
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge, the setting in which the knowledge required to answer a question is not given or annotated at either training or test time.
We tap into two types of knowledge representations and reasoning. First, implicit knowledge which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.