Abductive Knowledge Induction From Raw Data
- URL: http://arxiv.org/abs/2010.03514v2
- Date: Thu, 20 May 2021 12:45:50 GMT
- Title: Abductive Knowledge Induction From Raw Data
- Authors: Wang-Zhou Dai, Stephen H. Muggleton
- Abstract summary: We present Abductive Meta-Interpretive Learning ($Meta_{Abd}$) that unites abduction and induction to learn neural networks and induce logic theories jointly from raw data.
Experimental results demonstrate that $Meta_{Abd}$ not only outperforms the compared systems in predictive accuracy and data efficiency but also induces logic programs that can be re-used as background knowledge in subsequent learning tasks.
- Score: 12.868722327487752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For many reasoning-heavy tasks involving raw inputs, it is challenging to
design an appropriate end-to-end learning pipeline. Neuro-Symbolic Learning
divides the process into sub-symbolic perception and symbolic reasoning, aiming
to utilise data-driven machine learning and knowledge-driven reasoning
simultaneously. However, such approaches suffer from exponential computational
complexity at the interface between these two components, where the
sub-symbolic learning model lacks direct supervision and the symbolic model
lacks accurate input facts. Hence, most existing methods assume the existence of a
strong symbolic knowledge base and only learn the perception model while
avoiding a crucial problem: where does the knowledge come from? In this paper,
we present Abductive Meta-Interpretive Learning ($Meta_{Abd}$) that unites
abduction and induction to learn neural networks and induce logic theories
jointly from raw data. Experimental results demonstrate that $Meta_{Abd}$ not
only outperforms the compared systems in predictive accuracy and data
efficiency but also induces logic programs that can be re-used as background
knowledge in subsequent learning tasks. To the best of our knowledge,
$Meta_{Abd}$ is the first system that can jointly learn neural networks from
scratch and induce recursive first-order logic theories with predicate
invention.
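To make the abduction-induction loop concrete, here is a minimal, self-contained Python sketch of the training cycle the abstract describes: a perception model scores symbols for each raw input, logical abduction picks the most probable labelling consistent with the logic theory, and the abduced labels act as pseudo-supervision. For readability the logic program is fixed to summation; $Meta_{Abd}$ itself induces the program via Meta-Interpretive Learning, and all names below are illustrative, not the authors' API.

```python
import itertools
import math
import random

# Toy version of the abduction step (illustrative, not the authors' code).
# Task: each example is a sequence of raw inputs whose hidden digit labels
# must sum to an observed target -- the labels themselves are never given.

random.seed(0)

def perceive(x, w):
    """Neural stand-in: softmax over distances to one learned prototype per digit."""
    scores = [-(x - w[d]) ** 2 for d in range(10)]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def abduce_labels(probs_seq, target):
    """Abduction: the most probable label sequence consistent with the logic
    theory. The theory is fixed here to 'output = sum of labels'; Meta_Abd
    itself induces the program with Meta-Interpretive Learning."""
    best, best_p = None, -1.0
    for labels in itertools.product(range(10), repeat=len(probs_seq)):
        if sum(labels) != target:          # consistency check against the theory
            continue
        p = 1.0
        for probs, label in zip(probs_seq, labels):
            p *= probs[label]
        if p > best_p:
            best, best_p = labels, p
    return best

# Training: abduced labels serve as pseudo-supervision for the perception model.
w = [random.uniform(0.0, 9.0) for _ in range(10)]   # one prototype per digit
data = [([2.1, 3.0], 5), ([1.0, 0.9, 4.2], 6)]      # (inputs, observed sum)
for _ in range(50):
    for xs, target in data:
        labels = abduce_labels([perceive(x, w) for x in xs], target)
        for x, label in zip(xs, labels):             # pull prototype toward input
            w[label] += 0.1 * (x - w[label])

print(abduce_labels([perceive(x, w) for x in [2.1, 3.0]], 5))  # sums to 5
```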
Related papers
- Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning [89.89857766491475]
We propose a complex reasoning schema over KGs built on large language models (LLMs).
We augment arbitrary first-order logical queries via binary tree decomposition to stimulate the reasoning capability of LLMs (as sketched below).
Experiments across widely used datasets demonstrate that LACT brings substantial improvements (an average gain of +5.5% MRR) over advanced methods.
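As a rough illustration of the binary tree decomposition mentioned above, the sketch below splits a compound first-order query into a tree whose leaves are atomic sub-queries that could each be posed to an LLM separately; the representation and names are assumptions, not LACT's actual code.

```python
from dataclasses import dataclass
from typing import Optional, Union

# Illustrative binary-tree decomposition of a compound first-order query
# (a guess at the structure, not LACT's actual representation).

@dataclass
class Node:
    op: str                          # 'AND' | 'OR' | 'ATOM'
    atom: Optional[str] = None       # set when op == 'ATOM'
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def decompose(query: Union[str, tuple]) -> Node:
    """A query is either an atomic string or a tuple (op, lhs, rhs)."""
    if isinstance(query, str):
        return Node("ATOM", atom=query)
    op, lhs, rhs = query
    return Node(op, left=decompose(lhs), right=decompose(rhs))

def leaves(node: Node):
    """Post-order walk: the order in which atomic sub-queries would be answered."""
    if node.op == "ATOM":
        yield node.atom
    else:
        yield from leaves(node.left)
        yield from leaves(node.right)

q = ("AND", "win(V?, TuringAward)", ("OR", "citizen(V?, UK)", "citizen(V?, US)"))
print(list(leaves(decompose(q))))
# ['win(V?, TuringAward)', 'citizen(V?, UK)', 'citizen(V?, US)']
```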
arXiv Detail & Related papers (2024-05-02T18:12:08Z)
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task, a pipeline sketched below.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
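To show the pipeline shape this summary describes (and the exponential interface cost the main abstract warns about), here is a hedged sketch in which the task probability marginalises over all symbol assignments the reasoner accepts; it is a generic NeSy construction, not this paper's method.

```python
import itertools

# Generic NeSy forward pass (illustrative, not this paper's implementation).
# A perception net yields a distribution over symbols per input; the reasoner
# decides which symbol assignments entail the target; the task probability
# marginalises over them. The exponential enumeration is exactly the
# interface cost that the Meta_Abd abstract above points out.

def task_probability(probs_seq, entails):
    """P(target) = sum over symbol tuples s of P(s) * [reasoner entails target]."""
    total = 0.0
    for symbols in itertools.product(range(10), repeat=len(probs_seq)):
        if entails(symbols):
            p = 1.0
            for probs, s in zip(probs_seq, symbols):
                p *= probs[s]
            total += p
    return total

def peaked(d):
    """A distribution that puts most mass on digit d."""
    return [0.82 if i == d else 0.02 for i in range(10)]

# Toy: two images the perception net reads as probably 2 and 3; task: sum == 5.
p = task_probability([peaked(2), peaked(3)], lambda s: sum(s) == 5)
print(round(p, 4))   # dominated by the (2, 3) assignment
```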
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z)
- Generating by Understanding: Neural Visual Generation with Logical Symbol Groundings [26.134405924834525]
We propose a neurosymbolic learning approach, Abductive visual Generation (AbdGen), for integrating logic programming systems with neural visual generative models.
Results show that compared to the baseline approaches, AbdGen requires significantly less labeled data for symbol assignment.
AbdGen can effectively learn underlying logical generative rules from data, which is beyond the capability of existing approaches.
arXiv Detail & Related papers (2023-10-26T15:00:21Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training (a generic version of this relaxation is sketched below).
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
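A generic version of such a fuzzy relaxation can be written with the product-style operators below: a symbolic rule becomes a differentiable penalty that is simply added to the training loss. These operators are a common choice, not necessarily LOGICSEG's exact grounding.

```python
# Generic fuzzy relaxation of a symbolic rule into a differentiable penalty
# (product-style operators; a common choice, not LOGICSEG's exact grounding).
# Rule: cat(x) -> animal(x), read as NOT cat(x) OR animal(x).

def f_not(p):
    return 1.0 - p

def f_or(p, q):
    return p + q - p * q            # probabilistic sum

def f_implies(p, q):
    return f_or(f_not(p), q)

def rule_loss(p_cat, p_animal):
    """1 - truth degree: zero when the rule holds, so it can simply be added
    to the segmentation loss and minimised by gradient descent."""
    return 1.0 - f_implies(p_cat, p_animal)

print(rule_loss(0.9, 0.95))  # consistent prediction, small penalty
print(rule_loss(0.9, 0.10))  # violated rule, large penalty pushes the net back
```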
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- Logic Tensor Networks [9.004005678155023]
We present Logic Tensor Networks (LTN), a neurosymbolic formalism and computational model that supports learning and reasoning.
We show that LTN provides a uniform language for the specification and the computation of several AI tasks.
arXiv Detail & Related papers (2020-12-25T22:30:18Z)
- Neural Logic Reasoning [47.622957656745356]
We propose Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, and NOT as neural modules, and conducts propositional logical reasoning through the network for inference (see the sketch below).
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
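The general shape of such neural logic modules, with AND/OR/NOT acting on vector embeddings rather than scalar truth values, might look like the following PyTorch sketch; dimensions, layer sizes, and the De Morgan construction for OR are assumptions for illustration, not LINN's published architecture.

```python
import torch
import torch.nn as nn

# Sketch of logic operations as learnable neural modules over embeddings.
# Dimensions, layer sizes, and the De Morgan construction for OR are
# assumptions for illustration, not LINN's published architecture.

DIM = 64

class NotModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU(), nn.Linear(DIM, DIM))

    def forward(self, v):            # NOT(v): one embedding in, one out
        return self.net(v)

class AndModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * DIM, DIM), nn.ReLU(), nn.Linear(DIM, DIM))

    def forward(self, v, w):         # AND(v, w): concatenate, then transform
        return self.net(torch.cat([v, w], dim=-1))

NOT, AND = NotModule(), AndModule()
v, w = torch.randn(DIM), torch.randn(DIM)
# OR via De Morgan reuses the learned modules: OR(v, w) = NOT(AND(NOT(v), NOT(w))).
# Logical laws (double negation, idempotence, ...) are typically encouraged
# with auxiliary regularisation losses during training.
or_vw = NOT(AND(NOT(v), NOT(w)))
print(or_vw.shape)                   # torch.Size([64])
```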
arXiv Detail & Related papers (2020-08-20T14:53:23Z)
- Relational Neural Machines [19.569025323453257]
This paper presents a novel framework that allows jointly training the parameters of the learners and of a First-Order Logic based reasoner.
A Relational Neural Machine is able to recover both classical learning results in the case of pure sub-symbolic learning, and Markov Logic Networks.
Proper algorithmic solutions are devised to make learning and inference tractable in large-scale problems.
arXiv Detail & Related papers (2020-02-06T10:53:57Z)