Neuro-Symbolic Hierarchical Rule Induction
- URL: http://arxiv.org/abs/2112.13418v1
- Date: Sun, 26 Dec 2021 17:02:14 GMT
- Title: Neuro-Symbolic Hierarchical Rule Induction
- Authors: Claire Glanois, Xuening Feng, Zhaohui Jiang, Paul Weng, Matthieu
Zimmer, Dong Li, Wulong Liu
- Abstract summary: We propose an efficient interpretable neuro-symbolic model to solve Inductive Logic Programming (ILP) problems.
In this model, which is built from a set of meta-rules organised in a hierarchical structure, first-order rules are invented by learning embeddings to match facts and body predicates of a meta-rule.
We empirically validate our model on various tasks (ILP, visual genome, reinforcement learning) against several state-of-the-art methods.
- Score: 12.610497441047395
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose an efficient interpretable neuro-symbolic model to solve Inductive
Logic Programming (ILP) problems. In this model, which is built from a set of
meta-rules organised in a hierarchical structure, first-order rules are
invented by learning embeddings to match facts and body predicates of a
meta-rule. To instantiate it, we specifically design an expressive set of
generic meta-rules, and demonstrate that they generate a substantial fragment of Horn
clauses. During training, we inject controlled Gumbel noise to avoid
local optima and employ an interpretability-regularization term to further guide
the convergence to interpretable rules. We empirically validate our model on
various tasks (ILP, visual genome, reinforcement learning) against several
state-of-the-art methods.
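The abstract outlines the core mechanism: each body slot of a meta-rule is matched to a predicate through learned embeddings, Gumbel noise is injected into the selection during training, and an interpretability-regularization term pushes the soft choices toward discrete rules. The following is a minimal, hypothetical PyTorch sketch of that matching step under assumed sizes and names (pred_emb, body_emb, the entropy penalty); it is an illustration of the idea, not the authors' implementation.
```python
# Hypothetical sketch (not the paper's code): embedding-based matching of a
# meta-rule's body slots against candidate predicates, with Gumbel noise on the
# selection logits during training, as described in the abstract.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

n_predicates, dim = 8, 16                                      # assumed sizes, for illustration
pred_emb = torch.nn.Parameter(torch.randn(n_predicates, dim))  # learned predicate embeddings
body_emb = torch.nn.Parameter(torch.randn(2, dim))             # one embedding per body slot of a meta-rule

def select_body_predicates(tau=1.0, training=True):
    """Softly assign each body slot of the meta-rule to a predicate."""
    logits = body_emb @ pred_emb.t()          # slot-vs-predicate similarity scores
    if training:
        # Gumbel noise perturbs the logits to help escape poor local optima.
        return F.gumbel_softmax(logits, tau=tau, hard=False)
    return F.softmax(logits / tau, dim=-1)

weights = select_body_predicates()
# One possible interpretability regularizer: penalize the entropy of each soft
# assignment so it converges toward a one-hot (readable) predicate choice.
entropy_reg = -(weights * weights.clamp_min(1e-9).log()).sum(dim=-1).mean()
print(weights.shape, float(entropy_reg))
```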
Related papers
- A Scalable Matrix Visualization for Understanding Tree Ensemble Classifiers [20.416696003269674]
This paper introduces a scalable visual analysis method to explain tree ensemble classifiers that contain tens of thousands of rules.
We develop an anomaly-biased model reduction method to prioritize these rules at each hierarchical level.
Our method fosters a deeper understanding of both common and anomalous rules, thereby enhancing interpretability without sacrificing comprehensiveness.
arXiv Detail & Related papers (2024-09-05T01:48:11Z)
- Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs [87.34281749422756]
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework to construct an inferential rule base, ULogic.
arXiv Detail & Related papers (2024-02-18T03:38:51Z)
- On Conditional and Compositional Language Model Differentiable Prompting [75.76546041094436]
Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) to perform well on downstream tasks.
We propose a new model, Prompt Production System (PRopS), which learns to transform task instructions or input metadata into continuous prompts.
arXiv Detail & Related papers (2023-07-04T02:47:42Z)
- Distilling Task-specific Logical Rules from Large Pre-trained Models [24.66436804853525]
We develop a novel framework to distill task-specific logical rules from large pre-trained models.
Specifically, we borrow recent prompt-based language models as the knowledge expert to yield initial seed rules.
Experiments on three public named entity tagging benchmarks demonstrate the effectiveness of our proposed framework.
arXiv Detail & Related papers (2022-10-06T09:12:18Z)
- Neural-Symbolic Recursive Machine for Systematic Generalization [113.22455566135757]
We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS).
NSR integrates neural perception, syntactic parsing, and semantic reasoning.
We evaluate NSR's efficacy across four challenging benchmarks designed to probe systematic generalization capabilities.
arXiv Detail & Related papers (2022-10-04T13:27:38Z)
- Differentiable Rule Induction with Learned Relational Features [9.193818627108572]
The Relational Rule Network (RRN) is a neural architecture that learns predicates representing a linear relationship among attributes, along with the rules that use them.
On benchmark tasks we show that these predicates are simple enough to retain interpretability, yet they improve prediction accuracy and yield rule sets that are more concise than those of state-of-the-art rule induction algorithms.
arXiv Detail & Related papers (2022-01-17T16:46:50Z)
- Learning Symbolic Rules for Reasoning in Quasi-Natural Language [74.96601852906328]
We build a rule-based system that can reason with natural language input but without the manual construction of rules.
We propose MetaQNL, a "Quasi-Natural" language that can express both formal logic and natural language sentences.
Our approach achieves state-of-the-art accuracy on multiple reasoning benchmarks.
arXiv Detail & Related papers (2021-11-23T17:49:00Z)
- Discrete Word Embedding for Logical Natural Language Understanding [5.8088738147746914]
We propose an unsupervised neural model for learning a discrete embedding of words.
Our embedding represents each word as a set of propositional statements describing a transition rule in classical/STRIPS planning formalism.
arXiv Detail & Related papers (2020-08-26T16:15:18Z)
- Rewriting a Deep Generative Model [56.91974064348137]
We introduce a new problem setting: manipulation of specific rules encoded by a deep generative model.
We propose a formulation in which the desired rule is changed by manipulating a layer of a deep network as a linear associative memory.
We present a user interface to enable users to interactively change the rules of a generative model to achieve desired effects.
arXiv Detail & Related papers (2020-07-30T17:58:16Z)
- Guiding Symbolic Natural Language Grammar Induction via Transformer-Based Sequence Probabilities [0.0]
A novel approach to automated learning of syntactic rules governing natural languages is proposed.
This method exploits the learned linguistic knowledge in transformers, without any reference to their inner representations.
We show a proof-of-concept example of our proposed technique, using it to guide unsupervised symbolic link-grammar induction methods.
arXiv Detail & Related papers (2020-05-26T06:18:47Z)
- Learning Compositional Rules via Neural Program Synthesis [67.62112086708859]
We present a neuro-symbolic model which learns entire rule systems from a small set of examples.
Instead of directly predicting outputs from inputs, we train our model to induce the explicit system of rules governing a set of previously seen examples.
arXiv Detail & Related papers (2020-03-12T01:06:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.