The ILASP system for Inductive Learning of Answer Set Programs
- URL: http://arxiv.org/abs/2005.00904v1
- Date: Sat, 2 May 2020 19:04:12 GMT
- Title: The ILASP system for Inductive Learning of Answer Set Programs
- Authors: Mark Law, Alessandra Russo, Krysia Broda
- Abstract summary: Our system learns Answer Set Programs, including normal rules, choice rules and hard and weak constraints.
We first give a general overview of ILASP's learning framework and its capabilities.
This is followed by a comprehensive summary of the evolution of the ILASP system.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of Inductive Logic Programming (ILP) is to learn a program that
explains a set of examples in the context of some pre-existing background
knowledge. Until recently, most research on ILP targeted learning Prolog
programs. Our own ILASP system instead learns Answer Set Programs, including
normal rules, choice rules and hard and weak constraints. Learning such
expressive programs widens the applicability of ILP considerably; for example,
enabling preference learning, learning common-sense knowledge, including
defaults and exceptions, and learning non-deterministic theories. In this
paper, we first give a general overview of ILASP's learning framework and its
capabilities. This is followed by a comprehensive summary of the evolution of
the ILASP system, presenting the strengths and weaknesses of each version, with
a particular emphasis on scalability.
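The rule classes named in the abstract are standard ASP constructs. As a minimal illustrative sketch (not taken from the paper; the predicates and facts are invented for this example), a program combining them might look like:

```
% Normal rule with default negation (a default with exceptions):
% birds fly unless they are abnormal.
flies(X) :- bird(X), not ab(X).
ab(X)    :- penguin(X).

% Choice rule (non-deterministic): each bird may or may not be watched.
0 { watched(X) } 1 :- bird(X).

% Hard constraint: no answer set may claim a penguin flies.
:- flies(X), penguin(X).

% Weak constraint (preference): minimise the number of watched birds.
:~ watched(X). [1@1, X]

bird(tweety). bird(pingu). penguin(pingu).
```

ILASP's learning tasks search for programs of exactly this shape: normal rules and choice rules generate candidate answer sets, hard constraints eliminate unacceptable ones, and weak constraints rank the remainder, which is what enables preference learning.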
Related papers
- Large Language Models are Interpretable Learners (2024-06-25)
  In this paper, we show that a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability. The pretrained LLM with natural language prompts provides a massive set of interpretable modules that transform raw input into natural language concepts. As the knowledge learned by the LSP is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and to other LLMs.
- A Knowledge-Injected Curriculum Pretraining Framework for Question Answering (2024-03-11)
  We propose a general Knowledge-Injected Curriculum Pretraining framework (KICP) for comprehensive knowledge graph (KG) learning and exploitation in knowledge-based question answering tasks. The KI module injects knowledge into the LM by generating a KG-centered pretraining corpus, generalising the process into three key steps. The KA module learns knowledge from the generated corpus with an adapter-equipped LM while preserving the LM's original natural language understanding ability. The CR module follows human reasoning patterns to construct three corpora of increasing reasoning difficulty and trains the LM from easy to hard in a curriculum manner.
- Enabling Large Language Models to Learn from Rules (2023-11-15)
  Inspired by the way humans can learn new tasks and knowledge by learning from rules, we propose rule distillation, which first uses the strong in-context abilities of LLMs to extract knowledge from textual rules. Our experiments show that making LLMs learn from rules with our method is much more efficient than example-based learning in both sample size and generalization ability.
- Hierarchical Programmatic Reinforcement Learning via Learning to Compose Programs (2023-01-30)
  We propose a hierarchical programmatic reinforcement learning framework to produce program policies. By learning to compose programs, the proposed framework can produce program policies that describe out-of-distributionally more complex behaviors. Experimental results in the Karel domain show that the proposed framework outperforms baselines.
- Generalisation Through Negation and Predicate Invention (2023-01-18)
  We introduce an inductive logic programming (ILP) approach that combines negation and predicate invention. We implement our idea in NOPI, which can learn normal logic programs with predicate invention. Our experimental results on multiple domains show that our approach can improve predictive accuracy and reduce learning times.
- LISA: Learning Interpretable Skill Abstractions from Language (2022-02-28)
  We propose a hierarchical imitation learning framework that can learn diverse, interpretable skills from language-conditioned demonstrations. Our method demonstrates a more natural way to condition on language in sequential decision-making problems.
- Learning to Synthesize Programs as Interpretable and Generalizable Policies (2021-08-31)
  We present a framework that learns to synthesize a program, which details the procedure to solve a task in a flexible and expressive manner. Experimental results demonstrate that the proposed framework not only learns to reliably synthesize task-solving programs but also outperforms DRL and program synthesis baselines.
- Conflict-driven Inductive Logic Programming (2020-12-31)
  The goal of Inductive Logic Programming (ILP) is to learn a program that explains a set of examples. Until recently, most research on ILP targeted learning Prolog programs. The ILASP system instead learns Answer Set Programs (ASP).
- Incorporating Relational Background Knowledge into Reinforcement Learning via Differentiable Inductive Logic Programming (2020-03-23)
  We propose a novel deep Relational Reinforcement Learning (RRL) framework based on differentiable Inductive Logic Programming (ILP). We show the efficacy of this novel RRL framework in environments such as BoxWorld and GridWorld, as well as on relational reasoning for the Sort-of-CLEVR dataset.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.