Learning logic programs by discovering higher-order abstractions
- URL: http://arxiv.org/abs/2308.08334v2
- Date: Mon, 29 Jan 2024 18:34:39 GMT
- Title: Learning logic programs by discovering higher-order abstractions
- Authors: Céline Hocquette, Sebastijan Dumančić, Andrew Cropper
- Abstract summary: We introduce the higher-order refactoring problem.
The goal is to compress a logic program by discovering higher-order abstractions.
We implement our approach in Stevie, which formulates the problem as a constraint optimisation problem.
- Score: 20.57989636488575
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce the higher-order refactoring problem, where the goal is to
compress a logic program by discovering higher-order abstractions, such as map,
filter, and fold. We implement our approach in Stevie, which formulates the
refactoring problem as a constraint optimisation problem. Our experiments on
multiple domains, including program synthesis and visual reasoning, show that
refactoring can improve the learning performance of an inductive logic
programming system, specifically improving predictive accuracies by 27% and
reducing learning times by 47%. We also show that Stevie can discover
abstractions that transfer to multiple domains.
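To make the compression idea concrete: Stevie refactors Prolog-style logic programs, but the same pattern can be sketched in Python. The sketch below is only an analogue, with hypothetical function names and programs not taken from the paper: two first-order definitions repeat the same recursion skeleton, and a discovered higher-order abstraction (map) replaces both bodies with single calls, shrinking the overall program.
```python
# Before refactoring: two first-order definitions repeat the same
# recursion skeleton (base case on [], recursive case on head/tail).
def increment_all(xs):
    if not xs:
        return []
    return [xs[0] + 1] + increment_all(xs[1:])

def upper_all(xs):
    if not xs:
        return []
    return [xs[0].upper()] + upper_all(xs[1:])

# After refactoring: the shared skeleton is captured once as a
# higher-order abstraction (here: map), and each program shrinks to a
# single call, compressing the program as a whole.
def map_(f, xs):
    if not xs:
        return []
    return [f(xs[0])] + map_(f, xs[1:])

def increment_all2(xs):
    return map_(lambda x: x + 1, xs)

def upper_all2(xs):
    return map_(lambda s: s.upper(), xs)

assert increment_all2([1, 2]) == [2, 3]
assert upper_all2(["a", "b"]) == ["A", "B"]
```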
Related papers
- Scalable Knowledge Refactoring using Constrained Optimisation [18.706442683121615]
Our empirical results on multiple domains show that our approach can refactor programs quicker and with more compression than the previous state-of-the-art approach, sometimes by 60%.
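The paper encodes refactoring for a constraint solver; as a loose, hypothetical illustration of the objective only, the brute-force sketch below picks the subset of candidate abstractions whose definition cost is outweighed by the compression they buy. All names and numbers are made up, and a real constrained-optimisation solver replaces the exhaustive search.
```python
from itertools import combinations

# Hypothetical candidates: (name, definition_size, total_savings_if_used).
candidates = [("map", 3, 10), ("filter", 3, 4), ("fold", 4, 2)]

def net_compression(subset):
    # Savings from rewriting call sites minus the cost of the definitions.
    return sum(s for _, _, s in subset) - sum(d for _, d, _ in subset)

best = max(
    (subset
     for r in range(len(candidates) + 1)
     for subset in combinations(candidates, r)),
    key=net_compression,
)
print([name for name, _, _ in best], net_compression(best))  # ['map', 'filter'] 8
```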
arXiv Detail & Related papers (2024-08-21T11:12:42Z)
- Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning [89.89857766491475]
We propose a complex reasoning schema over knowledge graphs (KGs) built upon large language models (LLMs).
We augment arbitrary first-order logical queries via binary tree decomposition to stimulate the reasoning capability of LLMs.
Experiments across widely used datasets demonstrate that LACT achieves substantial improvements (an average +5.5% MRR score) over advanced methods.
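A minimal sketch of the binary-tree-decomposition idea, under the assumption that a conjunctive query can be split into two-way joins an LLM answers one at a time; the query representation and the generated prompts are hypothetical, not LACT's actual format.
```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Atom:
    text: str

@dataclass
class And:
    left: "Query"
    right: "Query"

Query = Union[Atom, And]

def binary_decompose(parts: list[str]) -> Query:
    """Fold a flat conjunction into a binary tree of two-way ANDs."""
    if len(parts) == 1:
        return Atom(parts[0])
    mid = len(parts) // 2
    return And(binary_decompose(parts[:mid]), binary_decompose(parts[mid:]))

def to_steps(q: Query, steps: list[str]) -> str:
    """Linearise the tree into simple sub-questions, one binary join per step."""
    if isinstance(q, Atom):
        return q.text
    left = to_steps(q.left, steps)
    right = to_steps(q.right, steps)
    name = f"S{len(steps) + 1}"
    steps.append(f"{name} = entities satisfying both ({left}) and ({right})")
    return name

steps: list[str] = []
tree = binary_decompose(["born in France", "is a chemist", "won a Nobel prize"])
to_steps(tree, steps)
print("\n".join(steps))
```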
arXiv Detail & Related papers (2024-05-02T18:12:08Z)
- Automatizing Software Cognitive Complexity Reduction through Integer Linear Programming [1.1970409518725493]
Recently, we modeled software cognitive complexity reduction as an optimization problem and proposed an approach to assist developers with this task.
This approach enumerates sequences of code extraction operations until a stopping criterion is met. As a result, it returns the minimal sequence of code extraction operations that reduces the cognitive complexity of the code to the given threshold.
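A hedged sketch of the enumeration idea: breadth-first search over sequences of extraction operations returns a shortest sequence that brings the complexity under the threshold. `complexity` and `extractions` are stand-ins for the paper's cognitive-complexity metric and its code-extraction operators, and states are assumed hashable (e.g., source strings).
```python
from collections import deque

def minimal_extraction_sequence(code, complexity, extractions, threshold):
    """Return a shortest sequence of extraction ops reaching the threshold."""
    queue = deque([(code, [])])
    seen = {code}
    while queue:
        state, ops = queue.popleft()
        if complexity(state) <= threshold:
            return ops  # BFS guarantees the first hit is a minimal sequence
        for op in extractions(state):
            nxt = op(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, ops + [op]))
    return None  # threshold unreachable with the available extractions
```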
arXiv Detail & Related papers (2024-02-08T10:53:00Z)
- ReGAL: Refactoring Programs to Discover Generalizable Abstractions [59.05769810380928]
Generalizable Abstraction Learning (ReGAL) is a method for learning a library of reusable functions via code refactorization.
We find that the shared function libraries discovered by ReGAL make programs easier to predict across diverse domains.
For CodeLlama-13B, ReGAL results in absolute accuracy increases of 11.5% on LOGO, 26.1% on date understanding, and 8.1% on TextCraft, outperforming GPT-3.5 in two of three domains.
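A toy sketch in the ReGAL spirit: find a fragment repeated across programs and lift it into a named helper, so the rewritten programs become shorter and more regular. The LOGO-like primitives and the bigram heuristic are hypothetical simplifications, not ReGAL's actual method.
```python
from collections import Counter

# Hypothetical programs over LOGO-like primitives.
programs = [
    ["forward", "left", "forward", "left", "forward", "left", "forward", "left"],
    ["pen_down", "forward", "left", "forward", "left"],
]

def common_bigrams(progs):
    """Count adjacent pairs of primitives across all programs."""
    counts = Counter()
    for p in progs:
        counts.update(zip(p, p[1:]))
    return counts

fragment = list(common_bigrams(programs).most_common(1)[0][0])  # ['forward', 'left']
library = {"step_turn": fragment}

def rewrite(prog, name, frag):
    """Replace every occurrence of the fragment with a call to the helper."""
    out, i = [], 0
    while i < len(prog):
        if prog[i:i + len(frag)] == frag:
            out.append(name)
            i += len(frag)
        else:
            out.append(prog[i])
            i += 1
    return out

print([rewrite(p, "step_turn", library["step_turn"]) for p in programs])
# [['step_turn'] * 4, ['pen_down', 'step_turn', 'step_turn']]
```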
arXiv Detail & Related papers (2024-01-29T18:45:30Z)
- When Do Program-of-Thoughts Work for Reasoning? [51.2699797837818]
We propose the complexity-impacted reasoning score (CIRS) to measure the correlation between code and reasoning abilities.
Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity.
Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
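As a rough illustration of AST-based structural scoring (not CIRS's actual formula), the sketch below counts control-flow nodes in a Python function as a toy complexity measure.
```python
import ast

# Node types treated as contributing structural/logical complexity.
CONTROL = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def structural_complexity(source: str) -> int:
    """Count control-flow constructs in the parsed AST."""
    tree = ast.parse(source)
    return sum(isinstance(node, CONTROL) for node in ast.walk(tree))

code = """
def solve(xs):
    total = 0
    for x in xs:
        if x % 2 == 0 and x > 0:
            total += x
    return total
"""
print(structural_complexity(code))  # for + if + boolean op -> 3
```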
arXiv Detail & Related papers (2023-08-29T17:22:39Z)
- End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking [52.05847268235338]
We show how machine learning systems can perform logical extrapolation without overthinking problems.
We propose a recall architecture that keeps an explicit copy of the problem instance in memory so that it cannot be forgotten.
We also employ a progressive training routine that prevents the model from learning behaviors specific to the number of iterations and instead pushes it to learn behaviors that can be repeated indefinitely.
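A minimal numpy sketch of the recall idea: the recurrent block is re-fed the original problem instance at every iteration, so extra test-time iterations cannot erase it. The tiny linear block is a stand-in for the paper's convolutional recurrent module, and the progressive training routine is not modeled here.
```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16)) * 0.1  # maps [state; input] -> new state

def recurrent_block(state, problem_input):
    # Explicit recall: the original instance is concatenated to the
    # current state at every iteration, so it can never be forgotten.
    recalled = np.concatenate([state, problem_input])
    return np.tanh(W @ recalled)

x = rng.normal(size=8)   # the problem instance
state = np.zeros(8)
for _ in range(50):      # far more iterations than seen during training
    state = recurrent_block(state, x)
```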
arXiv Detail & Related papers (2022-02-11T18:43:28Z)
- Towards Minimax Optimal Reinforcement Learning in Factored Markov Decision Processes [53.72166325215299]
We study minimax optimal reinforcement learning in episodic factored Markov decision processes (FMDPs).
We propose two algorithms: the first achieves minimax optimal regret guarantees for a rich class of factored structures, while the second enjoys better computational complexity at the cost of a slightly worse regret.
arXiv Detail & Related papers (2020-06-24T00:50:17Z)
- Knowledge Refactoring for Inductive Program Synthesis [37.54933305877746]
Humans constantly restructure knowledge to use it more efficiently.
Our goal is to give a machine learning system similar abilities so that it can learn more efficiently.
arXiv Detail & Related papers (2020-04-21T12:04:38Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrastive examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
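A toy of the auxiliary objective, assuming a linear scorer so the input-gradient is analytic: the loss pushes the model's gradient to align with the edit direction (x_cf - x) of a counterfactual pair. This is an illustration of the idea only, not the paper's exact loss; in practice the gradient comes from backprop through a neural network.
```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=4)  # weights of a toy linear scorer f(x) = w . x

def grad_alignment_loss(w, x, x_cf):
    g = w                          # d(w . x)/dx is just w for a linear scorer
    d = x_cf - x                   # counterfactual edit direction
    cos = g @ d / (np.linalg.norm(g) * np.linalg.norm(d) + 1e-9)
    return 1.0 - cos               # small when the gradient follows the edit

x = rng.normal(size=4)
x_cf = x.copy()
x_cf[0] += 1.0                     # a minimal edit that flips the label
print(grad_alignment_loss(w, x, x_cf))
```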
arXiv Detail & Related papers (2020-04-20T02:47:49Z)