Differentiable Logic Machines
- URL: http://arxiv.org/abs/2102.11529v5
- Date: Wed, 5 Jul 2023 22:00:05 GMT
- Title: Differentiable Logic Machines
- Authors: Matthieu Zimmer, Xuening Feng, Claire Glanois, Zhaohui Jiang, Jianyi Zhang, Paul Weng, Dong Li, Jianye Hao, and Wulong Liu
- Abstract summary: We propose a novel neural-logic architecture, called differentiable logic machine (DLM).
DLM can solve both inductive logic programming (ILP) and reinforcement learning (RL) problems.
On RL problems, without requiring an interpretable solution, DLM outperforms other non-interpretable neural-logic RL approaches.
- Score: 38.21461039738474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The integration of reasoning, learning, and decision-making is key to
building more general artificial intelligence systems. As a step in this direction, we
propose a novel neural-logic architecture, called differentiable logic machine
(DLM), that can solve both inductive logic programming (ILP) and reinforcement
learning (RL) problems, where the solution can be interpreted as a first-order
logic program. Our proposal includes several innovations. Firstly, our
architecture defines a restricted but expressive continuous relaxation of the
space of first-order logic programs by assigning weights to predicates instead
of rules, in contrast to most previous neural-logic approaches. Secondly, with
this differentiable architecture, we propose several (supervised and RL)
training procedures, based on gradient descent, which can recover a
fully interpretable solution (i.e., a logic formula). Thirdly, to accelerate RL
training, we also design a novel critic architecture that enables actor-critic
algorithms. Fourthly, to solve hard problems, we propose an incremental
training procedure that can learn a logic program progressively. Compared to
state-of-the-art (SOTA) differentiable ILP methods, DLM successfully solves all
the considered ILP problems with a higher percentage of successful seeds (up to
3.5$\times$). On RL problems, without requiring an interpretable solution, DLM
outperforms other non-interpretable neural-logic RL approaches in terms of
rewards (up to 3.9%). When enforcing interpretability, DLM can solve harder RL
problems (e.g., Sorting, Path). Moreover, we show that deep logic programs can
be learned via incremental supervised training. In addition to this excellent
performance, DLM can scale well in terms of memory and computational time,
especially during the testing phase, where it can handle many more constants
($>$2$\times$) than SOTA methods.
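To make the predicate-weighting relaxation from the abstract concrete, here is a minimal sketch, assuming soft truth values in [0, 1], a product t-norm for AND, and a probabilistic sum for OR; it is not the authors' implementation, and names such as `SoftLogicLayer` and `hard_select` are hypothetical. Trainable softmax weights select input predicates (rather than whole rules), so after gradient descent an interpretable choice can be read off with argmax.

```python
# Minimal, illustrative sketch of a differentiable logic layer in the spirit of
# DLM: softmax weights over *predicates* feed fuzzy AND/OR connectives.
# Hypothetical names; not the paper's code.
import torch
import torch.nn as nn


class SoftLogicLayer(nn.Module):
    """Selects two input predicates with softmax weights and combines them."""

    def __init__(self, num_in_predicates: int):
        super().__init__()
        # One trainable weight vector per argument slot of the binary connective.
        self.w1 = nn.Parameter(torch.randn(num_in_predicates))
        self.w2 = nn.Parameter(torch.randn(num_in_predicates))

    def forward(self, preds: torch.Tensor) -> torch.Tensor:
        # preds: (batch, num_in_predicates) soft truth values in [0, 1].
        a = preds @ torch.softmax(self.w1, dim=0)  # soft selection of predicate 1
        b = preds @ torch.softmax(self.w2, dim=0)  # soft selection of predicate 2
        soft_and = a * b                           # product t-norm
        soft_or = a + b - a * b                    # probabilistic sum
        return torch.stack([soft_and, soft_or], dim=-1)

    def hard_select(self) -> tuple[int, int]:
        # After training, argmax over the weights names concrete predicates,
        # which is how an interpretable formula could be extracted.
        return int(self.w1.argmax()), int(self.w2.argmax())


# Toy supervised example: recover "target = p0 AND p2" from three predicates.
torch.manual_seed(0)
layer = SoftLogicLayer(num_in_predicates=3)
opt = torch.optim.Adam(layer.parameters(), lr=0.1)
x = torch.rand(256, 3)            # soft truth values of p0, p1, p2
y = x[:, 0] * x[:, 2]             # ground-truth fuzzy conjunction
for _ in range(200):
    opt.zero_grad()
    loss = ((layer(x)[:, 0] - y) ** 2).mean()  # fit the AND output only
    loss.backward()
    opt.step()
print(layer.hard_select())        # typically (0, 2) or (2, 0)
```

The paper's exact connectives, layer wiring, and extraction procedure may differ; the sketch only illustrates how weighting predicates keeps the search space differentiable while still yielding a discrete, readable program at the end.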
Related papers
- Reasoning Paths Optimization: Learning to Reason and Explore From Diverse Paths [69.39559168050923]
We introduce Reasoning Paths Optimization (RPO), which enables learning to reason and explore from diverse paths.
Our approach encourages favorable branches at each reasoning step while penalizing unfavorable ones, enhancing the model's overall problem-solving performance.
We focus on multi-step reasoning tasks, such as math word problems and science-based exam questions.
arXiv Detail & Related papers (2024-10-07T06:37:25Z) - Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning [89.89857766491475]
We propose a complex reasoning schema over knowledge graphs (KGs) built upon large language models (LLMs).
We augment arbitrary first-order logical queries via binary tree decomposition to stimulate the reasoning capability of LLMs.
Experiments across widely used datasets demonstrate that LACT achieves substantial improvements (an average +5.5% MRR score) over advanced methods.
arXiv Detail & Related papers (2024-05-02T18:12:08Z) - Guided Evolution with Binary Discriminators for ML Program Search [64.44893463120584]
We propose guiding evolution with a binary discriminator, trained online to distinguish which program is better given a pair of programs.
We demonstrate that our method can speed up evolution across a set of diverse problems, including a 3.7x speedup on the symbolic search for MLs and a 4x speedup for RL loss functions.
arXiv Detail & Related papers (2024-02-08T16:59:24Z) - Assessing Logical Reasoning Capabilities of Encoder-Only Transformer Models [0.13194391758295113]
We investigate the extent to which encoder-only transformer language models (LMs) can reason according to logical rules.
We show for several encoder-only LMs that they can be trained, to a reasonable degree, to determine logical validity on various datasets.
By cross-probing fine-tuned models on these datasets, we show that LMs have difficulty in transferring their putative logical reasoning ability.
arXiv Detail & Related papers (2023-12-18T21:42:34Z) - Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z) - AISYN: AI-driven Reinforcement Learning-Based Logic Synthesis Framework [0.8356765961526955]
We believe that Artificial Intelligence (AI) and Reinforcement Learning (RL) algorithms can help in solving this problem.
Our experiments on both open source and industrial benchmark circuits show that significant improvements on important metrics such as area, delay, and power can be achieved by making logic synthesis optimization functions AI-driven.
arXiv Detail & Related papers (2023-02-08T00:55:24Z) - End-to-end Algorithm Synthesis with Recurrent Networks: Logical
Extrapolation Without Overthinking [52.05847268235338]
We show how machine learning systems can perform logical extrapolation without overthinking problems.
We propose a recall architecture that keeps an explicit copy of the problem instance in memory so that it cannot be forgotten.
We also employ a progressive training routine that prevents the model from learning behaviors that are specific to number and instead pushes it to learn behaviors that can be repeated indefinitely.
arXiv Detail & Related papers (2022-02-11T18:43:28Z)