Dynamic MOdularized Reasoning for Compositional Structured Explanation Generation
- URL: http://arxiv.org/abs/2309.07624v1
- Date: Thu, 14 Sep 2023 11:40:30 GMT
- Title: Dynamic MOdularized Reasoning for Compositional Structured Explanation Generation
- Authors: Xiyan Fu, Anette Frank
- Abstract summary: We propose a dynamic modularized reasoning model, MORSE, to improve compositional generalization of neural models.
MORSE factorizes the inference process into a combination of modules, where each module represents a functional unit.
We conduct experiments for increasing lengths and shapes of reasoning trees on two benchmarks to test MORSE's compositional generalization abilities.
- Score: 29.16040150962427
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the success of neural models in solving reasoning tasks, their
compositional generalization capabilities remain unclear. In this work, we
propose a new setting of the structured explanation generation task to
facilitate compositional reasoning research. Previous works found that symbolic
methods achieve superior compositionality by using pre-defined inference rules
for iterative reasoning. But these approaches rely on brittle symbolic
transfers and are restricted to well-defined tasks. Hence, we propose a dynamic
modularized reasoning model, MORSE, to improve the compositional generalization
of neural models. MORSE factorizes the inference process into a combination of
modules, where each module represents a functional unit. Specifically, we adopt
modularized self-attention to dynamically select and route inputs to dedicated
heads, which specializes them to specific functions. We conduct experiments for
increasing lengths and shapes of reasoning trees on two benchmarks to test
MORSE's compositional generalization abilities, and find it outperforms
competitive baselines. Model ablation and deeper analyses show the
effectiveness of dynamic reasoning modules and their generalization abilities.
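
To make the routing idea concrete, below is a minimal PyTorch sketch of modularized self-attention with a dynamic token-to-module gate. The class name, the soft softmax gating, and all hyperparameters are illustrative assumptions, not MORSE's exact formulation from the paper.

```python
# Minimal sketch: self-attention split into per-module heads, with a learned
# gate that dynamically weights how much each token uses each module.
# Names (ModularSelfAttention, n_modules) and the soft gating are assumptions.
import torch
import torch.nn as nn

class ModularSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_modules: int):
        super().__init__()
        assert d_model % n_modules == 0
        self.n_modules = n_modules
        self.d_head = d_model // n_modules
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.router = nn.Linear(d_model, n_modules)  # per-token module scores
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        b, s, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split(t):  # -> (batch, modules, seq, d_head)
            return t.view(b, s, self.n_modules, self.d_head).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        heads = attn @ v  # (batch, modules, seq, d_head)
        # Dynamic routing: a soft gate decides how strongly each token
        # contributes to each functional module.
        gate = torch.softmax(self.router(x), dim=-1)        # (batch, seq, modules)
        heads = heads * gate.transpose(1, 2).unsqueeze(-1)  # weight module outputs
        merged = heads.transpose(1, 2).reshape(b, s, -1)
        return self.out(merged)

# Usage: route a toy batch through 4 functional modules.
layer = ModularSelfAttention(d_model=64, n_modules=4)
y = layer(torch.randn(2, 10, 64))
print(y.shape)  # torch.Size([2, 10, 64])
```

A hard top-k selection over modules would be closer to discrete routing; the soft gate keeps this sketch differentiable end-to-end.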
Related papers
- What makes Models Compositional? A Theoretical View: With Supplement [60.284698521569936]
We propose a general neuro-symbolic definition of compositional functions and their compositional complexity.
We show how various existing general and special purpose sequence processing models fit this definition and use it to analyze their compositional complexity.
arXiv Detail & Related papers (2024-05-02T20:10:27Z) - Discovering modular solutions that generalize compositionally [55.46688816816882]
We show that identification up to linear transformation purely from demonstrations is possible without having to learn an exponential number of module combinations.
We further demonstrate empirically that meta-learning from finite data can discover modular policies that generalize compositionally in a number of complex environments.
arXiv Detail & Related papers (2023-12-22T16:33:50Z) - Disentangling Reasoning Capabilities from Language Models with
Compositional Reasoning Transformers [72.04044221898059]
ReasonFormer is a unified reasoning framework for mirroring the modular and compositional reasoning process of humans.
The representation module (automatic thinking) and reasoning modules (controlled thinking) are disentangled to capture different levels of cognition.
The unified reasoning framework solves multiple tasks with a single model, and is trained and performs inference in an end-to-end manner.
arXiv Detail & Related papers (2022-10-20T13:39:55Z) - On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes factorizing the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that modular neural causal models outperform other models on both zero-shot and few-shot adaptation in low-data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z) - Exploring End-to-End Differentiable Natural Logic Modeling [21.994060519995855]
We explore end-to-end trained differentiable models that integrate natural logic with neural networks.
The proposed model adapts module networks to model natural logic operations and is enhanced with a memory component to capture contextual information; a toy sketch of composing natural logic relations appears after this list.
arXiv Detail & Related papers (2020-11-08T18:18:15Z) - Compositional Generalization by Learning Analytical Expressions [87.15737632096378]
A memory-augmented neural model is connected with analytical expressions to achieve compositional generalization.
Experiments on the well-known benchmark SCAN demonstrate that our model achieves strong compositional generalization.
arXiv Detail & Related papers (2020-06-18T15:50:57Z) - Obtaining Faithful Interpretations from Compositional Neural Networks [72.41100663462191]
We evaluate the intermediate outputs of NMNs on the NLVR2 and DROP datasets.
We find that the intermediate outputs differ from the expected output, illustrating that the network structure does not provide a faithful explanation of model behaviour.
arXiv Detail & Related papers (2020-05-02T06:50:35Z)