Neuro-symbolic Natural Logic with Introspective Revision for Natural
Language Inference
- URL: http://arxiv.org/abs/2203.04857v1
- Date: Wed, 9 Mar 2022 16:31:58 GMT
- Title: Neuro-symbolic Natural Logic with Introspective Revision for Natural
Language Inference
- Authors: Yufei Feng, Xiaoyu Yang, Xiaodan Zhu, Michael Greenspan
- Abstract summary: We introduce a neuro-symbolic natural logic framework based on reinforcement learning with introspective revision.
The model is interpretable by design and outperforms previous models in monotonicity inference, systematic generalization, and interpretability.
- Score: 17.636872632724582
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a neuro-symbolic natural logic framework based on reinforcement
learning with introspective revision. The model samples and rewards specific
reasoning paths through policy gradient, in which the introspective revision
algorithm modifies intermediate symbolic reasoning steps to discover
reward-earning operations and leverages external knowledge to alleviate
spurious reasoning and training inefficiency. The framework is supported by
properly designed local relation models to avoid input entangling, which helps
ensure the interpretability of the proof paths. The proposed model is interpretable
by design and, compared to previous models on existing datasets, shows superior
capability in monotonicity inference, systematic generalization, and interpretability.
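
The abstract describes a training loop that samples a symbolic reasoning path, revises intermediate steps, and rewards the path via policy gradient. The sketch below is a minimal, hypothetical illustration of that loop, not the authors' implementation: the seven basic natural-logic relations, the fixed path length, and the `compose`, `revise`, and `knowledge` stand-ins are all assumptions made for the example (the actual model uses neural local relation models and proper natural-logic projection/join rules).

```python
# Minimal REINFORCE-style sketch over natural-logic reasoning paths (illustrative only).
import numpy as np

RELATIONS = ["equivalence", "forward_entailment", "reverse_entailment",
             "negation", "alternation", "cover", "independence"]
NUM_STEPS = 4                                   # toy: fixed-length proof path
rng = np.random.default_rng(0)
theta = np.zeros((NUM_STEPS, len(RELATIONS)))   # per-step policy logits

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def compose(path):
    # Toy path aggregation: entailment only if every step preserves entailment
    # (real natural logic composes relations with a join table and projection).
    return "entailment" if all(r in ("equivalence", "forward_entailment")
                               for r in path) else "neutral"

def revise(actions, knowledge):
    # Introspective-revision stand-in: external knowledge (step -> relation id)
    # overwrites sampled intermediate steps to surface reward-earning operations.
    return [knowledge.get(t, a) for t, a in enumerate(actions)]

def train_step(gold, knowledge, lr=0.1):
    probs = [softmax(row) for row in theta]
    actions = [rng.choice(len(RELATIONS), p=p) for p in probs]   # sample a path
    actions = revise(actions, knowledge)
    reward = 1.0 if compose([RELATIONS[a] for a in actions]) == gold else 0.0
    for t, (p, a) in enumerate(zip(probs, actions)):
        grad = -p
        grad[a] += 1.0                   # d log pi(a_t | s_t) / d logits_t
        theta[t] += lr * reward * grad   # policy-gradient (REINFORCE) update
    return reward

for _ in range(200):
    train_step(gold="entailment",
               knowledge={0: RELATIONS.index("forward_entailment")})
```

Under these toy assumptions the policy logits drift toward entailment-preserving relations once revised paths start earning reward, which mirrors how the revision step is meant to counter sparse rewards and spurious reasoning paths.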
Related papers
- Neural Interpretable Reasoning [12.106771300842945]
We formalize a novel modeling framework for achieving interpretability in deep learning.
We show that this complexity can be mitigated by treating interpretability as a Markovian property.
We propose a new modeling paradigm -- neural generation and interpretable execution.
arXiv Detail & Related papers (2025-02-17T10:33:24Z) - Causality can systematically address the monsters under the bench(marks) [64.36592889550431]
Benchmarks are plagued by various biases, artifacts, or leakage.
Models may behave unreliably due to poorly explored failure modes.
Causality offers an ideal framework to systematically address these challenges.
arXiv Detail & Related papers (2025-02-07T17:01:37Z) - Neural DNF-MT: A Neuro-symbolic Approach for Learning Interpretable and Editable Policies [51.03989561425833]
We propose a neuro-symbolic approach called neural DNF-MT for end-to-end policy learning.
The differentiable nature of the neural DNF-MT model enables the use of deep actor-critic algorithms for training.
We show how the bivalent representations of deterministic policies can be edited and incorporated back into a neural model.
arXiv Detail & Related papers (2025-01-07T15:51:49Z) - A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z) - Towards credible visual model interpretation with path attribution [24.86176236641865]
The path attribution framework stands out among post-hoc model interpretation tools due to its axiomatic nature.
Recent developments show that this framework can still suffer from counter-intuitive results.
We devise a scheme to preclude the conditions in which visual model interpretation can invalidate the axiomatic properties of path attribution.
arXiv Detail & Related papers (2023-05-23T06:23:08Z) - Explainability in Process Outcome Prediction: Guidelines to Obtain
Interpretable and Faithful Models [77.34726150561087]
We define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction.
This paper contributes a set of guidelines, named X-MOP, that helps select the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z) - Modeling Implicit Bias with Fuzzy Cognitive Maps [0.0]
This paper presents a Fuzzy Cognitive Map model to quantify implicit bias in structured datasets.
We introduce a new reasoning mechanism equipped with a normalization-like transfer function that prevents neurons from saturating.
arXiv Detail & Related papers (2021-12-23T17:04:12Z) - Instance-Based Neural Dependency Parsing [56.63500180843504]
We develop neural models that possess an interpretable inference process for dependency parsing.
Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set.
arXiv Detail & Related papers (2021-09-28T05:30:52Z) - Learning to Rationalize for Nonmonotonic Reasoning with Distant
Supervision [44.32874972577682]
We investigate the extent to which neural models can reason about natural language rationales that explain model predictions.
We use pre-trained language models, neural knowledge models, and distant supervision from related tasks.
Our model shows promise at generating post-hoc rationales explaining why an inference is more or less likely given the additional information.
arXiv Detail & Related papers (2020-12-14T23:50:20Z) - Exploring End-to-End Differentiable Natural Logic Modeling [21.994060519995855]
We explore end-to-end trained differentiable models that integrate natural logic with neural networks.
The proposed model adapts module networks to model natural logic operations and is enhanced with a memory component to model contextual information.
arXiv Detail & Related papers (2020-11-08T18:18:15Z) - Learning Causal Semantic Representation for Out-of-Distribution
Prediction [125.38836464226092]
We propose a Causal Semantic Generative model (CSG) based on causal reasoning, so that the two factors are modeled separately.
We show that CSG can identify the semantic factor by fitting training data, and this semantic-identification guarantees the boundedness of OOD generalization error.
arXiv Detail & Related papers (2020-11-03T13:16:05Z)