Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical
Explanations
- URL: http://arxiv.org/abs/2011.13354v4
- Date: Sun, 5 Dec 2021 02:34:30 GMT
- Title: Braid: Weaving Symbolic and Neural Knowledge into Coherent Logical
Explanations
- Authors: Aditya Kalyanpur, Tom Breloff, David Ferrucci
- Abstract summary: Braid is a novel logical reasoner that supports probabilistic rules.
We describe the reasoning algorithms used in Braid, and their implementation in a distributed task-based framework.
We evaluate Braid on the ROC Story Cloze test and achieve close to state-of-the-art results.
- Score: 0.9023847175654603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional symbolic reasoning engines, while attractive for their precision
and explicability, have a few major drawbacks: the use of brittle inference
procedures that rely on exact matching (unification) of logical terms, an
inability to deal with uncertainty, and the need for a precompiled rule-base of
knowledge (the "knowledge acquisition" problem). To address these issues, we
devise a novel logical reasoner called Braid, which supports probabilistic
rules and uses custom unification functions and dynamic rule
generation to overcome the brittle matching and knowledge-gap problem prevalent
in traditional reasoners. In this paper, we describe the reasoning algorithms
used in Braid, and their implementation in a distributed task-based framework
that builds proof/explanation graphs for an input query. We use a simple QA
example from a children's story to motivate Braid's design and explain how the
various components work together to produce a coherent logical explanation.
Finally, we evaluate Braid on the ROC Story Cloze test and achieve close to
state-of-the-art results while providing frame-based explanations.
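The abstract's two key ideas, probabilistic rules and custom unification functions that relax exact symbol matching, can be illustrated with a minimal backward-chaining sketch. This is not Braid's actual algorithm or API; all predicate names, rule formats, and the similarity-based `soft_unify` below are invented for illustration, with string similarity standing in for whatever learned matcher a real system would use.

```python
from difflib import SequenceMatcher

# Hypothetical knowledge base: one probabilistic rule and one fact
# whose predicate name deliberately does not match the rule body exactly.
RULES = [
    # (head predicate, body predicates, rule confidence)
    ("can_fly", ["is_bird"], 0.9),
]
FACTS = {
    # (predicate, argument) -> confidence; note the inexact name "is_birdy"
    ("is_birdy", "tweety"): 1.0,
}

def soft_unify(a, b):
    """Custom unification function: return a similarity score in [0, 1]
    instead of requiring the exact match of classical unification."""
    return SequenceMatcher(None, a, b).ratio()

def prove(pred, arg, threshold=0.7, depth=3):
    """Backward-chain on pred(arg), combining rule and fact confidences
    multiplicatively; return the best proof score found."""
    if depth == 0:
        return 0.0
    best = 0.0
    # 1. Soft-match the goal against stored facts.
    for (fpred, farg), conf in FACTS.items():
        if farg == arg:
            sim = soft_unify(pred, fpred)
            if sim >= threshold:
                best = max(best, sim * conf)
    # 2. Recurse through probabilistic rules whose head matches the goal.
    for head, body, rule_conf in RULES:
        if head == pred:
            body_score = 1.0
            for b in body:
                body_score *= prove(b, arg, threshold, depth - 1)
            best = max(best, rule_conf * body_score)
    return best

print(round(prove("can_fly", "tweety"), 2))  # a classical reasoner would fail here
```

A classical reasoner would prove nothing, since `is_bird` never exactly matches `is_birdy`; the soft unifier lets the rule fire with a discounted confidence, which is the "brittle matching" fix the abstract describes.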
Related papers
- Neural Probabilistic Logic Learning for Knowledge Graph Reasoning [10.473897846826956]
This paper aims to design a reasoning framework that achieves accurate reasoning on knowledge graphs.
We introduce a scoring module that effectively enhances the expressive power of embedding networks.
We improve the interpretability of the model by incorporating a Markov Logic Network based on variational inference.
arXiv Detail & Related papers (2024-07-04T07:45:46Z)
- Semantic Objective Functions: A distribution-aware method for adding logical constraints in deep learning [4.854297874710511]
Constrained Learning and Knowledge Distillation techniques have shown promising results.
We propose a loss-based method that embeds knowledge (enforces logical constraints) into a machine learning model.
We evaluate our method on a variety of learning tasks, including classification tasks with logic constraints.
arXiv Detail & Related papers (2024-05-03T19:21:47Z)
- Large Language Models as Analogical Reasoners [155.9617224350088]
Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks.
We introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models.
arXiv Detail & Related papers (2023-10-03T00:57:26Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- Boosting Language Models Reasoning with Chain-of-Knowledge Prompting [18.326858925174605]
Chain-of-Knowledge (CoK) prompting aims at eliciting explicit pieces of knowledge evidence in the form of structured triples.
Building on CoK, we additionally introduce an F2-Verification method to estimate the reliability of the reasoning chains.
Extensive experiments demonstrate that our method can further improve the performance of commonsense, factual, symbolic, and arithmetic reasoning tasks.
arXiv Detail & Related papers (2023-06-10T12:42:36Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Discourse-Aware Graph Networks for Textual Logical Reasoning [142.0097357999134]
Passage-level logical relations represent entailment or contradiction between propositional units (e.g., a concluding sentence)
We propose logic structural-constraint modeling to solve the logical reasoning QA and introduce discourse-aware graph networks (DAGNs)
The networks first construct logic graphs leveraging in-line discourse connectives and generic logic theories, then learn logic representations by end-to-end evolving the logic relations with an edge-reasoning mechanism and updating the graph features.
arXiv Detail & Related papers (2022-07-04T14:38:49Z)
- Joint Abductive and Inductive Neural Logical Reasoning [44.36651614420507]
We formulate the problem of the joint abductive and inductive neural logical reasoning (AI-NLR)
First, we incorporate description logic-based ontological axioms to provide the source of concepts.
Then, we represent concepts and queries as fuzzy sets, i.e., sets whose elements have degrees of membership, to bridge concepts and queries with entities.
arXiv Detail & Related papers (2022-05-29T07:41:50Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Fact-driven Logical Reasoning for Machine Reading Comprehension [82.58857437343974]
We are motivated to cover both commonsense and temporary knowledge clues hierarchically.
Specifically, we propose a general formalism of knowledge units by extracting backbone constituents of the sentence.
We then construct a supergraph on top of the fact units, allowing for the benefit of sentence-level (relations among fact groups) and entity-level interactions.
arXiv Detail & Related papers (2021-05-21T13:11:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.