Abstract Reasoning via Logic-guided Generation
- URL: http://arxiv.org/abs/2107.10493v1
- Date: Thu, 22 Jul 2021 07:28:24 GMT
- Title: Abstract Reasoning via Logic-guided Generation
- Authors: Sihyun Yu, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin
- Abstract summary: Abstract reasoning, i.e., inferring complicated patterns from given observations, is a central building block of artificial general intelligence.
This paper aims to design a framework for the latter approach and bridge the gap between artificial and human intelligence.
We propose logic-guided generation (LoGe), a novel generative DNN framework that reduces abstract reasoning to an optimization problem in propositional logic.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Abstract reasoning, i.e., inferring complicated patterns from given
observations, is a central building block of artificial general intelligence.
While humans find the answer by either eliminating wrong candidates or first
constructing the answer, prior deep neural network (DNN)-based methods focus on
the former discriminative approach. This paper aims to design a framework for
the latter approach and bridge the gap between artificial and human
intelligence. To this end, we propose logic-guided generation (LoGe), a novel
generative DNN framework that reduces abstract reasoning to an optimization
problem in propositional logic. LoGe is composed of three steps: extract
propositional variables from images, reason the answer variables with a logic
layer, and reconstruct the answer image from the variables. We demonstrate
that LoGe outperforms black-box DNN frameworks for generative abstract
reasoning on the RAVEN benchmark, i.e., it reconstructs answers by capturing
the correct rules of various attributes from the observations.
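The three LoGe stages described in the abstract can be sketched as a pipeline. This is a hypothetical toy illustration, not the authors' implementation: the function names, the bit-string encoding of panels, and the brute-force satisfiability search standing in for the paper's logic layer are all assumptions.

```python
# Hypothetical sketch of the three LoGe stages: extract propositional
# variables, reason with a logic layer, reconstruct the answer image.

def extract_variables(panel):
    """Stage 1: map an image panel to binary propositional variables
    (toy encoding: the panel is given as a bit-string, e.g. "101")."""
    return [int(b) for b in panel]

def logic_layer(context_vars, rules):
    """Stage 2: pick the answer assignment satisfying the most rules
    (a brute-force stand-in for the paper's logic optimization)."""
    n = len(context_vars[0])
    best, best_score = None, -1
    for cand in range(2 ** n):
        answer = [(cand >> i) & 1 for i in range(n)]
        score = sum(rule(context_vars, answer) for rule in rules)
        if score > best_score:
            best, best_score = answer, score
    return best

def reconstruct(answer_vars):
    """Stage 3: decode the variables back into an image (toy: a bit-string)."""
    return "".join(str(v) for v in answer_vars)

# Toy rule: the answer repeats the last context panel's attributes.
rules = [lambda ctx, ans: ans == ctx[-1]]
context = [extract_variables(p) for p in ["100", "010", "001"]]
print(reconstruct(logic_layer(context, rules)))  # "001"
```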
Related papers
- Aggregation of Reasoning: A Hierarchical Framework for Enhancing Answer Selection in Large Language Models [84.15513004135576]
Current research enhances the reasoning performance of Large Language Models (LLMs) by sampling multiple reasoning chains and ensembling based on the answer frequency.
This approach fails in scenarios where the correct answers are in the minority.
We introduce a hierarchical reasoning aggregation framework AoR, which selects answers based on the evaluation of reasoning chains.
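The contrast above can be made concrete with a small sketch. The scoring function below is an invented stand-in for AoR's evaluation of reasoning chains; the point is only that aggregating chain quality per answer can recover a correct minority answer that frequency-based ensembling discards.

```python
# Contrast answer-frequency ensembling with evaluation-based selection.
from collections import Counter

chains = [
    {"answer": "12", "score": 0.3},   # flawed chains that agree
    {"answer": "12", "score": 0.2},
    {"answer": "15", "score": 0.9},   # correct but in the minority
]

# Frequency ensembling (self-consistency): pick the majority answer.
majority = Counter(c["answer"] for c in chains).most_common(1)[0][0]

# AoR-style selection: aggregate a quality score per answer instead.
by_answer = {}
for c in chains:
    by_answer.setdefault(c["answer"], []).append(c["score"])
best = max(by_answer, key=lambda a: sum(by_answer[a]) / len(by_answer[a]))

print(majority, best)  # 12 15
```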
arXiv Detail & Related papers (2024-05-21T17:12:19Z)
- Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
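The fuzzy-logic relaxation mentioned above replaces Boolean connectives with differentiable surrogates so that a formula's truth value can drive network training. The sketch below uses the product t-norm; the specific t-norm choice and the example formula are illustrative assumptions, not necessarily LOGICSEG's.

```python
# Fuzzy (product) relaxation of logical connectives: each operator maps
# truth degrees in [0, 1] to a truth degree in [0, 1], differentiably.

def f_and(a, b):      # product t-norm: soft conjunction
    return a * b

def f_or(a, b):       # probabilistic sum: soft disjunction
    return a + b - a * b

def f_not(a):         # soft negation
    return 1.0 - a

def f_implies(a, b):  # a -> b  ==  (not a) or b
    return f_or(f_not(a), b)

# Ground "dog(x) -> animal(x)" on predicted probabilities; the truth
# value is differentiable, so (1 - value) can serve as a logic loss.
p_dog, p_animal = 0.9, 0.7
print(round(f_implies(p_dog, p_animal), 2))  # 0.73
```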
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- LAMBADA: Backward Chaining for Automated Reasoning in Natural Language [11.096348678079574]
A backward-chaining algorithm, called LAMBADA, decomposes reasoning into four sub-modules.
We show that LAMBADA achieves sizable accuracy boosts over state-of-the-art forward reasoning methods.
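Backward chaining works from the goal toward the facts: it looks for rules whose conclusion matches the current goal and recurses on their premises. The toy sketch below illustrates that control flow; the rule format and the mapping to LAMBADA's sub-modules are simplified assumptions, not the paper's natural-language setup.

```python
# Toy backward chaining: prove a goal by recursing from conclusions
# back to known facts.

facts = {"penguin(tweety)"}
rules = [
    (["penguin(tweety)"], "bird(tweety)"),
    (["bird(tweety)"], "has_wings(tweety)"),
]

def prove(goal, depth=5):
    if depth == 0:                          # avoid infinite regress
        return False
    if goal in facts:                       # fact check
        return True
    for premises, conclusion in rules:      # rule selection
        if conclusion == goal:              # goal decomposition
            if all(prove(p, depth - 1) for p in premises):
                return True
    return False

print(prove("has_wings(tweety)"))  # True
```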
arXiv Detail & Related papers (2022-12-20T18:06:03Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Joint Abductive and Inductive Neural Logical Reasoning [44.36651614420507]
We formulate the problem of joint abductive and inductive neural logical reasoning (AI-NLR).
First, we incorporate description logic-based ontological axioms to provide the source of concepts.
Then, we represent concepts and queries as fuzzy sets, i.e., sets whose elements have degrees of membership, to bridge concepts and queries with entities.
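A fuzzy set, as used above, assigns each entity a degree of membership in [0, 1], and queries combine concepts with soft set operations. The sketch below is an invented illustration (the concept names, entities, and the min t-norm for intersection are all assumptions).

```python
# Toy fuzzy-set representation: concepts map entities to membership
# degrees; a conjunctive query intersects concepts with min.

concept_parent = {"alice": 0.9, "bob": 0.1, "carol": 0.8}
concept_doctor = {"alice": 0.2, "bob": 0.9, "carol": 0.7}

def f_intersect(s, t):  # min t-norm: soft "Parent AND Doctor"
    return {e: min(s.get(e, 0.0), t.get(e, 0.0)) for e in set(s) | set(t)}

query = f_intersect(concept_parent, concept_doctor)
top = max(query, key=query.get)
print(top, query[top])  # carol 0.7
```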
arXiv Detail & Related papers (2022-05-29T07:41:50Z)
- On Consistency in Graph Neural Network Interpretation [34.25952902469481]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed, but most of them formalize this task as a search for the minimal subgraph.
We propose a simple yet effective countermeasure by aligning embeddings.
arXiv Detail & Related papers (2022-05-27T02:58:07Z)
- LOREN: Logic Enhanced Neural Reasoning for Fact Verification [24.768868510218002]
We propose LOREN, a novel approach for fact verification that integrates Logic guided Reasoning and Neural inference.
Instead of directly validating a single reasoning unit, LOREN turns it into a question-answering task.
Experiments show that LOREN outperforms previously published methods and achieves a FEVER score of 73.43%.
arXiv Detail & Related papers (2020-12-25T13:57:04Z)
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction as well as being able to generalize to other tasks.
Existing methods either suffer from the problem of searching in a large search space or ineffective optimization due to sparse rewards.
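The interpretable rules mentioned above are typically chain rules over relations. The sketch below applies one such rule to a tiny knowledge graph; the rule "born_in(x, c) AND city_of(c, y) -> nationality(x, y)" and the triples are invented examples of the kind of rule RNNLogic learns, not results from the paper.

```python
# Apply a learned chain rule r1(x, m) AND r2(m, y) -> head(x, y)
# to a knowledge graph stored as (subject, relation, object) triples.

triples = {
    ("marie", "born_in", "warsaw"),
    ("warsaw", "city_of", "poland"),
}

def apply_chain_rule(kg, r1, r2, head):
    derived = set()
    for x, rel1, mid in kg:
        if rel1 != r1:
            continue
        for mid2, rel2, y in kg:
            if rel2 == r2 and mid2 == mid:   # join on the middle entity
                derived.add((x, head, y))
    return derived

print(apply_chain_rule(triples, "born_in", "city_of", "nationality"))
# {('marie', 'nationality', 'poland')}
```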
arXiv Detail & Related papers (2020-10-08T14:47:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.