Abstract Reasoning via Logic-guided Generation
- URL: http://arxiv.org/abs/2107.10493v1
- Date: Thu, 22 Jul 2021 07:28:24 GMT
- Title: Abstract Reasoning via Logic-guided Generation
- Authors: Sihyun Yu, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Abstract reasoning, i.e., inferring complicated patterns from given
observations, is a central building block of artificial general intelligence.
While humans find the answer by either eliminating wrong candidates or first
constructing the answer, prior deep neural network (DNN)-based methods focus on
the former discriminative approach. This paper aims to design a framework for
the latter approach and bridge the gap between artificial and human
intelligence. To this end, we propose logic-guided generation (LoGe), a novel
generative DNN framework that reduces abstract reasoning to an optimization
problem in propositional logic. LoGe consists of three steps: extracting
propositional variables from images, inferring the answer variables with a
logic layer, and reconstructing the answer image from those variables. We
demonstrate that LoGe outperforms black-box DNN frameworks for generative
abstract reasoning on the RAVEN benchmark, i.e., it reconstructs answers by
capturing the correct rules over various attributes from the observations.
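The "logic layer as optimization" idea above can be illustrated with a toy sketch. Note that this is not the paper's implementation: the 3x3 grid, the single "count" attribute, the two row rules, and the brute-force search are all hypothetical stand-ins for LoGe's propositional-logic optimization, which operates on variables extracted from images by a neural network.

```python
from itertools import product

# Hypothetical mini-RAVEN: each panel has one attribute, "count", in {1, 2, 3}.
# The bottom-right panel is missing (None); every row must satisfy a rule.
DOMAIN = (1, 2, 3)

def row_rule_holds(row):
    """A row satisfies a rule if counts are constant or increase by 1."""
    a, b, c = row
    return (a == b == c) or (b == a + 1 and c == b + 1)

def solve(grid):
    """Brute-force the assignment to missing cells that satisfies every
    row rule -- a toy stand-in for the logic-layer optimization step."""
    missing = [(r, c) for r in range(3) for c in range(3) if grid[r][c] is None]
    for values in product(DOMAIN, repeat=len(missing)):
        candidate = [row[:] for row in grid]
        for (r, c), v in zip(missing, values):
            candidate[r][c] = v
        if all(row_rule_holds(row) for row in candidate):
            return candidate
    return None

grid = [[1, 2, 3],
        [2, 2, 2],
        [1, 2, None]]
print(solve(grid)[2])  # -> [1, 2, 3]
```

In the actual framework the final step would then render an answer image from the inferred variables, rather than printing them.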
Related papers
- Unveiling the Magic of Code Reasoning through Hypothesis Decomposition and Amendment [54.62926010621013]
We introduce a novel task, code reasoning, to provide a new perspective on the reasoning abilities of large language models.
We summarize three meta-benchmarks based on established forms of logical reasoning and instantiate them as eight specific benchmark tasks.
We present a new pathway-exploration pipeline inspired by the intricate problem-solving methods of humans.
arXiv Detail & Related papers (2025-02-17T10:39:58Z) - Abductive Symbolic Solver on Abstraction and Reasoning Corpus [5.903948032748941]
Humans solve visual reasoning tasks based on their observations and hypotheses, and they can explain their solutions with a proper reason.
Previous approaches focused only on grid transitions, which is not enough for AI to provide reasonable, human-like solutions.
We propose a novel framework that symbolically represents the observed data into a knowledge graph and extracts core knowledge that can be used for solution generation.
arXiv Detail & Related papers (2024-11-27T09:09:00Z) - P-FOLIO: Evaluating and Improving Logical Reasoning with Abundant Human-Written Reasoning Chains [97.25943550933829]
We present P-FOLIO, a human-annotated dataset consisting of diverse and complex reasoning chains.
We use P-FOLIO to evaluate and improve large-language-model (LLM) reasoning capabilities.
arXiv Detail & Related papers (2024-10-11T19:22:57Z) - Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z) - LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
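The fuzzy-logic continuous relaxation mentioned above can be sketched with a product t-norm, under which logical connectives become differentiable arithmetic on truth values in [0, 1], so a rule's violation can serve as an extra training loss. The rule, probabilities, and function names below are illustrative assumptions, not LOGICSEG's actual formulation.

```python
# Product t-norm relaxation of propositional connectives over [0, 1].
def t_not(p):        return 1.0 - p
def t_and(p, q):     return p * q            # product t-norm
def t_or(p, q):      return p + q - p * q    # probabilistic sum
def t_implies(p, q): return t_or(t_not(p), q)

# Hypothetical network outputs: P(cat(x)) and P(animal(x)).
p_cat, p_animal = 0.9, 0.95

# Ground the rule "cat(x) -> animal(x)" and penalize its violation.
truth = t_implies(p_cat, p_animal)
logic_loss = 1.0 - truth
```

Because every operation is smooth, the loss backpropagates through the network that produced the truth values, which is the sense in which logical formulae can be "grounded onto neural computational graphs".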
arXiv Detail & Related papers (2023-09-24T05:43:19Z) - MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z) - Joint Abductive and Inductive Neural Logical Reasoning [44.36651614420507]
We formulate the problem of joint abductive and inductive neural logical reasoning (AI-NLR).
First, we incorporate description logic-based ontological axioms to provide the source of concepts.
Then, we represent concepts and queries as fuzzy sets, i.e., sets whose elements have degrees of membership, to bridge concepts and queries with entities.
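The fuzzy-set representation above can be sketched in a few lines: a concept maps entities to membership degrees in [0, 1], and set operations become elementwise min/max. The concepts, entities, and degrees below are made up for illustration and are not from the paper.

```python
# A fuzzy set is a dict from entity -> membership degree in [0, 1].
def fuzzy_and(a, b):
    """Intersection of two fuzzy sets via the min t-norm."""
    return {e: min(a.get(e, 0.0), b.get(e, 0.0)) for e in set(a) | set(b)}

def fuzzy_or(a, b):
    """Union of two fuzzy sets via the max t-conorm."""
    return {e: max(a.get(e, 0.0), b.get(e, 0.0)) for e in set(a) | set(b)}

# Hypothetical concepts with graded membership over entities.
scientist = {"curie": 0.9, "einstein": 1.0, "mozart": 0.1}
musician  = {"curie": 0.1, "einstein": 0.3, "mozart": 1.0}

both = fuzzy_and(scientist, musician)  # e.g. both["einstein"] == 0.3
```

A query such as "scientist AND musician" then evaluates to another fuzzy set over entities, which is how fuzzy sets bridge concepts and queries with entities.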
arXiv Detail & Related papers (2022-05-29T07:41:50Z) - LOREN: Logic Enhanced Neural Reasoning for Fact Verification [24.768868510218002]
We propose LOREN, a novel approach for fact verification that integrates Logic-guided Reasoning and Neural inference.
Instead of directly validating a single reasoning unit, LOREN turns it into a question-answering task.
Experiments show that our proposed LOREN outperforms previously published methods and achieves a FEVER score of 73.43%.
arXiv Detail & Related papers (2020-12-25T13:57:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.