GMH: A General Multi-hop Reasoning Model for KG Completion
- URL: http://arxiv.org/abs/2010.07620v3
- Date: Thu, 2 Sep 2021 07:49:24 GMT
- Title: GMH: A General Multi-hop Reasoning Model for KG Completion
- Authors: Yao Zhang, Hongru Liang, Adam Jatowt, Wenqiang Lei, Xin Wei, Ning
Jiang, Zhenglu Yang
- Abstract summary: Current models typically perform short-distance reasoning.
Long-distance reasoning is also vital, as it can connect superficially unrelated entities.
We propose a general model that resolves both issues with three modules.
- Score: 37.01406934111068
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge graphs are essential for numerous downstream natural language
processing applications, but they are typically incomplete, with many facts missing.
This has motivated research on the multi-hop reasoning task, which can be
formulated as a search process; current models typically perform short-distance
reasoning. However, long-distance reasoning is also vital, as it can connect
superficially unrelated entities. To the best of our knowledge, a general
framework that approaches multi-hop reasoning in mixed long-short distance
scenarios is still lacking. We argue that there are two key issues for a general
multi-hop reasoning model: i) where to go, and ii) when to stop. Therefore, we
propose a general model that resolves these issues with three modules: 1) the
local-global knowledge module to estimate the possible paths, 2) the
differentiated action dropout module to explore a diverse set of paths, and 3)
the adaptive stopping search module to avoid over-searching. Comprehensive
results on three datasets demonstrate the superiority of our model, with
significant improvements over baselines in both short- and long-distance
reasoning scenarios.
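Multi-hop KG reasoning as a search process can be sketched minimally as follows. This is an illustration only, not the paper's GMH implementation: the toy graph, entity names, and the dropout/stopping heuristics are all assumptions standing in for the learned modules described above.

```python
import random

# Toy knowledge graph: entity -> list of (relation, target) edges.
# All names are illustrative, not from the paper's datasets.
KG = {
    "Einstein": [("born_in", "Ulm"), ("field", "Physics")],
    "Ulm": [("located_in", "Germany")],
    "Germany": [("part_of", "Europe")],
    "Physics": [("subfield_of", "Science")],
}

def multi_hop_search(start, target, max_hops=4, dropout=0.0, seed=0):
    """Breadth-first multi-hop search with optional action dropout.

    `dropout` randomly masks outgoing edges during exploration (a crude
    stand-in for differentiated action dropout), and the search halts as
    soon as the target is reached (a crude stand-in for adaptive
    stopping). Returns the relation path found, or None.
    """
    rng = random.Random(seed)
    frontier = [(start, [])]  # (entity, relation path so far)
    for _ in range(max_hops):
        next_frontier = []
        for entity, path in frontier:
            for relation, nxt in KG.get(entity, []):
                if rng.random() < dropout:
                    continue  # masked action: forces other paths to be explored
                new_path = path + [relation]
                if nxt == target:
                    return new_path  # stop: no need to search deeper
                next_frontier.append((nxt, new_path))
        frontier = next_frontier
    return None
```

A three-hop query such as `multi_hop_search("Einstein", "Europe")` returns the relation path `["born_in", "located_in", "part_of"]`, illustrating how long-distance search connects superficially unrelated entities.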
Related papers
- To Think or Not To Think, That is The Question for Large Reasoning Models in Theory of Mind Tasks [56.11584171938381]
Theory of Mind (ToM) assesses whether models can infer hidden mental states such as beliefs, desires, and intentions. Recent progress in Large Reasoning Models (LRMs) has boosted step-by-step inference in mathematics and coding. We present a systematic study of nine advanced Large Language Models (LLMs), comparing reasoning models with non-reasoning models.
arXiv Detail & Related papers (2026-02-11T08:16:13Z) - Knowledge Graphs are Implicit Reward Models: Path-Derived Signals Enable Compositional Reasoning [4.464939140209426]
We propose a bottom-up learning paradigm in which models are grounded in axiomatic domain facts and compose them to solve complex, unseen tasks. By deriving novel reward signals from knowledge graph paths, we provide verifiable, scalable, and grounded supervision. Our experiments show that path-derived rewards act as a "compositional bridge", enabling our model to significantly outperform larger models.
arXiv Detail & Related papers (2026-01-21T16:38:59Z) - MIRAGE: Multi-hop Reasoning with Ambiguity Evaluation for Illusory Questions [25.695038634265]
Real-world Multi-hop Question Answering (QA) often involves ambiguity that is inseparable from the reasoning process itself. This ambiguity creates a distinct challenge, where multiple reasoning paths emerge from a single question. We introduce MultI-hop Reasoning with AmbiGuity Evaluation for Illusory Questions (MIRAGE) to analyze and evaluate this challenging intersection.
arXiv Detail & Related papers (2025-09-26T07:31:01Z) - Hop, Skip, and Overthink: Diagnosing Why Reasoning Models Fumble during Multi-Hop Analysis [3.711555701154055]
Reasoning models and their integration into practical AI chatbots have led to breakthroughs in solving advanced math, deep search, and extractive question answering problems. Yet, a complete understanding of why these models hallucinate more than general-purpose language models is missing. In this study, we systematically explore reasoning failures of contemporary language models on multi-hop question answering tasks.
arXiv Detail & Related papers (2025-08-06T17:58:36Z) - Socratic-MCTS: Test-Time Visual Reasoning by Asking the Right Questions [100.41062461003389]
We show that framing reasoning as a search process helps the model "connect the dots" between fragmented knowledge and produce extended reasoning traces in non-reasoning models. We evaluate our method across three benchmarks and observe consistent improvements.
arXiv Detail & Related papers (2025-06-10T15:51:16Z) - Self-Critique Guided Iterative Reasoning for Multi-hop Question Answering [24.446222685949227]
Large language models (LLMs) face challenges in knowledge-intensive multi-hop reasoning. We propose Self-Critique Guided Iterative Reasoning (SiGIR), which uses self-critique feedback to guide the iterative reasoning process.
arXiv Detail & Related papers (2025-05-25T12:10:24Z) - Reasoning Large Language Model Errors Arise from Hallucinating Critical Problem Features [0.0]
We test o1-mini, o3-mini, DeepSeek-R1, Claude 3.7 Sonnet, Gemini 2.5 Pro Preview, and Grok 3 Mini Beta on graph coloring as a variable-complexity constraint-satisfaction logic problem. We find evidence from both error rate comparisons and CoT/explanation text analysis that RLLMs are prone to hallucinate edges not specified in the prompt's description of the graph.
arXiv Detail & Related papers (2025-05-17T21:55:12Z) - ShorterBetter: Guiding Reasoning Models to Find Optimal Inference Length for Efficient Reasoning [1.170732359523702]
Reasoning models such as OpenAI o3 and DeepSeek-R1 have demonstrated strong performance on reasoning-intensive tasks.
Long reasoning traces can facilitate a more thorough exploration of solution paths for complex problems.
We introduce ShorterBetter, a simple yet effective reinforcement learning method that enables reasoning language models to discover their own optimal CoT lengths.
arXiv Detail & Related papers (2025-04-30T07:04:19Z) - How Do LLMs Perform Two-Hop Reasoning in Context? [76.79936191530784]
Two-hop reasoning refers to the process of inferring a conclusion by making two logical steps. Despite recent progress in large language models (LLMs), we surprisingly find that they can fail at solving simple two-hop reasoning problems. We train a 3-layer Transformer from scratch on a synthetic two-hop reasoning task and reverse-engineer its internal information flow.
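A synthetic two-hop item of the kind such probes use can be generated in a few lines. This is a loose sketch of the setup only; the entity names, prompt format, and generator are illustrative assumptions, not the paper's actual data pipeline.

```python
import random

def make_two_hop_example(rng, entities="abcdefgh"):
    """Generate one synthetic two-hop reasoning item (illustrative
    format): given facts a -> b and b -> c, the two-hop answer
    starting from a is c."""
    a, b, c = rng.sample(entities, 3)
    context = f"{a} maps to {b}. {b} maps to {c}."
    question = f"Starting from {a}, what do you reach in two hops?"
    return context, question, c
```

A model trained on many such items must chain the two stated facts rather than copy either one, which is exactly the behavior the reverse-engineering study inspects.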
arXiv Detail & Related papers (2025-02-19T17:46:30Z) - The Jumping Reasoning Curve? Tracking the Evolution of Reasoning Performance in GPT-[n] and o-[n] Models on Multimodal Puzzles [29.214813685163218]
OpenAI's releases of o1 and o3 mark a paradigm shift in Large Language Models towards advanced reasoning capabilities.
We track the evolution of the GPT-[n] and o-[n] series models on challenging multimodal puzzles.
The superior performance of o1 comes at nearly 750 times the computational cost of GPT-4o, raising concerns about its efficiency.
arXiv Detail & Related papers (2025-02-03T05:47:04Z) - Reasoning Paths Optimization: Learning to Reason and Explore From Diverse Paths [69.39559168050923]
We introduce Reasoning Paths Optimization (RPO), which enables learning to reason and explore from diverse paths.
Our approach encourages favorable branches at each reasoning step while penalizing unfavorable ones, enhancing the model's overall problem-solving performance.
We focus on multi-step reasoning tasks, such as math word problems and science-based exam questions.
arXiv Detail & Related papers (2024-10-07T06:37:25Z) - ACQUIRED: A Dataset for Answering Counterfactual Questions In Real-Life
Videos [53.92440577914417]
ACQUIRED consists of 3.9K annotated videos, encompassing a wide range of event types and incorporating both first and third-person viewpoints.
Each video is annotated with questions that span three distinct dimensions of reasoning, including physical, social, and temporal.
We benchmark our dataset against several state-of-the-art language-only and multimodal models and experimental results demonstrate a significant performance gap.
arXiv Detail & Related papers (2023-11-02T22:17:03Z) - Getting MoRE out of Mixture of Language Model Reasoning Experts [71.61176122960464]
We propose a Mixture-of-Reasoning-Experts (MoRE) framework that ensembles diverse specialized language models.
We specialize the backbone language model with prompts optimized for different reasoning categories, including factual, multihop, mathematical, and commonsense reasoning.
Our human study confirms that presenting expert predictions and the answer selection process helps annotators more accurately calibrate when to trust the system's output.
arXiv Detail & Related papers (2023-05-24T02:00:51Z) - Measuring and Narrowing the Compositionality Gap in Language Models [116.5228850227024]
We measure how often models can correctly answer all sub-problems but not generate the overall solution.
We present a new method, self-ask, that further improves on chain of thought.
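The gap being measured can be sketched as a simple counting function. This is a hedged illustration of the idea (among questions whose sub-problems are all answered correctly, how often does the composed answer still fail); the paper's exact definition may differ, and the function and variable names are assumptions.

```python
def compositionality_gap(subs_correct, full_correct):
    """Fraction of questions where all sub-problems were answered
    correctly but the overall (composed) answer was still wrong.
    Inputs are parallel boolean lists, one entry per question
    (illustrative representation)."""
    composed = [f for s, f in zip(subs_correct, full_correct) if s]
    if not composed:
        return 0.0
    return sum(1 for f in composed if not f) / len(composed)
```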
arXiv Detail & Related papers (2022-10-07T06:50:23Z) - Faithful Reasoning Using Large Language Models [12.132449274592668]
We show how LMs can be made to perform faithful multi-step reasoning via a process whose causal structure mirrors the underlying logical structure of the problem.
Our approach works by chaining together reasoning steps, where each step results from calls to two fine-tuned LMs.
We demonstrate the effectiveness of our model on multi-step logical deduction and scientific question-answering, showing that it outperforms baselines on final answer accuracy.
arXiv Detail & Related papers (2022-08-30T13:44:41Z) - Is Multi-Hop Reasoning Really Explainable? Towards Benchmarking
Reasoning Interpretability [33.220997121043965]
We propose a unified framework to quantitatively evaluate the interpretability of multi-hop reasoning models.
In specific, we define three metrics including path recall, local interpretability, and global interpretability for evaluation.
Results show that the interpretability of current multi-hop reasoning models is less satisfactory and is still far from the upper bound given by our benchmark.
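As a rough sketch of the first metric, path recall can be computed as the fraction of test triples for which a model outputs at least one supporting reasoning path. The function and variable names here are illustrative assumptions, not the benchmark's actual code.

```python
def path_recall(predicted_paths, gold_supported):
    """Fraction of test triples for which the model found at least one
    reasoning path. `predicted_paths` maps a triple to the list of paths
    the model produced; `gold_supported` is the set of test triples that
    have at least one ground-truth supporting path."""
    if not gold_supported:
        return 0.0
    hits = sum(1 for triple in gold_supported if predicted_paths.get(triple))
    return hits / len(gold_supported)
```

Local and global interpretability would then score, respectively, how plausible each found path is and how plausible the model's paths are overall, which is where the manual/benchmark annotation comes in.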
arXiv Detail & Related papers (2021-04-14T10:12:05Z) - Graph-based Multi-hop Reasoning for Long Text Generation [66.64743847850666]
MRG consists of two parts: a graph-based multi-hop reasoning module and a path-aware sentence realization module.
Unlike previous black-box models, MRG explicitly infers the skeleton path, which provides explanatory views to understand how the proposed model works.
arXiv Detail & Related papers (2020-09-28T12:47:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.