Reasoning Circuits: Few-shot Multihop Question Generation with
Structured Rationales
- URL: http://arxiv.org/abs/2211.08466v1
- Date: Tue, 15 Nov 2022 19:36:06 GMT
- Title: Reasoning Circuits: Few-shot Multihop Question Generation with
Structured Rationales
- Authors: Saurabh Kulshreshtha and Anna Rumshisky
- Abstract summary: Chain-of-thought rationale generation has been shown to improve performance on multi-step reasoning tasks.
We introduce a new framework for applying chain-of-thought inspired structured rationale generation to multi-hop question generation under a very low supervision regime.
- Score: 11.068901022944015
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-hop Question Generation is the task of generating questions which
require the reader to reason over and combine information spread across
multiple passages using several reasoning steps. Chain-of-thought rationale
generation has been shown to improve performance on multi-step reasoning tasks
and make model predictions more interpretable. However, few-shot performance
gains from including rationales have largely been observed only in 100B+
parameter language models, or otherwise require large-scale manual rationale
annotation.
In this work, we introduce a new framework for applying chain-of-thought
inspired structured rationale generation to multi-hop question generation under
a very low supervision regime (8- to 128-shot). We propose to annotate a small
number of examples following our multi-step rationale schema, treating
each reasoning step as a separate task to be performed by a generative language
model. We show that our framework leads to improved control over the difficulty
of the generated questions and better performance compared to baselines trained
without rationales, both on automatic evaluation metrics and in human
evaluation. Importantly, we show that this is achievable with a modest model
size.
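
To make the framework concrete, here is a minimal sketch of treating each
rationale step as its own few-shot generation task. The step names (a
supporting fact per passage, a bridge entity, question composition) and the
prompt wording are illustrative assumptions rather than the paper's exact
schema, and `generate` stands in for any few-shot-prompted language model:

```python
# A minimal sketch, assuming a hypothetical 4-step rationale schema: each
# step is its own generation task conditioned on earlier steps.

def generate(prompt: str) -> str:
    """Placeholder: one call to a generative language model."""
    raise NotImplementedError("plug in a model call here")

RATIONALE_STEPS = [
    ("fact_1", "Passage A:\n{passage_a}\nAnswer: {answer}\n"
               "Step 1. State the fact in Passage A that supports the answer:"),
    ("fact_2", "Passage B:\n{passage_b}\nFact 1: {fact_1}\n"
               "Step 2. State the fact in Passage B that connects to Fact 1:"),
    ("bridge", "Fact 1: {fact_1}\nFact 2: {fact_2}\n"
               "Step 3. Name the bridge entity shared by the two facts:"),
    ("question", "Fact 1: {fact_1}\nFact 2: {fact_2}\nBridge: {bridge}\n"
                 "Answer: {answer}\n"
                 "Step 4. Write a question that needs both facts to answer:"),
]

def generate_multihop_question(passage_a: str, passage_b: str, answer: str):
    state = {"passage_a": passage_a, "passage_b": passage_b, "answer": answer}
    for name, template in RATIONALE_STEPS:
        # Few-shot exemplars annotated with the same schema would be
        # prepended to each prompt; omitted here for brevity.
        state[name] = generate(template.format(**state)).strip()
    return state["question"], state  # the question plus its full rationale
```

Because each step is a small, separate task, a modestly sized model can handle
it, and the intermediate outputs (such as the bridge entity) give a direct
handle on the difficulty of the generated question.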
Related papers
- Soft-Prompting with Graph-of-Thought for Multi-modal Representation Learning [45.517215214938844]
The chain-of-thought technique has been well received in multi-modal tasks.
We propose a novel Aggregation-Graph-of-Thought (AGoT) mechanism for soft-prompt tuning in multi-modal representation learning.
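
For context, a minimal sketch of plain soft-prompt tuning, the mechanism AGoT
builds on: a short sequence of learnable embeddings is prepended to the frozen
backbone's input embeddings, and only those embeddings are trained. Dimensions
and names are illustrative assumptions, and AGoT's graph-based aggregation of
multiple such prompts is not shown:

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, prompt_len: int = 8, dim: int = 768):
        super().__init__()
        # Only these parameters are trained; the backbone stays frozen.
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, dim) from a frozen backbone
        batch = token_embeddings.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeddings], dim=1)
```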
arXiv Detail & Related papers (2024-04-06T07:39:44Z)
- PathFinder: Guided Search over Multi-Step Reasoning Paths [80.56102301441899]
We propose PathFinder, a tree-search-based reasoning path generation approach.
It enhances diverse branching and multi-hop reasoning by integrating dynamic
decoding. Our model generalizes well to longer, unseen reasoning chains, with
complexity comparable to beam search with large branching factors.
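
A minimal sketch of the general tree-search idea: expand a few candidate next
steps per partial chain, score the chains, and keep the best. `propose_steps`
and `score_path` are hypothetical placeholders for the model-based sampler and
scorer, not PathFinder's actual components:

```python
import heapq

def propose_steps(question: str, path: list, k: int) -> list:
    """Placeholder: sample k candidate next reasoning steps from a model."""
    raise NotImplementedError

def score_path(question: str, path: list) -> float:
    """Placeholder: model score (e.g., log-probability) of a partial chain."""
    raise NotImplementedError

def search_reasoning_path(question, max_depth=4, beam_width=4, branch=3):
    beams = [[]]  # start from the empty reasoning chain
    for _ in range(max_depth):
        candidates = [path + [step]
                      for path in beams
                      for step in propose_steps(question, path, branch)]
        # Diversity constraints could prune near-duplicate steps here.
        beams = heapq.nlargest(beam_width, candidates,
                               key=lambda p: score_path(question, p))
    return max(beams, key=lambda p: score_path(question, p))
```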
arXiv Detail & Related papers (2023-12-08T17:05:47Z)
- Boosting the Power of Small Multimodal Reasoning Models to Match Larger Models with Self-Consistency Training [49.3242278912771]
Multimodal reasoning is a challenging task that requires models to reason across multiple modalities to answer questions.
Existing approaches have made progress by incorporating language and visual modalities into a two-stage reasoning framework.
We propose MC-CoT, a self-consistency training strategy that generates multiple rationales and answers, subsequently selecting the most accurate through a voting process.
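
The voting step can be sketched as follows; `generate_rationale_and_answer` is
a hypothetical placeholder for one stochastic model run, and the way MC-CoT
feeds the voted output back into training is not shown:

```python
from collections import Counter

def generate_rationale_and_answer(question: str) -> tuple:
    """Placeholder: one sampled model run returning (rationale, answer)."""
    raise NotImplementedError

def self_consistent_answer(question: str, n_samples: int = 8):
    samples = [generate_rationale_and_answer(question) for _ in range(n_samples)]
    voted, _ = Counter(ans for _, ans in samples).most_common(1)[0]
    # Keep one rationale that led to the winning answer.
    rationale = next(r for r, a in samples if a == voted)
    return voted, rationale
```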
arXiv Detail & Related papers (2023-11-23T17:09:48Z)
- Large Language Models as Analogical Reasoners [155.9617224350088]
Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks.
We introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models.
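
A minimal sketch of what such a prompt could look like: the model is asked to
recall relevant exemplars itself before solving the target problem. The
wording below is an illustrative assumption, not the paper's exact template:

```python
ANALOGICAL_PROMPT = """\
Problem: {problem}

First, recall {n} relevant and distinct example problems and solve each.
Then, drawing on those examples, solve the problem above step by step.
"""

def build_prompt(problem: str, n: int = 3) -> str:
    return ANALOGICAL_PROMPT.format(problem=problem, n=n)
```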
arXiv Detail & Related papers (2023-10-03T00:57:26Z)
- Teaching Smaller Language Models To Generalise To Unseen Compositional Questions [6.9076450524134145]
We propose multitask pretraining on up to 93 tasks designed to instill diverse reasoning abilities.
We show that performance can be significantly improved by adding retrieval-augmented training datasets.
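
One way such retrieval augmentation can be sketched: retrieved passages are
prepended to each question so the smaller model learns to reason over supplied
context. `retrieve` is a hypothetical placeholder, and the multitask mixture of
up to 93 tasks is not shown:

```python
def retrieve(question: str, k: int = 3) -> list:
    """Placeholder: return the top-k passages from a retriever."""
    raise NotImplementedError

def make_training_example(question: str, answer: str) -> dict:
    passages = retrieve(question)
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return {"input": f"Context:\n{context}\n\nQuestion: {question}",
            "target": answer}
```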
arXiv Detail & Related papers (2023-08-02T05:00:12Z)
- HOP, UNION, GENERATE: Explainable Multi-hop Reasoning without Rationale Supervision [118.0818807474809]
This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision.
Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and sentences within a document.
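
A minimal sketch of the underlying probabilistic idea: treat each candidate
set of sentences as a rationale and marginalize the answer probability over
those sets. Both scoring functions are hypothetical placeholders, and the
paper's actual training objective may differ:

```python
import itertools
import math

def log_p_rationale(question, sentence_set) -> float:
    """Placeholder: log p(rationale set | question)."""
    raise NotImplementedError

def log_p_answer(question, sentence_set, answer) -> float:
    """Placeholder: log p(answer | question, rationale set)."""
    raise NotImplementedError

def answer_log_prob(question, sentences, answer, set_size=2) -> float:
    # log p(a | q) = log sum over rationale sets R of p(R | q) * p(a | q, R)
    terms = [log_p_rationale(question, r) + log_p_answer(question, r, answer)
             for r in itertools.combinations(sentences, set_size)]
    m = max(terms)  # log-sum-exp for numerical stability
    return m + math.log(sum(math.exp(t - m) for t in terms))
```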
arXiv Detail & Related papers (2023-05-23T16:53:49Z)
- Chaining Simultaneous Thoughts for Numerical Reasoning [92.2007997126144]
Numerical reasoning over text should be an essential skill of AI systems.
Previous work has focused on modeling the structure of equations and proposed
various structured decoders.
We propose CANTOR, a numerical reasoner that models reasoning steps using a directed acyclic graph.
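
A minimal sketch of the DAG idea: each reasoning step is a node that applies
an operation to input quantities or earlier nodes, and the graph is evaluated
in topological order. The operations and the worked example are illustrative
assumptions, not CANTOR's actual decoder:

```python
import operator

OPS = {"add": operator.add, "sub": operator.sub,
       "mul": operator.mul, "div": operator.truediv}

def evaluate_dag(inputs: dict, steps: list) -> dict:
    """steps: (name, op, arg_a, arg_b) tuples in topological order;
    arguments name either inputs or earlier steps."""
    values = dict(inputs)
    for name, op, a, b in steps:
        values[name] = OPS[op](values[a], values[b])
    return values

# "Tickets cost $12; 3 adults and 2 children attend; children get $4 off."
values = evaluate_dag(
    {"price": 12, "adults": 3, "kids": 2, "discount": 4},
    [("adult_cost", "mul", "adults", "price"),   # 36
     ("kid_price", "sub", "price", "discount"),  # 8
     ("kid_cost", "mul", "kids", "kid_price"),   # 16
     ("total", "add", "adult_cost", "kid_cost")],
)
assert values["total"] == 52
```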
arXiv Detail & Related papers (2022-11-29T18:52:06Z)
- Interlock-Free Multi-Aspect Rationalization for Text Classification [33.33452117387646]
We show that our approach addresses the interlocking problem in the multi-aspect setting.
We propose a multi-stage training method incorporating an additional self-supervised contrastive loss.
Empirical results on the beer review dataset show that our method significantly improves rationalization performance.
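
A generic InfoNCE-style contrastive loss of the kind such a training stage
could add: matched rationale representations are pulled together and the other
rows in the batch are pushed apart. This is an illustrative assumption, not
the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def info_nce(anchors: torch.Tensor, positives: torch.Tensor,
             tau: float = 0.1) -> torch.Tensor:
    # anchors, positives: (batch, dim); row i of each forms a positive pair.
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / tau  # pairwise similarities
    labels = torch.arange(a.size(0), device=logits.device)  # diagonal = positive
    return F.cross_entropy(logits, labels)
```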
arXiv Detail & Related papers (2022-05-13T16:38:38Z)
- Graph-based Multi-hop Reasoning for Long Text Generation [66.64743847850666]
MRG consists of two parts: a graph-based multi-hop reasoning module and a
path-aware sentence realization module. Unlike previous black-box models, MRG
explicitly infers the skeleton path, which provides explanatory views into how
the proposed model works.
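
A toy sketch of the two-part design: infer a skeleton path over a concept
graph, then realize one sentence per hop. BFS and the string template are
stand-ins for the learned reasoning and realization modules, and the graph
and phrasing are invented for illustration:

```python
from collections import deque

def infer_skeleton_path(graph: dict, start: str, goal: str):
    """Breadth-first stand-in for the multi-hop reasoning module."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def realize(path: list) -> str:
    """Stand-in for the path-aware sentence realization module."""
    return " ".join(f"The {a} leads to {b}." for a, b in zip(path, path[1:]))

graph = {"storm": ["flooding"], "flooding": ["road closures"],
         "road closures": ["delays"]}
print(realize(infer_skeleton_path(graph, "storm", "delays")))
```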
arXiv Detail & Related papers (2020-09-28T12:47:59Z)
This list is automatically generated from the titles and abstracts of the
papers on this site. The site does not guarantee the quality of this
information and is not responsible for any consequences of its use.