SCREWS: A Modular Framework for Reasoning with Revisions
- URL: http://arxiv.org/abs/2309.13075v1
- Date: Wed, 20 Sep 2023 15:59:54 GMT
- Title: SCREWS: A Modular Framework for Reasoning with Revisions
- Authors: Kumar Shridhar, Harsh Jhamtani, Hao Fang, Benjamin Van Durme, Jason
Eisner, Patrick Xia
- Abstract summary: We present SCREWS, a modular framework for reasoning with revisions.
We show that SCREWS unifies several previous approaches under a common framework.
We evaluate our framework with state-of-the-art LLMs on a diverse set of reasoning tasks.
- Score: 58.698199183147935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) can improve their accuracy on various tasks
through iteratively refining and revising their output based on feedback. We
observe that these revisions can introduce errors, in which case it is better
to roll back to a previous result. Further, revisions are typically
homogeneous: they use the same reasoning method that produced the initial
answer, which may not correct errors. To enable exploration in this space, we
present SCREWS, a modular framework for reasoning with revisions. It comprises
three main modules: Sampling, Conditional Resampling, and
Selection, each consisting of sub-modules that can be hand-selected per task.
We show that SCREWS not only unifies several previous approaches under a common
framework, but also reveals several novel strategies for identifying improved
reasoning chains. We evaluate our framework with state-of-the-art LLMs (ChatGPT
and GPT-4) on a diverse set of reasoning tasks and uncover useful new reasoning
strategies for each: arithmetic word problems, multi-hop question answering,
and code debugging. Heterogeneous revision strategies prove to be important, as
does selection between original and revised candidates.
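As a rough illustration of how the three modules could fit together, the following is a minimal sketch of a SCREWS-style pipeline; the prompt wording, the `ask_llm` helper, and the specific sub-module choices (chain-of-thought sampling, LLM self-check resampling, LLM-based selection) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a SCREWS-style pipeline: Sampling -> Conditional Resampling -> Selection.
# `ask_llm` (any text-in/text-out LLM call), the prompt wording, and the chosen
# sub-modules are illustrative assumptions, not the paper's actual code.
from typing import Callable, List


def screws_pipeline(question: str, ask_llm: Callable[[str], str]) -> str:
    # 1) Sampling: produce an initial answer, e.g. via a chain-of-thought prompt.
    initial = ask_llm(f"Q: {question}\nLet's think step by step, then give the final answer.")

    # 2) Conditional Resampling: decide whether to revise, and if so, revise with a
    #    *different* (heterogeneous) reasoning method, e.g. sub-question decomposition.
    candidates: List[str] = [initial]
    verdict = ask_llm(
        f"Q: {question}\nProposed solution:\n{initial}\n"
        "Does this solution contain an error? Answer yes or no."
    )
    if verdict.strip().lower().startswith("yes"):
        revised = ask_llm(
            f"Q: {question}\nA previous attempt:\n{initial}\n"
            "Solve the problem again by decomposing it into sub-questions."
        )
        candidates.append(revised)

    # 3) Selection: choose between the original and revised candidates, so a bad
    #    revision can be rolled back rather than accepted blindly.
    if len(candidates) == 1:
        return candidates[0]
    choice = ask_llm(
        f"Q: {question}\nCandidate A:\n{candidates[0]}\nCandidate B:\n{candidates[1]}\n"
        "Which candidate is correct? Answer A or B."
    )
    return candidates[1] if choice.strip().upper().startswith("B") else candidates[0]
```

The Selection step is what allows rolling back to the earlier result when a revision introduces an error, which is the failure mode the abstract highlights.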
Related papers
- Thought-Path Contrastive Learning via Premise-Oriented Data Augmentation for Logical Reading Comprehension [9.67774998354062]
Previous research has primarily focused on enhancing logical reasoning capabilities through Chain-of-Thought (CoT) or data augmentation.
We propose a Premise-Oriented Data Augmentation (PODA) framework to generate CoT rationales including analyses for both correct and incorrect options.
We also introduce a novel thought-path contrastive learning method that compares reasoning paths between the original and counterfactual samples.
arXiv Detail & Related papers (2024-09-22T15:44:43Z)
- Recursive Introspection: Teaching Language Model Agents How to Self-Improve [30.086494067593268]
We develop RISE: Recursive IntroSpEction, an approach for fine-tuning large language models.
Our experiments show that RISE enables Llama2, Llama3, and Mistral models to improve themselves with more turns on math reasoning tasks.
arXiv Detail & Related papers (2024-07-25T17:35:59Z)
- RaFe: Ranking Feedback Improves Query Rewriting for RAG [83.24385658573198]
We propose a framework for training query rewriting models free of annotations.
By leveraging a publicly available reranker, ours provides feedback well aligned with the rewriting objectives.
arXiv Detail & Related papers (2024-05-23T11:00:19Z)
- Continual Referring Expression Comprehension via Dual Modular Memorization [133.46886428655426]
Referring Expression Comprehension (REC) aims to localize the image region of a given object described by a natural-language expression.
Existing REC algorithms make the strong assumption that the training data fed into a model are given upfront, which limits their practicality in real-world scenarios.
In this paper, we propose Continual Referring Expression Comprehension (CREC), a new setting for REC in which a model learns from a stream of incoming tasks.
In order to continuously improve the model on sequential tasks without forgetting prior learned knowledge and without repeatedly retraining from scratch, we propose an effective baseline method named Dual Modular Memorization.
arXiv Detail & Related papers (2023-11-25T02:58:51Z)
- Plan, Verify and Switch: Integrated Reasoning with Diverse X-of-Thoughts [65.15322403136238]
We propose XoT, an integrated problem-solving framework that prompts LLMs with diverse reasoning thoughts.
For each question, XoT begins by selecting the most suitable method and then executes each method iteratively.
Within each iteration, XoT actively checks the validity of the generated answer and incorporates the feedback from external executors.
arXiv Detail & Related papers (2023-10-23T07:02:20Z)
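As a purely illustrative reading of this plan-execute-verify-switch loop, the sketch below assumes hypothetical helpers `solve_with` (runs one reasoning method, e.g. chain-of-thought or program-of-thought) and `is_valid` (an external executor check); it is not the XoT authors' code.

```python
# Illustrative plan/verify/switch loop in the spirit of XoT. The method names and the
# `solve_with` / `is_valid` helpers are assumptions for this sketch, not the paper's API.
from typing import Callable, Optional, Sequence


def xot_style_solve(
    question: str,
    methods: Sequence[str],                 # e.g. ("CoT", "Program-of-Thought", "Equation")
    solve_with: Callable[[str, str], str],  # (method, question) -> candidate answer
    is_valid: Callable[[str, str], bool],   # external executor / checker for a candidate
) -> Optional[str]:
    # Plan: try the most suitable method first (here, simply the given order).
    for method in methods:
        answer = solve_with(method, question)
        # Verify: check the candidate answer with an external executor.
        if is_valid(question, answer):
            return answer
        # Switch: verification failed, so fall through to the next reasoning method.
    return None  # no method produced a verified answer
```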
- Fill in the Blank: Exploring and Enhancing LLM Capabilities for Backward Reasoning in Math Word Problems [17.80128896525717]
Backward reasoning is relatively unexplored.
Backward reasoning can be seen as the "inverse" of forward reasoning.
We propose variations of three different forward reasoning strategies to improve performance.
arXiv Detail & Related papers (2023-10-03T12:03:06Z)
- Re-Reading Improves Reasoning in Large Language Models [87.46256176508376]
We introduce a simple yet general and effective prompting method, Re2, to enhance the reasoning capabilities of off-the-shelf Large Language Models (LLMs).
Unlike most thought-eliciting prompting methods, such as Chain-of-Thought (CoT), Re2 shifts the focus to the input by processing questions twice, thereby enhancing the understanding process.
We evaluate Re2 on extensive reasoning benchmarks across 14 datasets, spanning 112 experiments, to validate its effectiveness and generality.
arXiv Detail & Related papers (2023-09-12T14:36:23Z)
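To make the re-reading idea above concrete, here is a minimal sketch of how such a prompt might be assembled; the exact wording is an assumption, not the paper's official template.

```python
# Minimal sketch of a Re2-style "re-reading" prompt: the question appears twice so the
# model processes the input a second time before reasoning. Wording is an assumption.
def build_re2_prompt(question: str) -> str:
    return (
        f"Q: {question}\n"
        f"Read the question again: {question}\n"
        "A: Let's think step by step."
    )


# Example usage (ask_llm is whatever chat/completion call the reader supplies):
# answer = ask_llm(build_re2_prompt("A train travels 60 km in 1.5 hours. What is its average speed?"))
```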
- To Revise or Not to Revise: Learning to Detect Improvable Claims for Argumentative Writing Support [20.905660642919052]
We explore the main challenges to identifying argumentative claims in need of specific revisions.
We propose a new sampling strategy based on revision distance.
We provide evidence that using contextual information and domain knowledge can further improve prediction results.
arXiv Detail & Related papers (2023-05-26T10:19:54Z)
- RCOT: Detecting and Rectifying Factual Inconsistency in Reasoning by Reversing Chain-of-Thought [56.558892336235914]
Reversing Chain-of-Thought (RCoT) is a novel method to improve large language models' reasoning abilities.
RCoT automatically detects and rectifies factual inconsistency in generated solutions.
We show that manually written fine-grained feedback can dramatically improve LLMs' reasoning abilities.
arXiv Detail & Related papers (2023-05-19T08:02:52Z)
- Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning [14.663216851932646]
We show that language models tend to perform fairly well at single step inference tasks, but struggle to chain together multiple reasoning steps to solve more complex problems.
We propose a Selection-Inference (SI) framework that exploits pre-trained LLMs as general processing modules.
We show that a 7B parameter LLM used within the SI framework in a 5-shot generalisation setting, with no fine-tuning, yields a performance improvement of over 100%.
arXiv Detail & Related papers (2022-05-19T17:25:28Z)