MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models
- URL: http://arxiv.org/abs/2310.12426v1
- Date: Thu, 19 Oct 2023 02:32:39 GMT
- Title: MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models
- Authors: Deepak Nathani, David Wang, Liangming Pan, William Yang Wang
- Abstract summary: Language Models (LMs) have shown impressive performance in various natural language tasks.
When it comes to natural language reasoning, LMs still face challenges such as hallucination, generating incorrect intermediate reasoning steps, and making mathematical errors.
Recent research has focused on enhancing LMs through self-improvement using feedback.
In this work, we propose Multi-Aspect Feedback, an iterative refinement framework that integrates multiple feedback modules, including frozen LMs and external tools, each focusing on a specific error category.
- Score: 64.70153487607172
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Language Models (LMs) have shown impressive performance in various natural
language tasks. However, when it comes to natural language reasoning, LMs still
face challenges such as hallucination, generating incorrect intermediate
reasoning steps, and making mathematical errors. Recent research has focused on
enhancing LMs through self-improvement using feedback. Nevertheless, existing
approaches relying on a single generic feedback source fail to address the
diverse error types found in LM-generated reasoning chains. In this work, we
propose Multi-Aspect Feedback, an iterative refinement framework that
integrates multiple feedback modules, including frozen LMs and external tools,
each focusing on a specific error category. Our experimental results
demonstrate the efficacy of our approach in addressing several error types in
LM-generated reasoning chains, thereby improving the overall performance of an
LM on several reasoning tasks. We see a relative improvement of up to 20% in
Mathematical Reasoning and up to 18% in Logical Entailment.
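To make the framework concrete, below is a minimal sketch of the iterative refinement loop the abstract describes: a solver LM drafts a reasoning chain, several aspect-specific feedback modules (frozen LMs or external tools) critique it, and a refiner LM rewrites it until no module flags an error. All names and interfaces here (`FeedbackModule`, `generate`, `refine`) are illustrative assumptions, not the paper's actual API.

```python
# Minimal sketch of a multi-aspect iterative refinement loop.
# All names and interfaces are illustrative assumptions; the
# paper's actual implementation may differ.

from typing import Callable, List, Optional


class FeedbackModule:
    """One feedback source focused on a single error category,
    e.g. a frozen LM prompted to find math errors, or an external
    tool such as a calculator or code interpreter."""

    def __init__(self, name: str, critique_fn: Callable[[str, str], Optional[str]]):
        self.name = name
        self.critique_fn = critique_fn  # returns None when no error is found

    def critique(self, question: str, solution: str) -> Optional[str]:
        return self.critique_fn(question, solution)


def refine_with_multi_aspect_feedback(
    question: str,
    generate: Callable[[str], str],          # solver LM: question -> draft solution
    refine: Callable[[str, str, str], str],  # refiner LM: (question, solution, feedback) -> revision
    modules: List[FeedbackModule],
    max_iters: int = 3,
) -> str:
    solution = generate(question)
    for _ in range(max_iters):
        # Collect feedback from every aspect-specific module.
        feedback = [
            f"[{m.name}] {msg}"
            for m in modules
            if (msg := m.critique(question, solution)) is not None
        ]
        if not feedback:  # no module flags an error: stop early
            break
        solution = refine(question, solution, "\n".join(feedback))
    return solution
```

The design choice worth noting is that each module only ever reports on its own error category, so the refiner receives targeted rather than generic feedback.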
Related papers
- Look Before You Decide: Prompting Active Deduction of MLLMs for Assumptive Reasoning [68.83624133567213]
We show that most prevalent MLLMs can be easily fooled by the introduction of a presupposition into the question.
We also propose a simple yet effective method, Active Deduction (AD), to encourage the model to actively perform composite deduction.
arXiv Detail & Related papers (2024-04-19T15:53:27Z)
- Can Small Language Models Help Large Language Models Reason Better?: LM-Guided Chain-of-Thought [51.240387516059535]
We introduce a novel framework, LM-Guided CoT, that leverages a lightweight (i.e., 1B) language model (LM) to guide a black-box large (i.e., >10B) LM in reasoning tasks.
We optimize the model through 1) knowledge distillation and 2) reinforcement learning from rationale-oriented and task-oriented reward signals.
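A rough illustration of this two-model pipeline follows; the function names and prompt formats are assumptions for the sketch, not the paper's implementation (the distillation and RL training of the small LM are omitted).

```python
# Illustrative sketch of the LM-Guided CoT idea: a small LM drafts
# the rationale, and a black-box large LM answers conditioned on it.
# small_lm / large_lm are placeholder callables for real model calls.

def lm_guided_cot(question: str, small_lm, large_lm) -> str:
    # 1) The lightweight (~1B) model generates the chain of thought.
    rationale = small_lm(f"Question: {question}\nReason step by step:")
    # 2) The large (>10B) model answers using that rationale as guidance.
    return large_lm(
        f"Question: {question}\nRationale: {rationale}\nFinal answer:"
    )
```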
arXiv Detail & Related papers (2024-04-04T12:46:37Z)
- Learning From Correctness Without Prompting Makes LLM Efficient Reasoner [30.203952806009717]
Large language models (LLMs) have demonstrated outstanding performance across various tasks, yet they still exhibit limitations such as hallucination, unfaithful reasoning, and toxic content.
We introduce an intrinsic self-correcting reasoning framework for LLMs that eliminates the need for human feedback, external tools, and handcrafted prompts.
arXiv Detail & Related papers (2024-03-28T02:12:49Z)
- FAC$^2$E: Better Understanding Large Language Model Capabilities by Dissociating Language and Cognition [56.76951887823882]
Large language models (LLMs) are primarily evaluated by overall performance on various text understanding and generation tasks.
We present FAC$^2$E, a framework for Fine-grAined and Cognition-grounded LLMs' Capability Evaluation.
arXiv Detail & Related papers (2024-02-29T21:05:37Z)
- Zero-Shot Question Answering over Financial Documents using Large Language Models [0.18749305679160366]
We introduce a large language model (LLM) based approach to answer complex questions requiring multi-hop numerical reasoning over financial reports.
We use novel zero-shot prompts that guide the LLM to encode the required reasoning into a Python program or a domain-specific language. A hedged sketch of this program-generation approach follows.
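The sketch below shows the general pattern of prompting an LLM to emit a Python program that is then executed to produce the answer. The prompt wording and `call_llm` placeholder are assumptions, not the paper's exact prompts.

```python
# Sketch of program-guided numerical reasoning over a financial report:
# the LLM is prompted to emit a small Python program, which is executed
# to obtain the answer. `call_llm` is a placeholder for a real model
# call, and the prompt format is an assumption.

PROMPT = """You are given a financial report excerpt and a question.
Write a Python program that computes the answer and stores it in `answer`.

Report: {report}
Question: {question}
Program:
"""

def answer_with_program(report: str, question: str, call_llm) -> float:
    program = call_llm(PROMPT.format(report=report, question=question))
    scope: dict = {}
    exec(program, scope)  # caution: execute untrusted code only in a sandbox
    return scope["answer"]
```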
arXiv Detail & Related papers (2023-11-19T16:23:34Z)
- LLMRefine: Pinpointing and Refining Large Language Models via Fine-Grained Actionable Feedback [65.84061725174269]
Recent large language models (LLM) are leveraging human feedback to improve their generation quality.
We propose LLMRefine, an inference time optimization method to refine LLM's output.
We conduct experiments on three text generation tasks, including machine translation, long-form question answering (QA), and topical summarization.
LLMRefine consistently outperforms all baseline approaches, achieving improvements of up to 1.7 MetricX points on translation tasks, 8.1 ROUGE-L on ASQA, and 2.2 ROUGE-L on topical summarization.
arXiv Detail & Related papers (2023-11-15T19:52:11Z)
- Noisy Exemplars Make Large Language Models More Robust: A Domain-Agnostic Behavioral Analysis [10.06218778776515]
We introduce a systematic approach to test the robustness of large language models (LLMs) in multi-hop reasoning tasks via domain-agnostic perturbations.
We find that models are more sensitive to certain perturbations such as replacing words with their synonyms.
We also demonstrate that increasing the proportion of perturbed exemplars in the prompts improves the robustness of few-shot prompting methods.
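As a toy illustration of one such perturbation, the sketch below replaces words in a few-shot exemplar with synonyms. The synonym table and exemplar are made up for the example; the paper's perturbations are more systematic.

```python
# Toy domain-agnostic perturbation: replace words in a few-shot
# exemplar with synonyms. The synonym table is a made-up stand-in
# for a real lexical resource such as WordNet.

import random

SYNONYMS = {
    "big": ["large", "huge"],
    "buy": ["purchase", "acquire"],
    "fast": ["quick", "rapid"],
}

def perturb(exemplar: str, rate: float = 0.3, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = []
    for word in exemplar.split():
        if word.lower() in SYNONYMS and rng.random() < rate:
            out.append(rng.choice(SYNONYMS[word.lower()]))
        else:
            out.append(word)
    return " ".join(out)

# Raising the proportion of perturbed exemplars in the prompt is what
# the paper reports improves few-shot robustness.
print(perturb("I want to buy a big fast car", rate=1.0))
```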
arXiv Detail & Related papers (2023-11-01T03:15:05Z)
- oLMpics -- On what Language Model Pre-training Captures [84.60594612120173]
We propose eight reasoning tasks, which require operations such as comparison, conjunction, and composition.
A fundamental challenge is to understand whether the performance of an LM on a task should be attributed to the pre-trained representations or to the process of fine-tuning on the task data.
arXiv Detail & Related papers (2019-12-31T12:11:35Z)