CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing
- URL: http://arxiv.org/abs/2305.11738v4
- Date: Wed, 21 Feb 2024 12:59:21 GMT
- Title: CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing
- Authors: Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, Weizhu Chen
- Abstract summary: CRITIC allows large language models to validate and amend their own outputs in a manner similar to human interaction with tools.
Comprehensive evaluations involving free-form question answering, mathematical program synthesis, and toxicity reduction demonstrate that CRITIC consistently enhances the performance of LLMs.
- Score: 139.77117915309023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent developments in large language models (LLMs) have been impressive.
However, these models sometimes show inconsistencies and problematic behavior,
such as hallucinating facts, generating flawed code, or creating offensive and
toxic content. Unlike these models, humans typically utilize external tools to
cross-check and refine their initial content, like using a search engine for
fact-checking, or a code interpreter for debugging. Inspired by this
observation, we introduce a framework called CRITIC that allows LLMs, which
are essentially "black boxes," to validate and progressively amend their own
outputs in a manner similar to human interaction with tools. More specifically,
starting with an initial output, CRITIC interacts with appropriate tools to
evaluate certain aspects of the text, and then revises the output based on the
feedback obtained during this validation process. Comprehensive evaluations
involving free-form question answering, mathematical program synthesis, and
toxicity reduction demonstrate that CRITIC consistently enhances the
performance of LLMs. Meanwhile, our research highlights the crucial importance
of external feedback in promoting the ongoing self-improvement of LLMs.
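The validate-and-revise procedure the abstract describes is straightforward to sketch. Below is a minimal Python sketch of the loop, assuming hypothetical llm and tool callables; the prompts and the stopping rule are illustrative assumptions, not the paper's actual implementation.

    def critic_loop(llm, tools, question, max_rounds=3):
        """Draft an answer, critique it with external tools, and revise."""
        output = llm(f"Question: {question}\nAnswer:")
        for _ in range(max_rounds):
            # Gather tool-based evidence about the current output, e.g. a
            # search engine for facts or a code interpreter for program bugs.
            evidence = [tool(question, output) for tool in tools]
            critique = llm(
                f"Question: {question}\nAnswer: {output}\n"
                f"Evidence: {evidence}\n"
                "List any errors, or reply OK if the answer is supported."
            )
            if critique.strip() == "OK":
                break  # validation passed; stop revising
            output = llm(
                f"Question: {question}\nPrevious answer: {output}\n"
                f"Critique: {critique}\nRevised answer:"
            )
        return output

The point the abstract stresses is that the critique is grounded in feedback from external tools rather than in the model's unaided self-reflection.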
Related papers
- A linguistic analysis of undesirable outcomes in the era of generative AI [4.841442157674423]
We present a comprehensive simulation framework built upon the chat version of LLama2, focusing on the linguistic aspects of the generated content.
Our results show that the model produces less lexically rich content across generations, reducing diversity.
We find that autophagy transforms the initial model into a more creative, doubtful, and confused one, which might provide inaccurate answers.
arXiv Detail & Related papers (2024-10-16T08:02:48Z)
- Evaluating Generative Language Models in Information Extraction as Subjective Question Correction [49.729908337372436]
Inspired by the principles in subjective question correction, we propose a new evaluation method, SQC-Score.
Results on three information extraction tasks show that human annotators prefer SQC-Score over the baseline metrics.
arXiv Detail & Related papers (2024-04-04T15:36:53Z)
- CLOMO: Counterfactual Logical Modification with Large Language Models [109.60793869938534]
We introduce a novel task, Counterfactual Logical Modification (CLOMO), and a high-quality human-annotated benchmark.
In this task, LLMs must adeptly alter a given argumentative text to uphold a predetermined logical relationship.
We propose an innovative evaluation metric, the Self-Evaluation Score (SES), to directly evaluate the natural language output of LLMs.
arXiv Detail & Related papers (2023-11-29T08:29:54Z)
- N-Critics: Self-Refinement of Large Language Models with Ensemble of Critics [5.516095889257118]
We propose a self-correction mechanism for Large Language Models (LLMs) to mitigate issues such as toxicity and fact hallucination.
This method involves refining model outputs through an ensemble of critics and the model's own feedback.
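As a rough illustration of the ensemble idea, a single refinement round might look like the sketch below; the model and critic callables and the prompts are hypothetical stand-ins, not the paper's method.

    def refine_with_critics(model, critics, task, draft, rounds=2):
        for _ in range(rounds):
            # Each critic independently reviews the draft; feedback is pooled.
            critiques = [c(f"Critique this response:\n{draft}") for c in critics]
            draft = model(
                f"Task: {task}\nDraft: {draft}\n"
                "Reviewer feedback:\n" + "\n".join(critiques) +
                "\nRewrite the draft to address the feedback:"
            )
        return draft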
arXiv Detail & Related papers (2023-10-28T11:22:22Z)
- Large Language Models Cannot Self-Correct Reasoning Yet [78.16697476530994]
Large Language Models (LLMs) have emerged as a groundbreaking technology with their unparalleled text generation capabilities.
Concerns persist regarding the accuracy and appropriateness of their generated content.
A contemporary methodology, self-correction, has been proposed as a remedy to these issues.
arXiv Detail & Related papers (2023-10-03T04:56:12Z)
- Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies [104.32199881187607]
Large language models (LLMs) have demonstrated remarkable performance across a wide array of NLP tasks, yet their output can still contain flaws.
A promising approach to rectify these flaws is self-correction, where the LLM itself is prompted or guided to fix problems in its own output.
This paper presents a comprehensive review of this emerging class of techniques.
arXiv Detail & Related papers (2023-08-06T18:38:52Z)
- CREATOR: Tool Creation for Disentangling Abstract and Concrete Reasoning of Large Language Models [74.22729793816451]
Large Language Models (LLMs) have made significant progress in utilizing tools, but their ability is limited by API availability.
We propose CREATOR, a novel framework that enables LLMs to create their own tools using documentation and code realization.
We evaluate CREATOR on the MATH and TabMWP benchmarks, which consist of challenging math competition problems and tabular math word problems, respectively.
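As a loose illustration of the creation-then-execution split in the title, a sketch under stated assumptions (the prompt wording and a trusted sandbox for exec are both illustrative, not the paper's interface) might read:

    def solve_with_created_tool(llm, problem):
        # Stage 1 (abstract reasoning): ask the model to write a reusable tool.
        tool_code = llm(
            f"Problem: {problem}\n"
            "Write a Python function solve() that returns the answer."
        )
        # Stage 2 (concrete reasoning): run the generated tool on this instance.
        namespace = {}
        exec(tool_code, namespace)  # assumes a trusted sandbox
        return namespace["solve"]()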
arXiv Detail & Related papers (2023-05-23T17:51:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.