Critique Ability of Large Language Models
- URL: http://arxiv.org/abs/2310.04815v1
- Date: Sat, 7 Oct 2023 14:12:15 GMT
- Title: Critique Ability of Large Language Models
- Authors: Liangchen Luo, Zi Lin, Yinxiao Liu, Lei Shu, Yun Zhu, Jingbo Shang,
Lei Meng
- Abstract summary: This study explores the ability of large language models (LLMs) to deliver accurate critiques across various tasks.
We develop a benchmark called CriticBench, which comprises 3K high-quality natural language queries and corresponding model responses.
- Score: 38.34144195927209
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Critical thinking is essential for rational decision-making and
problem-solving. This skill hinges on the ability to provide precise and
reasoned critiques and is a hallmark of human intelligence. In the era of large
language models (LLMs), this study explores the ability of LLMs to deliver
accurate critiques across various tasks. We are interested in this topic as a
capable critic model could not only serve as a reliable evaluator, but also as
a source of supervised signals for model tuning. Particularly, if a model can
self-critique, it has the potential for autonomous self-improvement. To examine
this, we introduce a unified evaluation framework for assessing the critique
abilities of LLMs. We develop a benchmark called CriticBench, which comprises
3K high-quality natural language queries and corresponding model responses; and
annotate the correctness of these responses. The benchmark covers tasks such as
math problem-solving, code completion, and question answering. We evaluate
multiple LLMs on the collected dataset and our analysis reveals several
noteworthy insights: (1) Critique is generally challenging for most LLMs, and
this capability often emerges only when models are sufficiently large. (2) In
particular, self-critique is especially difficult. Even top-performing LLMs
struggle to achieve satisfactory performance. (3) Models tend to have lower
critique accuracy on problems where they are most uncertain. Motivated by these findings, we
introduce a simple yet effective baseline named self-check, which leverages
self-critique to improve task performance for various models. We hope this
study serves as an initial exploration into understanding the critique
abilities of LLMs, and informs future research, including the
development of more proficient critic models and the application of critiques
across diverse tasks.
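The self-check baseline described above can be pictured roughly as: sample several candidate answers, have the same model critique each one, and keep the first answer its own critique accepts. The sketch below is an illustrative interpretation, not the paper's exact algorithm; the `generate` and `critique` callables stand in for model API calls, and the toy implementations are hypothetical placeholders.

```python
from itertools import cycle

def self_check(generate, critique, question, n_samples=5):
    """Sketch of a self-check-style baseline (assumed, not verbatim from
    the paper): sample candidate answers, let the model self-critique each,
    and return the first candidate whose critique finds no flaw. Falls back
    to the first sample if every candidate is rejected."""
    candidates = [generate(question) for _ in range(n_samples)]
    for answer in candidates:
        if critique(question, answer):  # True means the critique accepts it
            return answer
    return candidates[0]

# Toy stand-ins for model calls, for illustration only: the generator
# cycles through canned answers, and the critic accepts only "4".
_canned = cycle(["5", "4", "5"])

def toy_generate(question):
    return next(_canned)

def toy_critique(question, answer):
    return answer == "4"

print(self_check(toy_generate, toy_critique, "What is 2 + 2?"))  # → 4
```

The key design point is that the same model plays both roles; the paper's finding (2) suggests this self-critique step is the hard part, which is why the sketch keeps a fallback when no candidate passes.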
Related papers
- AutoDetect: Towards a Unified Framework for Automated Weakness Detection in Large Language Models [95.09157454599605]
Large Language Models (LLMs) are becoming increasingly powerful, but they still exhibit significant but subtle weaknesses.
Traditional benchmarking approaches cannot thoroughly pinpoint specific model deficiencies.
We introduce a unified framework, AutoDetect, to automatically expose weaknesses in LLMs across various tasks.
arXiv Detail & Related papers (2024-06-24T15:16:45Z) - CriticBench: Benchmarking LLMs for Critique-Correct Reasoning [26.45110574463893]
CriticBench is a benchmark designed to assess Large Language Models' abilities to critique and rectify their reasoning.
We evaluate and dissect the performance of 17 LLMs in generation, critique, and correction reasoning.
arXiv Detail & Related papers (2024-02-22T18:59:02Z) - CriticBench: Evaluating Large Language Models as Critic [115.8286183749499]
CriticBench is a novel benchmark designed to comprehensively and reliably evaluate four key critique ability dimensions of Large Language Models (LLMs).
CriticBench encompasses nine diverse tasks, each assessing the LLMs' ability to critique responses at varying levels of quality granularity.
Our extensive evaluations of open-source and closed-source LLMs reveal intriguing relationships between the critique ability and tasks, response qualities, and model scales.
arXiv Detail & Related papers (2024-02-21T12:38:59Z) - The Generative AI Paradox on Evaluation: What It Can Solve, It May Not Evaluate [17.77014177096838]
This paper explores the assumption that Large Language Models (LLMs) skilled in generation tasks are equally adept as evaluators.
We assess the performance of three LLMs and one open-source LM in Question-Answering (QA) and evaluation tasks using the TriviaQA dataset.
arXiv Detail & Related papers (2024-02-09T06:16:08Z) - The Critique of Critique [45.40025444461465]
We pioneer the critique of critique, termed MetaCritique, which builds specific quantification criteria.
We construct a meta-evaluation dataset covering 4 tasks involving human-written and LLM-generated critiques.
Experiments demonstrate that MetaCritique can achieve near-human performance.
arXiv Detail & Related papers (2024-01-09T12:20:41Z) - Large Language Models Cannot Self-Correct Reasoning Yet [78.16697476530994]
Large Language Models (LLMs) have emerged as a groundbreaking technology with their unparalleled text generation capabilities.
Concerns persist regarding the accuracy and appropriateness of their generated content.
A contemporary methodology, self-correction, has been proposed as a remedy to these issues.
arXiv Detail & Related papers (2023-10-03T04:56:12Z) - Are Large Language Models Really Robust to Word-Level Perturbations? [68.60618778027694]
We propose a novel rational evaluation approach that leverages pre-trained reward models as diagnostic tools.
Longer conversations reveal a language model's comprehensive grasp of language, in particular its proficiency in understanding questions.
Our results demonstrate that LLMs frequently exhibit vulnerability to word-level perturbations that are commonplace in daily language usage.
arXiv Detail & Related papers (2023-09-20T09:23:46Z) - A Survey on Evaluation of Large Language Models [87.60417393701331]
Large language models (LLMs) are gaining increasing popularity in both academia and industry.
This paper focuses on three key dimensions: what to evaluate, where to evaluate, and how to evaluate.
arXiv Detail & Related papers (2023-07-06T16:28:35Z) - Self-critiquing models for assisting human evaluators [11.1006983438712]
We fine-tune large language models to write natural language critiques using behavioral cloning.
On a topic-based summarization task, critiques written by our models help humans find flaws in summaries that they would have otherwise missed.
Larger models write more helpful critiques, and on most tasks, are better at self-critiquing, despite having harder-to-critique outputs.
arXiv Detail & Related papers (2022-06-12T17:40:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.