LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond
- URL: http://arxiv.org/abs/2305.14540v1
- Date: Tue, 23 May 2023 21:50:06 GMT
- Title: LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond
- Authors: Philippe Laban, Wojciech Kryściński, Divyansh Agarwal, Alexander
R. Fabbri, Caiming Xiong, Shafiq Joty, Chien-Sheng Wu
- Abstract summary: We propose a new protocol for inconsistency detection benchmark creation and implement it in a 10-domain benchmark called SummEdits.
Most LLMs struggle on SummEdits, with performance close to random chance.
The best-performing model, GPT-4, is still 8% below estimated human performance.
- Score: 135.8013388183257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the recent appearance of LLMs in practical settings, having methods that
can effectively detect factual inconsistencies is crucial to reduce the
propagation of misinformation and improve trust in model outputs. When testing
on existing factual consistency benchmarks, we find that a few large language
models (LLMs) perform competitively on classification benchmarks for factual
inconsistency detection compared to traditional non-LLM methods. However, a
closer analysis reveals that most LLMs fail on more complex formulations of the
task and exposes issues with existing evaluation benchmarks, affecting
evaluation precision. To address this, we propose a new protocol for
inconsistency detection benchmark creation and implement it in a 10-domain
benchmark called SummEdits. This new benchmark is 20 times more cost-effective
per sample than previous benchmarks and highly reproducible, as we estimate
inter-annotator agreement at about 0.9. Most LLMs struggle on SummEdits, with
performance close to random chance. The best-performing model, GPT-4, is still
8% below estimated human performance, highlighting the gaps in LLMs' ability
to reason about facts and detect inconsistencies when they occur.
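To make the evaluation setting above concrete, here is a minimal sketch of how a binary factual-inconsistency classification benchmark in the style of SummEdits could be scored with an LLM. The prompt wording, the `query_llm` helper, and the choice of balanced accuracy (so random guessing sits near 50%) are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of a binary factual-consistency evaluation over (document, summary) pairs.
# `query_llm` is a hypothetical helper standing in for any chat-completion API call.

from typing import Callable, List, Tuple

PROMPT = (
    "Document:\n{document}\n\n"
    "Summary:\n{summary}\n\n"
    "Is the summary factually consistent with the document? Answer Yes or No."
)

def evaluate_consistency_detection(
    pairs: List[Tuple[str, str, bool]],       # (document, summary, is_consistent)
    query_llm: Callable[[str], str],          # hypothetical LLM call returning raw text
) -> float:
    """Return balanced accuracy, so that random guessing scores roughly 0.5."""
    tp = tn = fp = fn = 0
    for document, summary, is_consistent in pairs:
        answer = query_llm(PROMPT.format(document=document, summary=summary))
        predicted_consistent = answer.strip().lower().startswith("yes")
        if is_consistent and predicted_consistent:
            tp += 1
        elif is_consistent and not predicted_consistent:
            fn += 1
        elif not is_consistent and predicted_consistent:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return 0.5 * (sensitivity + specificity)
```

Reporting balanced accuracy keeps the comparison against the ~50% random-chance baseline mentioned in the abstract explicit, regardless of how many consistent versus inconsistent summaries a benchmark contains.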
Related papers
- Towards Reproducible LLM Evaluation: Quantifying Uncertainty in LLM Benchmark Scores [2.886479348067378]
We use benchmarks designed for testing large language models' capacity to reason about cardinal directions.
We suggest a simple method for cost-effectively quantifying the uncertainty of a benchmark score (a generic sketch of this idea appears after this list).
arXiv Detail & Related papers (2024-10-04T15:04:28Z)
- LLM Detectors Still Fall Short of Real World: Case of LLM-Generated Short News-Like Posts [7.680851067579922]
This paper focuses on an important setting in information operations -- short news-like posts generated by moderately sophisticated attackers.
We demonstrate that existing LLM detectors, whether zero-shot or purpose-trained, are not ready for real-world use in that setting.
A purpose-trained detector generalizing across LLMs and unseen attacks can be developed, but it fails to generalize to new human-written texts.
arXiv Detail & Related papers (2024-09-05T06:55:13Z)
- PaCoST: Paired Confidence Significance Testing for Benchmark Contamination Detection in Large Language Models [41.772263447213234]
Large language models (LLMs) are known to be trained on vast amounts of data, which may unintentionally or intentionally include data from commonly used benchmarks.
This inclusion can lead to misleadingly high scores on model leaderboards, yet result in disappointing performance in real-world applications.
We introduce PaCoST (Paired Confidence Significance Testing) to effectively detect benchmark contamination in LLMs (a simplified paired-test sketch appears after this list).
arXiv Detail & Related papers (2024-06-26T13:12:40Z)
- DARG: Dynamic Evaluation of Large Language Models via Adaptive Reasoning Graph [70.79413606968814]
We introduce Dynamic Evaluation of LLMs via Adaptive Reasoning Graph Evolvement (DARG) to dynamically extend current benchmarks with controlled complexity and diversity.
Specifically, we first extract the reasoning graphs of data points in current benchmarks and then perturb the reasoning graphs to generate novel testing data.
Such newly generated test samples can have different levels of complexity while maintaining linguistic diversity similar to the original benchmarks.
arXiv Detail & Related papers (2024-06-25T04:27:53Z)
- Inference-Time Decontamination: Reusing Leaked Benchmarks for Large Language Model Evaluation [61.350306618479365]
Leakage of benchmarks can prevent the accurate assessment of large language models' true performance.
We propose Inference-Time Decontamination (ITD) to address this issue.
ITD reduces inflated accuracy by 22.9% on GSM8K and 19.0% on MMLU.
arXiv Detail & Related papers (2024-06-20T04:35:59Z)
- MixEval: Deriving Wisdom of the Crowd from LLM Benchmark Mixtures [57.886592207948844]
We propose MixEval, a new paradigm for establishing efficient, gold-standard evaluation by strategically mixing off-the-shelf benchmarks.
It bridges (1) comprehensive and well-distributed real-world user queries and (2) efficient and fairly-graded ground-truth-based benchmarks, by matching queries mined from the web with similar queries from existing benchmarks.
arXiv Detail & Related papers (2024-06-03T05:47:05Z)
- Self-Evaluation Improves Selective Generation in Large Language Models [54.003992911447696]
We reformulate open-ended generation tasks into token-level prediction tasks.
We instruct an LLM to self-evaluate its answers.
We benchmark a range of scoring methods based on self-evaluation.
arXiv Detail & Related papers (2023-12-14T19:09:22Z)
- Investigating Data Contamination in Modern Benchmarks for Large Language Models [27.479260572913724]
Recent observations have underscored a disparity between the inflated benchmark scores and the actual performance of LLMs.
We study data contamination by proposing two methods tailored for both open-source and proprietary LLMs.
We find that certain commercial LLMs could surprisingly guess the missing option in various test sets.
arXiv Detail & Related papers (2023-11-16T11:03:04Z)
- Assessing the Reliability of Large Language Model Knowledge [78.38870272050106]
Large language models (LLMs) have been treated as knowledge bases due to their strong performance in knowledge probing tasks.
How do we evaluate the capabilities of LLMs to consistently produce factually correct answers?
We propose MOdel kNowledge relIabiliTy scORe (MONITOR), a novel metric designed to directly measure LLMs' factual reliability.
arXiv Detail & Related papers (2023-10-15T12:40:30Z)
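Several of the related papers above concern the reliability of benchmark scores. For the "Towards Reproducible LLM Evaluation" entry, one generic, cost-effective way to attach uncertainty to a benchmark accuracy is bootstrap resampling over per-item correctness; the sketch below illustrates that idea and is not the paper's specific method.

```python
# Bootstrap estimate of the uncertainty of a benchmark accuracy score.
# Resamples per-item correctness (1 = correct, 0 = incorrect) with replacement
# and reports the point estimate plus a two-sided confidence interval.

import random
from typing import List, Tuple

def bootstrap_accuracy_ci(
    per_item_correct: List[int],
    n_resamples: int = 2000,
    alpha: float = 0.05,
    seed: int = 0,
) -> Tuple[float, float, float]:
    """Return (accuracy, lower bound, upper bound) of a (1 - alpha) interval."""
    rng = random.Random(seed)
    n = len(per_item_correct)
    point = sum(per_item_correct) / n
    scores = []
    for _ in range(n_resamples):
        sample = [per_item_correct[rng.randrange(n)] for _ in range(n)]
        scores.append(sum(sample) / n)
    scores.sort()
    lower = scores[int((alpha / 2) * n_resamples)]
    upper = scores[int((1 - alpha / 2) * n_resamples) - 1]
    return point, lower, upper
```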
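For the PaCoST entry, the core idea is a paired comparison of a model's confidence on benchmark items versus counterpart items it is unlikely to have memorized. The sketch below computes a simple paired t-statistic over such confidence pairs; the pairing scheme, confidence estimator, and statistic are stand-in assumptions rather than PaCoST's exact procedure.

```python
# Generic paired comparison of per-item confidences: original benchmark items vs.
# rephrased counterparts. A large, significant gap favoring the originals is one
# possible contamination signal. This is a simplified stand-in, not PaCoST itself.

import math
from statistics import mean, stdev
from typing import List, Tuple

def paired_confidence_test(
    conf_original: List[float],
    conf_rephrased: List[float],
) -> Tuple[float, float]:
    """Return (mean difference, paired t-statistic) for original minus rephrased."""
    assert len(conf_original) == len(conf_rephrased) and len(conf_original) > 1
    diffs = [o - r for o, r in zip(conf_original, conf_rephrased)]
    d_mean = mean(diffs)
    d_std = stdev(diffs)
    t_stat = d_mean / (d_std / math.sqrt(len(diffs))) if d_std > 0 else float("inf")
    return d_mean, t_stat
```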