METAL: Metamorphic Testing Framework for Analyzing Large-Language Model
Qualities
- URL: http://arxiv.org/abs/2312.06056v1
- Date: Mon, 11 Dec 2023 01:29:19 GMT
- Title: METAL: Metamorphic Testing Framework for Analyzing Large-Language Model
Qualities
- Authors: Sangwon Hyun, Mingyu Guo, M. Ali Babar
- Abstract summary: Large-Language Models (LLMs) have shifted the paradigm of natural language data processing.
Recent studies have tested Quality Attributes (QAs) of LLMs by generating adversarial input texts.
We propose a MEtamorphic Testing for Analyzing LLMs (METAL) framework to address these issues.
- Score: 4.493507573183107
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large-Language Models (LLMs) have shifted the paradigm of natural language
data processing. However, their black-boxed and probabilistic characteristics
can lead to potential risks in the quality of outputs in diverse LLM
applications. Recent studies have tested Quality Attributes (QAs), such as
robustness or fairness, of LLMs by generating adversarial input texts. However,
existing studies have limited their coverage of QAs and tasks in LLMs and are
difficult to extend. Additionally, these studies have only used one evaluation
metric, Attack Success Rate (ASR), to assess the effectiveness of their
approaches. We propose a MEtamorphic Testing for Analyzing LLMs (METAL)
framework to address these issues by applying Metamorphic Testing (MT)
techniques. This approach facilitates the systematic testing of LLM qualities
by defining Metamorphic Relations (MRs), which serve as modularized evaluation
metrics. The METAL framework can automatically generate hundreds of MRs from
templates that cover various QAs and tasks. In addition, we introduced novel
metrics that integrate the ASR method into the semantic qualities of text to
assess the effectiveness of MRs accurately. Through the experiments conducted
with three prominent LLMs, we have confirmed that the METAL framework
effectively evaluates essential QAs on primary LLM tasks and reveals the
quality risks in LLMs. Moreover, the newly proposed metrics can guide the
optimal MRs for testing each task and suggest the most effective method for
generating MRs.
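To make this concrete, below is a minimal sketch of how a single metamorphic relation could be instantiated for a robustness check: a perturbation template turns each source input into a follow-up input, and the MR asserts that the LLM's output stays semantically close across the pair. This is an illustration under assumptions, not METAL's implementation: the `llm` callable, the typo-based template, the `difflib` similarity proxy, and the similarity-weighted score are placeholders standing in for the paper's MR templates and metrics.

```python
# Minimal metamorphic-test sketch (illustrative; not METAL's actual templates or metrics).
import difflib
import random
from typing import Callable, Iterable, Tuple

def perturb_with_typos(text: str, rate: float = 0.05, seed: int = 0) -> str:
    """A robustness-oriented perturbation template: inject character-level typos."""
    rng = random.Random(seed)
    chars = list(text)
    for i, c in enumerate(chars):
        if c.isalpha() and rng.random() < rate:
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

def semantic_similarity(a: str, b: str) -> float:
    """Crude lexical stand-in for semantic similarity; a real setup would use embeddings."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def metamorphic_test(
    llm: Callable[[str], str],
    source_inputs: Iterable[str],
    threshold: float = 0.8,
) -> Tuple[float, float]:
    """Check one MR: outputs should stay semantically stable under typo noise.

    Returns (attack success rate, similarity-weighted rate); the weighted rate
    credits partial semantic drift instead of a hard pass/fail, roughly in the
    spirit of combining ASR with the semantic quality of the outputs.
    """
    inputs = list(source_inputs)
    violations, weighted = 0, 0.0
    for text in inputs:
        out_src = llm(text)                      # source test case
        out_fol = llm(perturb_with_typos(text))  # follow-up test case
        sim = semantic_similarity(out_src, out_fol)
        if sim < threshold:
            violations += 1                      # MR violated: output drifted too far
        weighted += 1.0 - sim                    # graded severity of the drift
    n = len(inputs)
    return violations / n, weighted / n
```

Any LLM client can be passed in as the `llm` callable, and swapping `perturb_with_typos` for other templates (synonym substitution, case changes, appended distractor sentences) yields further MRs of the same shape, which is the sense in which MRs act as modularized evaluation metrics.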
Related papers
- Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph [85.51252685938564]
Uncertainty quantification (UQ) is becoming increasingly recognized as a critical component of applications that rely on machine learning (ML).
As with other ML models, large language models (LLMs) are prone to making incorrect predictions, "hallucinating" by fabricating claims, or simply generating low-quality output for a given input.
We introduce a novel benchmark that implements a collection of state-of-the-art UQ baselines, and provides an environment for controllable and consistent evaluation of novel techniques.
arXiv Detail & Related papers (2024-06-21T20:06:31Z)
- Q*: Improving Multi-step Reasoning for LLMs with Deliberative Planning [53.6472920229013]
Large Language Models (LLMs) have demonstrated impressive capability in many natural language tasks.
LLMs are prone to producing errors, hallucinations, and inconsistent statements when performing multi-step reasoning.
We introduce Q*, a framework for guiding LLMs' decoding process with deliberative planning.
arXiv Detail & Related papers (2024-06-20T13:08:09Z)
- A Survey of Useful LLM Evaluation [20.048914787813263]
Two-stage framework: from "core ability" to "agent"
In the "core ability" stage, we discussed the reasoning ability, societal impact, and domain knowledge of LLMs.
In the "agent" stage, we demonstrated embodied action, planning, and tool learning in LLM agent applications.
arXiv Detail & Related papers (2024-06-03T02:20:03Z)
- RepEval: Effective Text Evaluation with LLM Representation [54.07909112633993]
We introduce RepEval, the first metric leveraging the projection of LLM representations for evaluation.
RepEval requires minimal sample pairs for training, and through simple prompt modifications, it can easily transition to various tasks.
Results on ten datasets from three tasks demonstrate the high effectiveness of our method.
arXiv Detail & Related papers (2024-04-30T13:50:55Z)
- Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing [56.75702900542643]
We introduce AlphaLLM for the self-improvement of Large Language Models.
It integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving loop.
Our experimental results show that AlphaLLM significantly enhances the performance of LLMs without additional annotations.
arXiv Detail & Related papers (2024-04-18T15:21:34Z)
- Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension [63.330262740414646]
We study how to characterize and predict the truthfulness of texts generated from large language models (LLMs).
We suggest investigating internal activations and quantifying an LLM's truthfulness using the local intrinsic dimension (LID) of model activations; a sketch of a standard LID estimator follows this list.
arXiv Detail & Related papers (2024-02-28T04:56:21Z)
- Beyond the Answers: Reviewing the Rationality of Multiple Choice Question Answering for the Evaluation of Large Language Models [29.202758753639078]
This study investigates the limitations of Multiple Choice Question Answering (MCQA) as an evaluation method for Large Language Models (LLMs).
We propose a dataset augmenting method for Multiple-Choice Questions (MCQs), MCQA+, that can more accurately reflect the performance of the model.
arXiv Detail & Related papers (2024-02-02T12:07:00Z)
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria [44.401826163314716]
We propose a new evaluation paradigm for MLLMs that uses a potent MLLM as the judge.
We benchmark 21 popular MLLMs in a pairwise-comparison fashion, showing diverse performance across models.
The validity of our benchmark is demonstrated by its 88.02% agreement with human evaluation.
arXiv Detail & Related papers (2023-11-23T12:04:25Z)
- ChEF: A Comprehensive Evaluation Framework for Standardized Assessment of Multimodal Large Language Models [49.48109472893714]
Multimodal Large Language Models (MLLMs) have shown impressive abilities in interacting with visual content with myriad potential downstream tasks.
We present the first Comprehensive Evaluation Framework (ChEF) that can holistically profile each MLLM and fairly compare different MLLMs.
We will publicly release all the detailed implementations for further analysis, as well as an easy-to-use modular toolkit for the integration of new recipes and models.
arXiv Detail & Related papers (2023-11-05T16:01:40Z)
- Through the Lens of Core Competency: Survey on Evaluation of Large Language Models [27.271533306818732]
Large language models (LLMs) have excellent performance and wide practical uses.
Existing evaluation tasks struggle to keep up with the wide range of applications in real-world scenarios.
We summarize four core competencies of LLMs: reasoning, knowledge, reliability, and safety.
Under this competency architecture, similar tasks are combined to reflect the corresponding ability, while new tasks can also be easily added into the system.
arXiv Detail & Related papers (2023-08-15T17:40:34Z)
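As a companion to the local-intrinsic-dimension entry above, here is a minimal sketch of the Levina-Bickel maximum-likelihood estimator, a standard way to compute LID from nearest-neighbor distances; the cited paper's exact estimator, layer choice, and neighborhood size may differ, so treat the details below as assumptions.

```python
# Levina-Bickel MLE estimate of local intrinsic dimension (LID) around a query
# activation vector. Illustrative only; not taken from the cited paper.
import numpy as np

def lid_mle(activations: np.ndarray, query: np.ndarray, k: int = 20) -> float:
    """activations: (N, d) reference activation vectors; query: (d,) vector."""
    dists = np.sort(np.linalg.norm(activations - query, axis=1))
    dists = dists[dists > 0][:k]   # drop a zero self-distance if the query is in the set
    r_k = dists[-1]                # distance to the k-th nearest neighbor
    # MLE: LID = -1 / mean_{j<k} log(r_j / r_k)
    return -1.0 / float(np.mean(np.log(dists[:-1] / r_k)))
```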