Defining and Evaluating Decision and Composite Risk in Language Models Applied to Natural Language Inference
- URL: http://arxiv.org/abs/2408.01935v1
- Date: Sun, 4 Aug 2024 05:24:32 GMT
- Title: Defining and Evaluating Decision and Composite Risk in Language Models Applied to Natural Language Inference
- Authors: Ke Shen, Mayank Kejriwal,
- Abstract summary: Large language models (LLMs) such as ChatGPT are known to pose important risks.
One such set of risks arises from misplaced confidence, whether over-confidence or under-confidence, that the models have in their inference.
We propose an experimental framework consisting of a two-level inference architecture and appropriate metrics for measuring such risks.
- Score: 3.422309388045878
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite their impressive performance, large language models (LLMs) such as ChatGPT are known to pose important risks. One such set of risks arises from misplaced confidence, whether over-confidence or under-confidence, that the models have in their inference. While the former is well studied, the latter is not, leading to an asymmetry in understanding the comprehensive risk of the model based on misplaced confidence. In this paper, we address this asymmetry by defining two types of risk (decision and composite risk), and proposing an experimental framework consisting of a two-level inference architecture and appropriate metrics for measuring such risks in both discriminative and generative LLMs. The first level relies on a decision rule that determines whether the underlying language model should abstain from inference. The second level (which applies if the model does not abstain) is the model's inference. Detailed experiments on four natural language commonsense reasoning datasets using both an open-source ensemble-based RoBERTa model and ChatGPT demonstrate the practical utility of the evaluation framework. For example, our results show that our framework can get an LLM to confidently respond to an extra 20.1% of low-risk inference tasks that other methods might misclassify as high-risk, and skip 19.8% of high-risk tasks, which would have been answered incorrectly.
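To make the two-level architecture concrete, here is a minimal abstain-or-answer sketch; the function name, the answer-to-confidence interface, and the fixed threshold are assumptions for illustration, not the paper's exact decision rule:

```python
def two_level_inference(answer_confidences: dict[str, float], threshold: float = 0.5):
    """Level 1: a decision rule that abstains when the model's best confidence is low.
    Level 2: if the model does not abstain, commit to its highest-confidence answer.
    `answer_confidences` maps candidate answers to confidence scores; the 0.5 cut-off
    is a hypothetical placeholder for a risk-adjusted decision rule."""
    best_answer, best_confidence = max(answer_confidences.items(), key=lambda kv: kv[1])
    if best_confidence < threshold:   # Level 1: abstain on a (presumed) high-risk input
        return None
    return best_answer                # Level 2: the model's inference

# Example: abstain on an ambiguous NLI instance, answer a confident one
print(two_level_inference({"entailment": 0.41, "neutral": 0.38, "contradiction": 0.21}))
print(two_level_inference({"entailment": 0.86, "neutral": 0.09, "contradiction": 0.05}))
```

Under this reading, the 19.8% figure above corresponds to high-risk tasks where abstention avoids an incorrect answer, while the 20.1% figure corresponds to low-risk tasks where the rule lets the model answer rather than abstain unnecessarily.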
Related papers
- Sample then Identify: A General Framework for Risk Control and Assessment in Multimodal Large Language Models [46.56041622514975]
We introduce TRON, a two-step framework for risk control and assessment.
TRON achieves desired error rates bounded by two user-specified risk levels.
Deduplicated prediction sets maintain adaptiveness while being more efficient and stable for risk assessment under different risk levels.
arXiv Detail & Related papers (2024-10-10T17:50:42Z)
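A rough illustration of the sample-then-identify idea in the TRON entry above, assuming the prediction set is formed by thresholding the frequency of deduplicated sampled answers; the sampler stub and fixed cut-off are placeholders, not TRON's calibrated, risk-level-controlled procedure:

```python
import random
from collections import Counter

def sample_then_identify(sample_fn, k: int = 20, keep_fraction: float = 0.25) -> set[str]:
    """Step 1 (sample): draw k responses for one query.
    Step 2 (identify): deduplicate and keep answers that appear often enough.
    TRON calibrates this cut-off to user-specified risk levels; here it is fixed."""
    responses = [sample_fn() for _ in range(k)]
    counts = Counter(responses)                 # deduplicated prediction set candidates
    return {answer for answer, c in counts.items() if c / k >= keep_fraction}

# Example with a stub sampler standing in for a multimodal LLM
print(sample_then_identify(lambda: random.choice(["A", "A", "A", "B"])))
```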
- Controlling Risk of Retrieval-augmented Generation: A Counterfactual Prompting Framework [77.45983464131977]
We focus on how likely it is that a RAG model's prediction is incorrect, which can result in uncontrollable risks in real-world applications.
Our research identifies two critical latent factors affecting RAG's confidence in its predictions.
We develop a counterfactual prompting framework that induces the models to alter these factors and analyzes the effect on their answers.
arXiv Detail & Related papers (2024-09-24T14:52:14Z)
- CRiskEval: A Chinese Multi-Level Risk Evaluation Benchmark Dataset for Large Language Models [46.93425758722059]
CRiskEval is a Chinese dataset meticulously designed for gauging the risk proclivities inherent in large language models (LLMs).
We define a new risk taxonomy with 7 types of frontier risks and 4 safety levels: extremely hazardous, moderately hazardous, neutral, and safe.
The dataset consists of 14,888 questions that simulate scenarios related to the 7 predefined types of frontier risks.
arXiv Detail & Related papers (2024-06-07T08:52:24Z)
- Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning [61.2224355547598]
Open-sourcing of large language models (LLMs) accelerates application development, innovation, and scientific progress.
Our investigation exposes a critical oversight in the assumption that such open models pose little misuse risk: by deploying carefully designed demonstrations, we show that base LLMs can effectively interpret and execute malicious instructions.
arXiv Detail & Related papers (2024-04-16T13:22:54Z)
- C-RAG: Certified Generation Risks for Retrieval-Augmented Language Models [57.10361282229501]
We propose C-RAG, the first framework to certify generation risks for RAG models.
Specifically, we provide conformal risk analysis for RAG models and certify an upper confidence bound of generation risks.
We prove that RAG achieves a lower conformal generation risk than that of a single LLM when the quality of the retrieval model and transformer is non-trivial.
arXiv Detail & Related papers (2024-02-05T16:46:16Z)
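The flavour of certifying an upper confidence bound on generation risk (as in the C-RAG entry above) can be sketched with a generic one-sided Hoeffding bound over a calibration set; this is a simplified stand-in, not C-RAG's conformal risk analysis:

```python
import math

def risk_upper_bound(calibration_losses: list[float], delta: float = 0.05) -> float:
    """(1 - delta) upper confidence bound on expected risk, assuming losses in [0, 1].
    C-RAG derives its certificate via conformal risk analysis; the Hoeffding-style
    bound below is only an illustrative placeholder."""
    n = len(calibration_losses)
    empirical_risk = sum(calibration_losses) / n
    return empirical_risk + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# Example: 1000 calibration generations, 120 of which were judged failures
print(risk_upper_bound([1.0] * 120 + [0.0] * 880))
```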
- Improving the Reliability of Large Language Models by Leveraging Uncertainty-Aware In-Context Learning [76.98542249776257]
Large-scale language models often face the challenge of "hallucination".
We introduce an uncertainty-aware in-context learning framework to empower the model to enhance or reject its output in response to uncertainty.
arXiv Detail & Related papers (2023-10-07T12:06:53Z)
- A Formalism and Approach for Improving Robustness of Large Language Models Using Risk-Adjusted Confidence Scores [4.043005183192123]
Large Language Models (LLMs) have achieved impressive milestones in natural language processing (NLP).
Despite their impressive performance, the models are known to pose important risks.
We define and formalize two distinct types of risk: decision risk and composite risk.
arXiv Detail & Related papers (2023-10-05T03:20:41Z)
- On (assessing) the fairness of risk score models [2.0646127669654826]
Risk models are of interest for a number of reasons, including the fact that they communicate uncertainty about the potential outcomes to users.
We identify the provision of similar value to different groups as a key desideratum for risk score fairness.
We introduce a novel calibration error metric that is less sample-size-biased than previously proposed metrics.
arXiv Detail & Related papers (2023-02-17T12:45:51Z)
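For context on calibration error metrics (the fairness-of-risk-scores entry above proposes one that is less biased at small sample sizes), the standard equal-width-binned expected calibration error looks like this; it is the usual baseline, not the paper's proposed metric:

```python
def expected_calibration_error(confidences: list[float], correct: list[bool], n_bins: int = 10) -> float:
    """Standard binned ECE: the weighted average gap between mean confidence and
    accuracy within each equal-width confidence bin. Shown only as the common
    baseline that less sample-size-biased metrics are compared against."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences) if (lo < c <= hi) or (b == 0 and c == 0.0)]
        if not idx:
            continue
        mean_conf = sum(confidences[i] for i in idx) / len(idx)
        accuracy = sum(1.0 for i in idx if correct[i]) / len(idx)
        ece += (len(idx) / n) * abs(mean_conf - accuracy)
    return ece

# Tiny example with over-confident predictions
print(expected_calibration_error([0.9, 0.8, 0.95, 0.7], [True, False, True, False]))
```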
- Learning Bounds for Risk-sensitive Learning [86.50262971918276]
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss.
We study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents.
arXiv Detail & Related papers (2020-06-15T05:25:02Z)
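For reference, the optimized certainty equivalents mentioned in the last entry are conventionally written as follows (the textbook form, which may differ in notation from the paper):

```latex
% OCE risk of a loss Z, for a nondecreasing convex \phi with \phi(0) = 0:
\mathrm{OCE}_{\phi}(Z) \;=\; \inf_{\lambda \in \mathbb{R}} \Big\{ \lambda + \mathbb{E}\big[\phi(Z - \lambda)\big] \Big\}
% Special cases: \phi(t) = t recovers the expected loss, and
% \phi(t) = \max(t, 0)/(1 - \alpha) recovers the conditional value-at-risk CVaR_{\alpha}.
```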
This list is automatically generated from the titles and abstracts of the papers on this site.