Granting GPT-4 License and Opportunity: Enhancing Accuracy and Confidence Estimation for Few-Shot Event Detection
- URL: http://arxiv.org/abs/2408.00914v1
- Date: Thu, 1 Aug 2024 21:08:07 GMT
- Title: Granting GPT-4 License and Opportunity: Enhancing Accuracy and Confidence Estimation for Few-Shot Event Detection
- Authors: Steven Fincke, Adrien Bibal, Elizabeth Boschee
- Abstract summary: Large Language Models (LLMs) have shown enough promise in the few-shot learning context to suggest use in the generation of "silver" data.
Confidence estimation is a documented weakness of models such as GPT-4.
The present effort explores methods for effective confidence estimation with GPT-4 using few-shot learning for event detection in the BETTER ontology as a vehicle.
- Score: 6.718542027371254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) such as GPT-4 have shown enough promise in the few-shot learning context to suggest use in the generation of "silver" data and refinement of new ontologies through iterative application and review. Such workflows become more effective with reliable confidence estimation. Unfortunately, confidence estimation is a documented weakness of models such as GPT-4, and established methods to compensate require significant additional complexity and computation. The present effort explores methods for effective confidence estimation with GPT-4 with few-shot learning for event detection in the BETTER ontology as a vehicle. The key innovation is expanding the prompt and task presented to GPT-4 to provide License to speculate when unsure and Opportunity to quantify and explain its uncertainty (L&O). This approach improves accuracy and provides usable confidence measures (0.759 AUC) with no additional machinery.
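The abstract does not give the exact prompt wording or output format of the License and Opportunity (L&O) approach, but the idea can be sketched as follows. Both the instruction text and the `Confidence:` parsing convention below are illustrative assumptions, not the paper's actual prompt:

```python
import re

def build_lo_prompt(few_shot_examples, passage):
    """Assemble a few-shot event-detection prompt that grants the model
    License to speculate when unsure and Opportunity to quantify and
    explain its uncertainty (wording here is a hypothetical sketch)."""
    header = (
        "Identify events in the passage.\n"
        "License: if you are unsure, you may still propose a speculative answer.\n"
        "Opportunity: after each event, state Confidence: <0-100> and briefly "
        "explain any uncertainty.\n\n"
    )
    shots = "\n".join(f"Passage: {p}\nEvents: {e}" for p, e in few_shot_examples)
    return f"{header}{shots}\n\nPassage: {passage}\nEvents:"

def parse_confidence(model_output):
    """Extract the self-reported confidence as a probability in [0, 1]."""
    m = re.search(r"Confidence:\s*(\d{1,3})", model_output)
    return int(m.group(1)) / 100 if m else None

prompt = build_lo_prompt(
    [("Troops entered the city.", "movement (Confidence: 95)")],
    "Protesters may have clashed with police.",
)
reply = "conflict (Confidence: 60) - the passage hedges with 'may have'."
print(parse_confidence(reply))  # 0.6
```

The parsed scores could then be evaluated against gold labels (e.g. via AUC, as the paper reports) with no additional calibration machinery.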
Related papers
- Multi-stage Large Language Model Pipelines Can Outperform GPT-4o in Relevance Assessment [6.947361774195549]
We propose a modular classification pipeline that divides the relevance assessment task into multiple stages.
One of our approaches showed an 18.4% increase in Krippendorff's $\alpha$ over OpenAI's GPT-4o mini.
arXiv Detail & Related papers (2025-01-24T07:33:39Z) - Large Language Model Confidence Estimation via Black-Box Access [30.490207799344333]
We explore the problem of estimating confidence for responses of large language models (LLMs) with only black-box or query access to them.
We propose a simple and generalizable framework in which we engineer novel features and train an interpretable model (viz. logistic regression) on these features to estimate the confidence.
We empirically demonstrate that our simple framework is effective in estimating the confidence of Flan-ul2, Llama-13b, Mistral-7b, and GPT-4 on four benchmark Q&A tasks, as well as Pegasus-large and BART-large on two benchmark summarization tasks.
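The summary above names the interpretable model (logistic regression) but not the engineered features, so the sketch below substitutes two illustrative ones (self-consistency across resampled answers and a scaled answer length) and fits the regression with plain gradient descent:

```python
import math

def features(answers):
    """Hypothetical black-box features from k resampled answers
    to the same question (not the paper's actual feature set)."""
    agreement = answers.count(answers[0]) / len(answers)  # self-consistency
    length = len(answers[0].split()) / 10.0               # scaled answer length
    return [1.0, agreement, length]                       # bias + features

def fit_logreg(X, y, lr=0.5, steps=2000):
    """Plain gradient-descent logistic regression; the weights stay
    inspectable, matching the paper's interpretability goal."""
    w = [0.0] * len(X[0])
    for _ in range(steps):
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            w = [wj + lr * (yi - p) * xj for wj, xj in zip(w, xi)]
    return w

def confidence(w, answers):
    z = sum(wj * xj for wj, xj in zip(w, features(answers)))
    return 1 / (1 + math.exp(-z))

# Toy training data: label 1 means the model's answer was correct.
X = [features(a) for a in (["Paris"] * 3, ["Paris", "Lyon", "Rome"])]
y = [1, 0]
w = fit_logreg(X, y)
print(confidence(w, ["Paris"] * 3) > confidence(w, ["Paris", "Lyon", "Rome"]))  # True
```

Because only resampled answers are consumed, the scheme needs nothing beyond query access to the underlying LLM.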
arXiv Detail & Related papers (2024-06-01T02:08:44Z) - CoTAR: Chain-of-Thought Attribution Reasoning with Multi-level Granularity [8.377398103067508]
We introduce an attribution-oriented Chain-of-Thought reasoning method to enhance the accuracy of attributions.
Evaluations on two context-enhanced question-answering datasets using GPT-4 demonstrate improved accuracy and correctness of attributions.
arXiv Detail & Related papers (2024-04-16T12:37:10Z) - Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression [109.23761449840222]
This study conducts the first thorough evaluation of leading Large Language Models (LLMs) under compression.
We find that quantization is currently a more effective approach than pruning in achieving efficiency and trustworthiness simultaneously.
arXiv Detail & Related papers (2024-03-18T01:38:19Z) - Llamas Know What GPTs Don't Show: Surrogate Models for Confidence Estimation [70.27452774899189]
Large language models (LLMs) should signal low confidence on examples where they are incorrect, instead of misleading the user.
As of November 2023, state-of-the-art LLMs do not provide access to these probabilities.
Our best method composing linguistic confidences and surrogate model probabilities gives state-of-the-art confidence estimates on all 12 datasets.
arXiv Detail & Related papers (2023-11-15T11:27:44Z) - DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models [92.6951708781736]
This work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5.
We find that GPT models can be easily misled to generate toxic and biased outputs and leak private information.
Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps.
arXiv Detail & Related papers (2023-06-20T17:24:23Z) - GPT-4 Technical Report [116.90398195245983]
GPT-4 is a large-scale, multimodal model which can accept image and text inputs and produce text outputs.
It exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers.
arXiv Detail & Related papers (2023-03-15T17:15:04Z) - Prompting GPT-3 To Be Reliable [117.23966502293796]
This work decomposes reliability into four facets: generalizability, fairness, calibration, and factuality.
We find that GPT-3 outperforms smaller-scale supervised models by large margins on all these facets.
arXiv Detail & Related papers (2022-10-17T14:52:39Z) - An evaluation of word-level confidence estimation for end-to-end automatic speech recognition [70.61280174637913]
We investigate confidence estimation for end-to-end automatic speech recognition (ASR).
We provide an extensive benchmark of popular confidence methods on four well-known speech datasets.
Our results suggest a strong baseline can be obtained by scaling the logits by a learnt temperature.
arXiv Detail & Related papers (2021-01-14T09:51:59Z)
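The ASR baseline above, scaling the logits by a learnt temperature, can be sketched in a few lines. The tiny grid search below stands in for a proper NLL optimizer, and the toy held-out set is invented for illustration:

```python
import math

def softmax(logits, T=1.0):
    """Softmax over logits divided by a temperature T."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def nll(data, T):
    """Negative log-likelihood of the correct labels at temperature T."""
    return -sum(math.log(softmax(logits, T)[label]) for logits, label in data)

def fit_temperature(held_out, grid=(0.5, 1.0, 1.5, 2.0, 3.0)):
    """Pick the temperature minimizing held-out NLL (crude grid search)."""
    return min(grid, key=lambda T: nll(held_out, T))

# Toy held-out set of (logits, correct-label) pairs from an overconfident
# model: it assigns high logits even when the second example is wrong.
held_out = [([4.0, 0.0, 0.0], 0), ([4.0, 0.0, 0.0], 1), ([0.0, 4.0, 0.0], 1)]
T = fit_temperature(held_out)
print(T)
```

With the overconfident toy data the fitted temperature exceeds 1, so the max softmax probability (read as a word-level confidence) is pulled down toward better calibration.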
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.