Walking a Tightrope -- Evaluating Large Language Models in High-Risk
Domains
- URL: http://arxiv.org/abs/2311.14966v1
- Date: Sat, 25 Nov 2023 08:58:07 GMT
- Title: Walking a Tightrope -- Evaluating Large Language Models in High-Risk
Domains
- Authors: Chia-Chien Hung, Wiem Ben Rim, Lindsay Frost, Lars Bruckner, Carolin
Lawrence
- Abstract summary: High-risk domains pose unique challenges that require language models to provide accurate and safe responses.
Despite the great success of large language models (LLMs), their performance in high-risk domains remains unclear.
- Score: 15.320563604087246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-risk domains pose unique challenges that require language models to
provide accurate and safe responses. Despite the great success of large
language models (LLMs), such as ChatGPT and its variants, their performance in
high-risk domains remains unclear. Our study delves into an in-depth analysis
of the performance of instruction-tuned LLMs, focusing on factual accuracy and
safety adherence. To comprehensively assess the capabilities of LLMs, we
conduct experiments on six NLP datasets including question answering and
summarization tasks within two high-risk domains: legal and medical. Further
qualitative analysis highlights the limitations inherent in current LLMs when
evaluated in high-risk domains. This underscores the need not only to improve
LLM capabilities but also to prioritize the refinement of domain-specific
metrics and to embrace a more human-centric approach to enhance safety and
factual reliability. Our findings advance the discussion on properly
evaluating LLMs in high-risk domains, aiming to steer the adaptation of LLMs
toward fulfilling societal obligations and aligning with forthcoming
regulations, such as the EU AI Act.
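As a concrete illustration of the kind of evaluation described above, the sketch below scores a model answer for factual overlap with a reference and for the presence of a safety disclaimer. It is a minimal approximation for illustration only: the token-F1 proxy, the disclaimer cue list, and the Example structure are assumptions made here, not the metrics or data format used in the paper.

    # Minimal illustrative scoring sketch; not the paper's actual protocol.
    from dataclasses import dataclass

    @dataclass
    class Example:
        question: str
        reference: str     # gold answer from a QA dataset
        model_answer: str  # response produced by the LLM under test

    def token_f1(prediction: str, reference: str) -> float:
        """Rough factuality proxy: token-level F1 against the reference answer."""
        pred, ref = prediction.lower().split(), reference.lower().split()
        common = sum(min(pred.count(t), ref.count(t)) for t in set(pred))
        if common == 0:
            return 0.0
        precision, recall = common / len(pred), common / len(ref)
        return 2 * precision * recall / (precision + recall)

    def has_safety_disclaimer(answer: str) -> bool:
        """Crude safety-adherence check: does the answer defer to a professional?"""
        cues = ("consult a doctor", "seek medical advice",
                "consult a lawyer", "not legal advice", "not medical advice")
        return any(cue in answer.lower() for cue in cues)

    def evaluate(examples: list[Example]) -> dict[str, float]:
        n = len(examples)
        return {
            "factual_f1": sum(token_f1(e.model_answer, e.reference) for e in examples) / n,
            "safety_rate": sum(has_safety_disclaimer(e.model_answer) for e in examples) / n,
        }

In practice, surface heuristics like these would be complemented by established dataset-specific metrics and by human or domain-expert review, in line with the human-centric approach the abstract recommends.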
Related papers
- SafeBench: A Safety Evaluation Framework for Multimodal Large Language Models [75.67623347512368]
We propose SafeBench, a comprehensive framework designed for conducting safety evaluations of MLLMs.
Our framework consists of a comprehensive harmful query dataset and an automated evaluation protocol.
Based on our framework, we conducted large-scale experiments on 15 widely-used open-source MLLMs and 6 commercial MLLMs.
arXiv Detail & Related papers (2024-10-24T17:14:40Z)
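For readers unfamiliar with automated safety-evaluation protocols such as the one summarized above, the following is a generic sketch of the pattern: a model under test answers harmful queries and a judge decides whether each response is safe. The interfaces and the keyword-based toy judge are assumptions for illustration; they are not SafeBench's actual dataset, judge, or scoring rules.

    # Generic sketch of an automated safety-evaluation loop; not SafeBench itself.
    from typing import Callable

    def run_safety_eval(
        harmful_queries: list[str],
        model_under_test: Callable[[str], str],  # assumed interface: prompt -> response
        judge: Callable[[str, str], bool],       # assumed interface: (query, response) -> is_safe
    ) -> float:
        """Return the fraction of harmful queries that received a safe response."""
        safe = sum(judge(q, model_under_test(q)) for q in harmful_queries)
        return safe / len(harmful_queries)

    def keyword_refusal_judge(query: str, response: str) -> bool:
        """Toy judge: treat explicit refusals as safe; real protocols use an LLM judge."""
        refusal_cues = ("i can't help", "i cannot help", "i'm sorry", "cannot assist")
        return any(cue in response.lower() for cue in refusal_cues)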
- Current state of LLM Risks and AI Guardrails [0.0]
Large language models (LLMs) have become increasingly sophisticated, leading to widespread deployment in sensitive applications where safety and reliability are paramount.
These risks necessitate the development of "guardrails" to align LLMs with desired behaviors and mitigate potential harm.
This work explores the risks associated with deploying LLMs and evaluates current approaches to implementing guardrails and model alignment techniques.
arXiv Detail & Related papers (2024-06-16T22:04:10Z)
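To make the notion of a guardrail concrete, here is a minimal sketch of the common wrap-the-model pattern: screen the user prompt before it reaches the LLM and screen the LLM's output before it reaches the user. The blocklist and refusal message are placeholder assumptions; production guardrails typically rely on trained safety classifiers or policy models rather than keyword matching.

    # Minimal illustrative guardrail wrapper; not taken from the cited paper.
    from typing import Callable

    BLOCKED_TOPICS = ("build a weapon", "synthesize an explosive")  # placeholder list
    REFUSAL = "I can't help with that request."

    def input_guard(prompt: str) -> bool:
        """Return True if the prompt may be forwarded to the model."""
        return not any(topic in prompt.lower() for topic in BLOCKED_TOPICS)

    def output_guard(response: str) -> bool:
        """Return True if the model response is safe to show the user."""
        return not any(topic in response.lower() for topic in BLOCKED_TOPICS)

    def guarded_generate(prompt: str, model: Callable[[str], str]) -> str:
        if not input_guard(prompt):
            return REFUSAL
        response = model(prompt)
        return response if output_guard(response) else REFUSAL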
- A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law [65.87885628115946]
Large language models (LLMs) are revolutionizing the landscapes of finance, healthcare, and law.
We highlight the instrumental role of LLMs in enhancing diagnostic and treatment methodologies in healthcare, innovating financial analytics, and refining legal interpretation and compliance strategies.
We critically examine the ethics of LLM applications in these fields, pointing out existing ethical concerns and the need for transparent, fair, and robust AI systems.
arXiv Detail & Related papers (2024-05-02T22:43:02Z)
- Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning [61.2224355547598]
Open-sourcing of large language models (LLMs) accelerates application development, innovation, and scientific progress, partly on the belief that base LLMs cannot be readily steered toward harmful ends.
Our investigation exposes a critical oversight in this belief.
By deploying carefully designed demonstrations, our research shows that base LLMs can effectively interpret and execute malicious instructions.
arXiv Detail & Related papers (2024-04-16T13:22:54Z)
- Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science [65.77763092833348]
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z)
- Benchmarking LLMs via Uncertainty Quantification [91.72588235407379]
The proliferation of open-source Large Language Models (LLMs) has highlighted the urgent need for comprehensive evaluation methods.
We introduce a new benchmarking approach for LLMs that integrates uncertainty quantification.
Our findings reveal that: I) LLMs with higher accuracy may exhibit lower certainty; II) Larger-scale LLMs may display greater uncertainty compared to their smaller counterparts; and III) Instruction-finetuning tends to increase the uncertainty of LLMs.
arXiv Detail & Related papers (2024-01-23T14:29:17Z)
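As a concrete illustration of the uncertainty quantification the benchmark above integrates, the sketch below computes predictive entropy over a model's answer-option probabilities on a multiple-choice item. This is a common, generic measure chosen for illustration; it is not claimed to be the exact method used in the paper.

    # Generic predictive-entropy illustration; not necessarily the paper's method.
    import math

    def softmax(logits: list[float]) -> list[float]:
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def predictive_entropy(option_logits: list[float]) -> float:
        """Entropy (in nats) of the model's distribution over answer options;
        higher entropy means the model is less certain of its choice."""
        probs = softmax(option_logits)
        return -sum(p * math.log(p) for p in probs if p > 0)

    # A confident model concentrates probability on one option (low entropy),
    # while an undecided model spreads it across the options (high entropy).
    low = predictive_entropy([5.0, 0.1, 0.2, 0.1])
    high = predictive_entropy([1.0, 0.9, 1.1, 1.0])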
- A Formalism and Approach for Improving Robustness of Large Language Models Using Risk-Adjusted Confidence Scores [4.043005183192123]
Large Language Models (LLMs) have achieved impressive milestones in natural language processing (NLP).
Despite this strong performance, the models are known to pose important risks.
We define and formalize two distinct types of risk: decision risk and composite risk.
arXiv Detail & Related papers (2023-10-05T03:20:41Z)
- Safety Assessment of Chinese Large Language Models [51.83369778259149]
Large language models (LLMs) may generate insulting and discriminatory content, reflect incorrect social values, and be used for malicious purposes.
To promote the deployment of safe, responsible, and ethical AI, we release SafetyPrompts, which includes 100k augmented prompts and responses generated by LLMs.
arXiv Detail & Related papers (2023-04-20T16:27:35Z)