NuclearQA: A Human-Made Benchmark for Language Models for the Nuclear Domain
- URL: http://arxiv.org/abs/2310.10920v1
- Date: Tue, 17 Oct 2023 01:27:20 GMT
- Title: NuclearQA: A Human-Made Benchmark for Language Models for the Nuclear Domain
- Authors: Anurag Acharya, Sai Munikoti, Aaron Hellinger, Sara Smith, Sridevi
Wagle, and Sameera Horawalavithana
- Abstract summary: NuclearQA is a human-made benchmark of 100 questions to evaluate language models in the nuclear domain.
We show how the mix of several types of questions makes our benchmark uniquely capable of evaluating models in the nuclear domain.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As LLMs have become increasingly popular, they have been used in
almost every field. But as the applications of LLMs expand from generic fields
to narrow, focused science domains, there is an ever-widening gap in ways to
evaluate their efficacy in those fields. Of the benchmarks that do exist, many
focus on questions that do not require a proper understanding of the subject in
question. In this paper, we present NuclearQA, a human-made benchmark of 100
questions to evaluate language models in the nuclear domain, consisting of a
varied collection of questions that have been specifically designed by experts
to test the abilities of language models. We detail our approach and show how
the mix of several types of questions makes our benchmark uniquely capable of
evaluating models in the nuclear domain. We also present our own evaluation
metric for assessing LLMs' performance, owing to the limitations of existing
ones. Our experiments on state-of-the-art models suggest that even the best
LLMs perform less than satisfactorily on our benchmark, demonstrating the
scientific knowledge gap of existing LLMs.