MatSci-NLP: Evaluating Scientific Language Models on Materials Science
Language Tasks Using Text-to-Schema Modeling
- URL: http://arxiv.org/abs/2305.08264v1
- Date: Sun, 14 May 2023 22:01:24 GMT
- Authors: Yu Song, Santiago Miret, Bang Liu
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present MatSci-NLP, a natural language benchmark for evaluating the
performance of natural language processing (NLP) models on materials science
text. We construct the benchmark from publicly available materials science text
data to encompass seven different NLP tasks, including conventional NLP tasks
like named entity recognition and relation classification, as well as NLP tasks
specific to materials science, such as synthesis action retrieval, which relates
to creating synthesis procedures for materials. We study various BERT-based
models pretrained on different scientific text corpora on MatSci-NLP to
assess how pretraining strategies affect their understanding of materials
science text. Given the scarcity of high-quality annotated data in the
materials science domain, we perform our fine-tuning experiments with limited
training data to encourage generalization across MatSci-NLP tasks. Our
experiments in this low-resource training setting show that language models
pretrained on scientific text outperform BERT trained on general text. MatBERT,
a model pretrained specifically on materials science journals, generally
performs best for most tasks. Moreover, we propose a unified text-to-schema
approach for multitask learning on MatSci-NLP and compare its performance with traditional
fine-tuning methods. In our analysis of different training methods, we find
that our proposed text-to-schema methods inspired by question-answering
consistently outperform single and multitask NLP fine-tuning methods. The code
and datasets are publicly available at
https://github.com/BangLab-UdeM-Mila/NLP4MatSci-ACL23.
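
For illustration, a text-to-schema input can pair the raw text with a task schema that constrains the model's answer space, similar in spirit to question answering. The following Python sketch is hypothetical: the TaskSchema fields, the example choices, and the build_schema_prompt helper are assumptions made for illustration, not the paper's actual code (see the linked repository for the authors' implementation).

    # A minimal sketch of a text-to-schema style task encoding, assuming a
    # QA-inspired formulation as described in the abstract. The schema fields,
    # example choices, and helper name are hypothetical illustrations, not the
    # actual MatSci-NLP implementation.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TaskSchema:
        """Describes one task: a question plus a constrained answer space."""
        task: str            # task identifier, e.g. "named entity recognition"
        question: str        # natural-language question posed to the model
        choices: List[str]   # answers the decoder is allowed to produce

    def build_schema_prompt(text: str, schema: TaskSchema) -> str:
        """Serialize input text and its task schema into one model input.

        Constraining outputs to the listed choices lets a single model be
        fine-tuned jointly on several tasks with one input/output format.
        """
        return (
            f"task: {schema.task}\n"
            f"question: {schema.question}\n"
            f"choices: {' | '.join(schema.choices)}\n"
            f"text: {text}"
        )

    if __name__ == "__main__":
        # Hypothetical schema for a materials-science NER example.
        ner = TaskSchema(
            task="named entity recognition",
            question="Which entity type does the marked span belong to?",
            choices=["material", "property", "synthesis action", "none"],
        )
        print(build_schema_prompt("LiFePO4 was sintered at 700 C for 10 h.", ner))

Serializing every task into the same prompt-plus-choices format is what allows one model to be fine-tuned across all seven tasks at once, which is particularly useful in the low-resource setting the paper studies.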