Transferability of Natural Language Inference to Biomedical Question
Answering
- URL: http://arxiv.org/abs/2007.00217v4
- Date: Wed, 17 Feb 2021 06:40:48 GMT
- Title: Transferability of Natural Language Inference to Biomedical Question
Answering
- Authors: Minbyul Jeong, Mujeen Sung, Gangwoo Kim, Donghyeon Kim, Wonjin Yoon,
Jaehyo Yoo, Jaewoo Kang
- Abstract summary: We focus on applying BioBERT to transfer the knowledge of natural language inference (NLI) to biomedical question answering (QA).
We observe that BioBERT trained on the NLI dataset obtains better performance on Yes/No (+5.59%), Factoid (+0.53%), and List type (+13.58%) questions.
We present a sequential transfer learning method that achieved strong performance in the 8th BioASQ Challenge (Phase B).
- Score: 17.38537039378825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biomedical question answering (QA) is a challenging task due to the scarcity
of data and the requirement of domain expertise. Pre-trained language models
have been used to address these issues. Recently, learning relationships
between sentence pairs has been shown to improve performance in general QA. In
this paper, we focus on applying BioBERT to transfer the knowledge of natural
language inference (NLI) to biomedical QA. We observe that BioBERT trained on
the NLI dataset obtains better performance on Yes/No (+5.59%), Factoid
(+0.53%), and List type (+13.58%) questions compared to the performance
obtained in the previous challenge (BioASQ 7B Phase B). We present a sequential
transfer learning method that achieved strong performance in the 8th BioASQ
Challenge (Phase B). In sequential transfer learning, the order in which tasks
are fine-tuned matters. We also measure the unanswerable rate of the extractive
QA setting when factoid and list type questions are converted to the format of
the Stanford Question Answering Dataset (SQuAD).
Related papers
- ScholarChemQA: Unveiling the Power of Language Models in Chemical Research Question Answering [54.80411755871931]
Question Answering (QA) effectively evaluates language models' reasoning and knowledge depth.
Chemical QA plays a crucial role in both education and research by effectively translating complex chemical information into a readily understandable format.
This dataset reflects typical real-world challenges, including an imbalanced data distribution and a substantial amount of unlabeled data that can be potentially useful.
We introduce a QAMatch model, specifically designed to effectively answer chemical questions by fully leveraging our collected data.
arXiv Detail & Related papers (2024-07-24T01:46:55Z) - Test-Time Self-Adaptive Small Language Models for Question Answering [63.91013329169796]
We demonstrate and investigate the capabilities of smaller self-adaptive LMs using only unlabeled test data.
Our proposed self-adaption strategy demonstrates significant performance improvements on benchmark QA datasets.
arXiv Detail & Related papers (2023-10-20T06:49:32Z) - Query-focused Extractive Summarisation for Biomedical and COVID-19
Complex Question Answering [0.0]
This paper presents Macquarie University's participation in the two most recent BioASQ Synergy Tasks.
We apply query-focused extractive summarisation techniques to generate complex answers to biomedical questions.
For the Synergy task, we selected the candidate sentences following two phases: document retrieval and snippet retrieval.
We observed an improvement of results when the system was trained on the second half of the BioASQ10b training data.
arXiv Detail & Related papers (2022-09-05T07:56:44Z) - Query-Focused Extractive Summarisation for Finding Ideal Answers to
Biomedical and COVID-19 Questions [7.6997148655751895]
Macquarie University participated in the BioASQ Synergy Task and BioASQ9b Phase B.
We used a query-focused summarisation system that was trained with the BioASQ8b training data set.
Despite the poor quality of the documents and snippets retrieved by our system, the answers returned were of reasonably good quality.
arXiv Detail & Related papers (2021-08-27T09:19:42Z) - A New Score for Adaptive Tests in Bayesian and Credal Networks [64.80185026979883]
A test is adaptive when its sequence and number of questions are dynamically tuned based on the estimated skills of the test taker.
We present an alternative family of scores, based on the mode of the posterior probabilities, and hence easier to explain.
arXiv Detail & Related papers (2021-05-25T20:35:42Z) - TAT-QA: A Question Answering Benchmark on a Hybrid of Tabular and
Textual Content in Finance [71.76018597965378]
We build a new large-scale Question Answering dataset containing both Tabular And Textual data, named TAT-QA.
We propose a novel QA model termed TAGOP, which is capable of reasoning over both tables and text.
arXiv Detail & Related papers (2021-05-17T06:12:06Z) - Sequence Tagging for Biomedical Extractive Question Answering [12.464143741310137]
We investigate the difference in question distributions between the general and biomedical domains.
We discover that biomedical questions are more likely to require list-type answers (multiple answers) than factoid-type answers (single answer).
Our approach can learn to decide the number of answers for a question from training data.
arXiv Detail & Related papers (2021-04-15T15:42:34Z) - Understanding Unnatural Questions Improves Reasoning over Text [54.235828149899625]
Complex question answering (CQA) over raw text is a challenging task.
Learning an effective CQA model requires large amounts of human-annotated data.
We address the challenge of learning a high-quality programmer (parser) by projecting natural human-generated questions into unnatural machine-generated questions.
arXiv Detail & Related papers (2020-10-19T10:22:16Z) - Unsupervised Pre-training for Biomedical Question Answering [32.525495687236194]
We introduce a new pre-training task from unlabeled data designed to reason about biomedical entities in the context.
Our experiments show that pre-training BioBERT on the proposed pre-training task significantly boosts performance and outperforms the previous best model from the 7th BioASQ Task 7b-Phase B challenge.
arXiv Detail & Related papers (2020-09-27T21:07:51Z) - Exploring and Predicting Transferability across NLP Tasks [115.6278033699853]
We study the transferability between 33 NLP tasks across three broad classes of problems.
Our results show that transfer learning is more beneficial than previously thought.
We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task.
arXiv Detail & Related papers (2020-05-02T09:39:36Z) - UNCC Biomedical Semantic Question Answering Systems. BioASQ: Task-7B,
Phase-B [1.976652238476722]
We present our approach for Task-7b, Phase B, Exact Answering Task.
These Question Answering (QA) tasks include Factoid, Yes/No, and List type questions.
Our system is based on a contextual word embedding model.
arXiv Detail & Related papers (2020-02-05T20:43:14Z)