Knowledge-Aware Neural Networks for Medical Forum Question
Classification
- URL: http://arxiv.org/abs/2109.13141v1
- Date: Mon, 27 Sep 2021 15:57:21 GMT
- Title: Knowledge-Aware Neural Networks for Medical Forum Question
Classification
- Authors: Soumyadeep Roy, Sudip Chakraborty, Aishik Mandal, Gunjan Balde,
Prakhar Sharma, Anandhavelu Natarajan, Megha Khosla, Shamik Sural, Niloy
Ganguly
- Abstract summary: We develop a medical knowledge-aware BERT-based model (MedBERT) that gives more weight to medical concept-bearing words.
We also contribute a multi-label dataset for the Medical Forum Question Classification (MFQC) task.
- Score: 13.22396257705293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online medical forums have become a predominant platform for answering
health-related information needs of consumers. However, with a significant rise
in the number of queries and the limited availability of experts, it is
necessary to automatically classify medical queries based on a consumer's
intention, so that these questions may be directed to the right set of medical
experts. Here, we develop a novel medical knowledge-aware BERT-based model
(MedBERT) that explicitly gives more weight to medical concept-bearing
words, and utilize domain-specific side information obtained from a popular
medical knowledge base. We also contribute a multi-label dataset for the
Medical Forum Question Classification (MFQC) task. MedBERT achieves
state-of-the-art performance on two benchmark datasets and performs very well
in low resource settings.
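The two ideas in the abstract (up-weighting medical concept-bearing words, and treating MFQC as multi-label classification) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the lexicon, boost factor, labels, and function names are all hypothetical, and MedBERT applies its weighting inside the transformer rather than in a simple pooling step.

```python
# Sketch of (1) giving concept-bearing tokens more weight when pooling
# token vectors, and (2) multi-label prediction via independent thresholds.
# CONCEPT_LEXICON stands in for entries from a medical knowledge base.

CONCEPT_LEXICON = {"fever", "rash", "dosage", "ibuprofen"}  # hypothetical

def pool_with_concept_weights(tokens, vectors, concept_boost=2.0):
    """Weighted mean pooling: concept-bearing tokens contribute more."""
    weights = [concept_boost if t.lower() in CONCEPT_LEXICON else 1.0
               for t in tokens]
    total = sum(weights)
    dim = len(vectors[0])
    return [sum(w * v[i] for w, v in zip(weights, vectors)) / total
            for i in range(dim)]

def multilabel_predict(label_scores, threshold=0.5):
    """Independent thresholding: a question may carry several intents."""
    return [label for label, score in label_scores.items()
            if score >= threshold]
```

With `concept_boost=2.0`, the token "fever" in ["I", "have", "fever"] contributes half of the pooled vector instead of a third, which is the weighting intuition the abstract describes.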
Related papers
- FEDMEKI: A Benchmark for Scaling Medical Foundation Models via Federated Knowledge Injection [83.54960238236548]
FEDMEKI not only preserves data privacy but also enhances the capability of medical foundation models.
FEDMEKI allows medical foundation models to learn from a broader spectrum of medical knowledge without direct data exposure.
arXiv Detail & Related papers (2024-08-17T15:18:56Z)
- COGNET-MD, an evaluation framework and dataset for Large Language Model benchmarks in the medical domain [1.6752458252726457]
Large Language Models (LLMs) are a breakthrough, state-of-the-art Artificial Intelligence (AI) technology.
We outline the Cognitive Network Evaluation Toolkit for Medical Domains (COGNET-MD).
We propose a scoring framework of increasing difficulty to assess the ability of LLMs to interpret medical text.
arXiv Detail & Related papers (2024-05-17T16:31:56Z)
- MedKP: Medical Dialogue with Knowledge Enhancement and Clinical Pathway Encoding [48.348511646407026]
We introduce the Medical dialogue with Knowledge enhancement and clinical Pathway encoding framework.
The framework integrates an external knowledge enhancement module through a medical knowledge graph and an internal clinical pathway encoding via medical entities and physician actions.
arXiv Detail & Related papers (2024-03-11T10:57:45Z)
- Transformer-based classification of user queries for medical consultancy with respect to expert specialization [4.124390946636936]
This research presents an innovative strategy, utilizing the RuBERT model, for categorizing user inquiries in the field of medical consultation.
We fine-tuned the pre-trained RuBERT model on a varied dataset, enabling precise matching between queries and particular medical specialties.
arXiv Detail & Related papers (2023-09-26T04:36:12Z)
- Med-Flamingo: a Multimodal Medical Few-shot Learner [58.85676013818811]
We propose Med-Flamingo, a multimodal few-shot learner adapted to the medical domain.
Based on OpenFlamingo-9B, we continue pre-training on paired and interleaved medical image-text data from publications and textbooks.
We conduct the first human evaluation for generative medical VQA where physicians review the problems and blinded generations in an interactive app.
arXiv Detail & Related papers (2023-07-27T20:36:02Z)
- Towards Medical Artificial General Intelligence via Knowledge-Enhanced Multimodal Pretraining [121.89793208683625]
Medical artificial general intelligence (MAGI) enables one foundation model to solve different medical tasks.
We propose a new paradigm called Medical-knowledge-enhanced mulTimOdal pretRaining (MOTOR).
arXiv Detail & Related papers (2023-04-26T01:26:19Z)
- MedPerf: Open Benchmarking Platform for Medical Artificial Intelligence using Federated Evaluation [110.31526448744096]
We argue that unlocking this potential requires a systematic way to measure the performance of medical AI models on large-scale heterogeneous data.
We are building MedPerf, an open framework for benchmarking machine learning in the medical domain.
arXiv Detail & Related papers (2021-09-29T18:09:41Z)
- SLAKE: A Semantically-Labeled Knowledge-Enhanced Dataset for Medical Visual Question Answering [29.496389523654596]
We present a large bilingual dataset, SLAKE, with comprehensive semantic labels annotated by experienced physicians.
Besides, SLAKE includes richer modalities and covers more human body parts than currently available datasets.
arXiv Detail & Related papers (2021-02-18T18:44:50Z)
- Retrieving and ranking short medical questions with two stages neural matching model [3.8020157990268206]
80 percent of internet users have asked health-related questions online.
Those representative questions and answers in medical fields are valuable raw data sources for medical data mining.
We propose a novel two-stage framework for the semantic matching of query-level medical questions.
arXiv Detail & Related papers (2020-11-16T07:00:35Z)
- MedDG: An Entity-Centric Medical Consultation Dataset for Entity-Aware Medical Dialogue Generation [86.38736781043109]
We build and release a large-scale high-quality Medical Dialogue dataset related to 12 types of common Gastrointestinal diseases named MedDG.
We propose two medical dialogue tasks based on the MedDG dataset: next entity prediction and doctor response generation.
Experimental results show that pre-trained language models and other baselines struggle on both tasks, performing poorly on our dataset.
arXiv Detail & Related papers (2020-10-15T03:34:33Z)
- Knowledge-Empowered Representation Learning for Chinese Medical Reading Comprehension: Task, Model and Resources [36.960318276653986]
We introduce a multi-target MRC task for the medical domain, whose goal is to predict answers to medical questions and the corresponding support sentences simultaneously.
We propose the Chinese medical BERT model for the task (CMedBERT), which fuses medical knowledge into pre-trained language models.
Experiments show that CMedBERT consistently outperforms strong baselines by fusing context-aware and knowledge-aware token representations.
arXiv Detail & Related papers (2020-08-24T11:23:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.