LingYi: Medical Conversational Question Answering System based on
Multi-modal Knowledge Graphs
- URL: http://arxiv.org/abs/2204.09220v1
- Date: Wed, 20 Apr 2022 04:41:26 GMT
- Title: LingYi: Medical Conversational Question Answering System based on
Multi-modal Knowledge Graphs
- Authors: Fei Xia, Bin Li, Yixuan Weng, Shizhu He, Kang Liu, Bin Sun, Shutao Li
and Jun Zhao
- Abstract summary: This paper presents a medical conversational question answering (CQA) system, named "LingYi", based on a multi-modal knowledge graph.
Our system automates medical procedures including medical triage, consultation, image-text drug recommendation, and record keeping.
- Score: 35.55690461944328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical conversational systems can relieve the burden on doctors and
improve the efficiency of healthcare, especially during the pandemic. This
paper presents a medical conversational question answering (CQA) system based
on a multi-modal knowledge graph, named "LingYi", which is designed as a
pipeline framework to maintain high flexibility. Our system automates
medical procedures including medical triage, consultation, image-text drug
recommendation, and record keeping. To conduct knowledge-grounded dialogues with
patients, we first construct a Chinese Medical Multi-Modal Knowledge Graph
(CM3KG) and collect a large-scale Chinese Medical CQA (CMCQA) dataset. Compared
with other existing medical question-answering systems, our system adopts
several state-of-the-art technologies, including medical entity disambiguation
and medical dialogue generation, making it better suited to providing medical
services to patients. In addition, we have open-sourced our code, which contains
the back-end models and front-end web pages, at https://github.com/WENGSYX/LingYi.
The datasets, including CM3KG at https://github.com/WENGSYX/CM3KG and CMCQA at
https://github.com/WENGSYX/CMCQA, are also released to promote future
research.
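
As a rough illustration of the pipeline design described in the abstract, the following minimal Python sketch chains triage, knowledge-grounded consultation, image-text drug recommendation, and record keeping over a toy knowledge-graph fragment. All names here (DialogueState, triage, consult, recommend, toy_kg) are illustrative assumptions, not the LingYi implementation, which is available in the repository linked above.

    # Minimal sketch of a pipeline-style medical CQA flow.
    # Hypothetical module names; the real system is in the linked repository.
    from dataclasses import dataclass, field
    from typing import List, Optional


    @dataclass
    class DialogueState:
        """What the system has gathered about the patient so far."""
        symptoms: List[str] = field(default_factory=list)
        department: Optional[str] = None
        diagnosis: Optional[str] = None
        record: List[str] = field(default_factory=list)


    def triage(state: DialogueState, utterance: str) -> str:
        """Route the patient to a department based on the reported symptom."""
        # Placeholder rule; a real system would use a trained triage classifier.
        state.symptoms.append(utterance)
        state.department = "dermatology" if "rash" in utterance else "general medicine"
        return state.department


    def consult(state: DialogueState, kg: dict) -> str:
        """Ground the consultation in the knowledge graph to pick a diagnosis."""
        for disease, info in kg.items():
            if any(s in info["symptoms"] for s in state.symptoms):
                state.diagnosis = disease
                return f"Your symptoms may indicate {disease}."
        return "Could you describe your symptoms in more detail?"


    def recommend(state: DialogueState, kg: dict) -> str:
        """Return the image-text drug entries attached to the diagnosed disease."""
        if state.diagnosis is None:
            return "No recommendation yet."
        drugs = kg[state.diagnosis]["drugs"]
        return "Suggested drugs: " + ", ".join(f"{d['name']} ({d['image']})" for d in drugs)


    if __name__ == "__main__":
        # Toy knowledge-graph fragment standing in for CM3KG.
        toy_kg = {
            "eczema": {
                "symptoms": ["rash"],
                "drugs": [{"name": "hydrocortisone cream", "image": "img/hc.png"}],
            }
        }
        state = DialogueState()
        triage(state, "rash")
        state.record.append(consult(state, toy_kg))    # consultation turn
        state.record.append(recommend(state, toy_kg))  # drug recommendation turn
        print("\n".join(state.record))                 # the saved medical record
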
Related papers
- MediFact at MEDIQA-M3G 2024: Medical Question Answering in Dermatology with Multimodal Learning [0.0]
This paper addresses the limitations of traditional methods by proposing a weakly supervised learning approach for open-ended medical question-answering (QA).
Our system leverages readily available MEDIQA-M3G images via a VGG16-CNN-SVM model, enabling multilingual learning of informative skin condition representations.
This work advances medical QA research, paving the way for clinical decision support systems and ultimately improving healthcare delivery.
arXiv Detail & Related papers (2024-04-27T20:03:47Z) - MedKP: Medical Dialogue with Knowledge Enhancement and Clinical Pathway
Encoding [48.348511646407026]
We introduce the Medical dialogue with Knowledge enhancement and clinical Pathway encoding framework.
The framework integrates an external knowledge enhancement module through a medical knowledge graph and an internal clinical pathway encoding via medical entities and physician actions.
arXiv Detail & Related papers (2024-03-11T10:57:45Z) - AI Hospital: Benchmarking Large Language Models in a Multi-agent Medical Interaction Simulator [69.51568871044454]
We introduce AI Hospital, a framework simulating dynamic medical interactions between a Doctor, as the player, and NPCs.
This setup allows for realistic assessments of LLMs in clinical scenarios.
We develop the Multi-View Medical Evaluation benchmark, utilizing high-quality Chinese medical records and NPCs.
arXiv Detail & Related papers (2024-02-15T06:46:48Z) - MedSumm: A Multimodal Approach to Summarizing Code-Mixed Hindi-English
Clinical Queries [16.101969130235055]
We introduce the Multimodal Medical Codemixed Question Summarization (MMCQS) dataset.
This dataset combines Hindi-English codemixed medical queries with visual aids.
Our dataset, code, and pre-trained models will be made publicly available.
arXiv Detail & Related papers (2024-01-03T07:58:25Z) - MedChatZH: a Better Medical Adviser Learns from Better Instructions [11.08819869122466]
We introduce MedChatZH, a dialogue model designed specifically for traditional Chinese medical QA.
Our model is pre-trained on traditional Chinese medicine books and fine-tuned with a carefully curated medical instruction dataset.
It outperforms several solid baselines on a real-world medical dialogue dataset.
arXiv Detail & Related papers (2023-09-03T08:08:15Z) - Med-Flamingo: a Multimodal Medical Few-shot Learner [58.85676013818811]
We propose Med-Flamingo, a multimodal few-shot learner adapted to the medical domain.
Based on OpenFlamingo-9B, we continue pre-training on paired and interleaved medical image-text data from publications and textbooks.
We conduct the first human evaluation for generative medical VQA where physicians review the problems and blinded generations in an interactive app.
arXiv Detail & Related papers (2023-07-27T20:36:02Z) - PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering [56.25766322554655]
Medical Visual Question Answering (MedVQA) presents a significant opportunity to enhance diagnostic accuracy and healthcare delivery.
We propose a generative model for medical visual understanding by aligning visual information from a pre-trained vision encoder with a large language model.
We train the proposed model on PMC-VQA and then fine-tune it on multiple public benchmarks, e.g., VQA-RAD, SLAKE, and ImageCLEF 2019.
arXiv Detail & Related papers (2023-05-17T17:50:16Z) - CDialog: A Multi-turn Covid-19 Conversation Dataset for Entity-Aware
Dialog Generation [18.047064216849204]
We release a high-quality multi-turn medical dialog dataset related to Covid-19, named CDialog.
We propose a novel neural medical dialog system based on the CDialog dataset to advance future research on developing automated medical dialog systems.
arXiv Detail & Related papers (2022-11-16T11:07:34Z) - MedDG: An Entity-Centric Medical Consultation Dataset for Entity-Aware
Medical Dialogue Generation [86.38736781043109]
We build and release MedDG, a large-scale, high-quality medical dialogue dataset covering 12 types of common gastrointestinal diseases.
We propose two medical dialogue tasks based on the MedDG dataset: next entity prediction and doctor response generation.
Experimental results show that pre-trained language models and other baselines struggle on both tasks, performing poorly on our dataset.
arXiv Detail & Related papers (2020-10-15T03:34:33Z)