Enhancing Medical Dialogue Generation through Knowledge Refinement and Dynamic Prompt Adjustment
- URL: http://arxiv.org/abs/2506.10877v1
- Date: Thu, 12 Jun 2025 16:44:25 GMT
- Title: Enhancing Medical Dialogue Generation through Knowledge Refinement and Dynamic Prompt Adjustment
- Authors: Hongda Sun, Jiaren Peng, Wenzhong Yang, Liang He, Bo Du, Rui Yan
- Abstract summary: Medical dialogue systems (MDS) have emerged as crucial online platforms for enabling multi-turn, context-aware conversations with patients. We propose MedRef, a novel MDS that incorporates knowledge refining and dynamic prompt adjustment. We show that MedRef outperforms state-of-the-art baselines in both generation quality and medical entity accuracy.
- Score: 43.01880734118588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical dialogue systems (MDS) have emerged as crucial online platforms for enabling multi-turn, context-aware conversations with patients. However, existing MDS often struggle to (1) identify relevant medical knowledge and (2) generate personalized, medically accurate responses. To address these challenges, we propose MedRef, a novel MDS that incorporates knowledge refining and dynamic prompt adjustment. First, we employ a knowledge refining mechanism to filter out irrelevant medical data, improving predictions of critical medical entities in responses. Additionally, we design a comprehensive prompt structure that incorporates historical details and evident details. To enable real-time adaptability to diverse patient conditions, we implement two key modules, Triplet Filter and Demo Selector, which supply appropriate knowledge and demonstrations in the system prompt. Extensive experiments on the MedDG and KaMed benchmarks show that MedRef outperforms state-of-the-art baselines in both generation quality and medical entity accuracy, underscoring its effectiveness and reliability for real-world healthcare applications.
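The Triplet Filter / Demo Selector pipeline the abstract describes can be sketched roughly as follows. All function names, the lexical-overlap scoring, and the threshold are illustrative stand-ins, not the paper's actual implementation, which presumably uses learned relevance models:

```python
def relevance(text: str, context: str) -> float:
    """Toy lexical-overlap (Jaccard) score standing in for a learned relevance model."""
    a, b = set(text.lower().split()), set(context.lower().split())
    return len(a & b) / max(len(a | b), 1)

def triplet_filter(triplets, context, threshold=0.1):
    """Drop knowledge triplets (head, relation, tail) irrelevant to the dialogue context."""
    return [t for t in triplets if relevance(" ".join(t), context) >= threshold]

def demo_selector(demos, context, k=2):
    """Keep the k demonstration dialogues most similar to the current context."""
    return sorted(demos, key=lambda d: relevance(d, context), reverse=True)[:k]

def build_prompt(context, triplets, demos):
    """Assemble a system prompt from the filtered knowledge and selected demos."""
    kept = triplet_filter(triplets, context)
    picked = demo_selector(demos, context)
    knowledge = "\n".join(" | ".join(t) for t in kept)
    return (f"Knowledge:\n{knowledge}\n\nDemos:\n" + "\n".join(picked)
            + f"\n\nDialogue:\n{context}\nDoctor:")
```

The point of the sketch is the control flow: both modules run per turn, so the knowledge and demonstrations placed in the prompt adapt to the current patient state rather than being fixed at deployment time.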
Related papers
- MedMKEB: A Comprehensive Knowledge Editing Benchmark for Medical Multimodal Large Language Models [5.253788190589279]
We present MedMKEB, the first comprehensive benchmark designed to evaluate the reliability, generality, locality, portability, and robustness of knowledge editing. MedMKEB is built on a high-quality medical visual question-answering dataset and enriched with carefully constructed editing tasks. We incorporate human expert validation to ensure the accuracy and reliability of the benchmark.
arXiv Detail & Related papers (2025-08-07T07:09:26Z) - Lingshu: A Generalist Foundation Model for Unified Multimodal Medical Understanding and Reasoning [57.873833577058]
We build a multimodal dataset enriched with extensive medical knowledge. We then introduce our medical-specialized MLLM: Lingshu. Lingshu undergoes multi-stage training to embed medical expertise and enhance its task-solving capabilities.
arXiv Detail & Related papers (2025-06-08T08:47:30Z) - DualPrompt-MedCap: A Dual-Prompt Enhanced Approach for Medical Image Captioning [5.456249017636404]
We present DualPrompt-MedCap, a novel dual-prompt enhancement framework that augments Large Vision-Language Models (LVLMs) with two prompts: a modality-aware prompt derived from a semi-supervised classification model pretrained on medical question-answer pairs, and a question-guided prompt leveraging biomedical language model embeddings. Our method enables the generation of clinically accurate reports that can serve as medical experts' prior knowledge and automatic annotations for downstream vision-language tasks.
arXiv Detail & Related papers (2025-04-13T14:31:55Z) - Medchain: Bridging the Gap Between LLM Agents and Clinical Practice through Interactive Sequential Benchmarking [58.25862290294702]
We present MedChain, a dataset of 12,163 clinical cases covering five key stages of the clinical workflow. We also propose MedChain-Agent, an AI system that integrates a feedback mechanism and an MCase-RAG module to learn from previous cases and adapt its responses.
arXiv Detail & Related papers (2024-12-02T15:25:02Z) - MedAide: Information Fusion and Anatomy of Medical Intents via LLM-based Agent Collaboration [19.951977369610983]
MedAide is a medical multi-agent collaboration framework designed to enable intent-aware information fusion and coordinated reasoning. We introduce a regularization-guided module that combines syntactic constraints with retrieval-augmented generation to decompose complex queries. We also propose a dynamic intent prototype matching module to achieve adaptive recognition and updating of the agent's intent.
arXiv Detail & Related papers (2024-10-16T13:10:27Z) - MediQ: Question-Asking LLMs and a Benchmark for Reliable Interactive Clinical Reasoning [36.400896909161006]
We develop systems that proactively ask questions to gather more information and respond reliably.
We introduce a benchmark - MediQ - to evaluate question-asking ability in LLMs.
arXiv Detail & Related papers (2024-06-03T01:32:52Z) - MedKP: Medical Dialogue with Knowledge Enhancement and Clinical Pathway Encoding [48.348511646407026]
We introduce the Medical dialogue with Knowledge enhancement and clinical Pathway encoding framework.
The framework integrates an external knowledge enhancement module through a medical knowledge graph and an internal clinical pathway encoding via medical entities and physician actions.
arXiv Detail & Related papers (2024-03-11T10:57:45Z) - PlugMed: Improving Specificity in Patient-Centered Medical Dialogue Generation using In-Context Learning [20.437165038293426]
Patient-centered medical dialogue systems strive to offer diagnostic interpretation services to users with limited medical knowledge.
Despite their promising performance, large language models (LLMs) struggle to guarantee the specificity of their responses.
Inspired by in-context learning, we propose PlugMed, a Plug-and-Play Medical Dialogue System.
arXiv Detail & Related papers (2023-05-19T08:18:24Z) - Semi-Supervised Variational Reasoning for Medical Dialogue Generation [70.838542865384]
Two key characteristics are relevant for medical dialogue generation: patient states and physician actions.
We propose an end-to-end variational reasoning approach to medical dialogue generation.
A physician policy network, composed of an action classifier and two reasoning detectors, is proposed to augment reasoning ability.
arXiv Detail & Related papers (2021-05-13T04:14:35Z) - MedDG: An Entity-Centric Medical Consultation Dataset for Entity-Aware Medical Dialogue Generation [86.38736781043109]
We build and release MedDG, a large-scale, high-quality medical dialogue dataset covering 12 types of common gastrointestinal diseases.
We propose two medical dialogue tasks based on the MedDG dataset: next entity prediction and doctor response generation.
Experimental results show that pre-trained language models and other baselines struggle on both tasks, with poor performance on our dataset.
arXiv Detail & Related papers (2020-10-15T03:34:33Z)
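As a concrete (hypothetical) rendering of MedDG's two tasks, next entity prediction can be framed as multi-label prediction over a medical-entity vocabulary, with doctor response generation conditioned on the predicted entities. Everything below, including the vocabulary and the stub heuristics, is illustrative and not the dataset's actual baselines:

```python
# Tiny stand-in entity vocabulary; the real dataset annotates many more entities.
ENTITY_VOCAB = ["stomach pain", "gastritis", "omeprazole"]

def predict_next_entities(history: str) -> list[str]:
    """Task 1 stub: flag an entity as relevant if it already appears in the history.
    A real baseline would be a classifier over the dialogue encoding."""
    h = history.lower()
    return [e for e in ENTITY_VOCAB if e in h]

def generate_response(history: str, entities: list[str]) -> str:
    """Task 2 stub: a real system feeds the predicted entities to a decoder
    so the generated reply is entity-aware."""
    mention = ", ".join(entities) if entities else "your symptoms"
    return f"Doctor: Could you tell me more about {mention}?"
```

The split mirrors the benchmark design: entity prediction is evaluated independently, then its output conditions the generation stage.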