Zero-Shot Medical Information Retrieval via Knowledge Graph Embedding
- URL: http://arxiv.org/abs/2310.20588v1
- Date: Tue, 31 Oct 2023 16:26:33 GMT
- Title: Zero-Shot Medical Information Retrieval via Knowledge Graph Embedding
- Authors: Yuqi Wang, Zeqiang Wang, Wei Wang, Qi Chen, Kaizhu Huang, Anh Nguyen,
and Suparna De
- Abstract summary: This paper introduces MedFusionRank, a novel approach to zero-shot medical information retrieval (MIR).
The proposed approach leverages a pre-trained BERT-style model to extract compact yet informative keywords.
These keywords are then enriched with domain knowledge by linking them to conceptual entities within a medical knowledge graph.
- Score: 27.14794371879541
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the era of the Internet of Things (IoT), the retrieval of relevant medical
information has become essential for efficient clinical decision-making. This
paper introduces MedFusionRank, a novel approach to zero-shot medical
information retrieval (MIR) that combines the strengths of pre-trained language
models and statistical methods while addressing their limitations. The proposed
approach leverages a pre-trained BERT-style model to extract compact yet
informative keywords. These keywords are then enriched with domain knowledge by
linking them to conceptual entities within a medical knowledge graph.
Experimental evaluations on medical datasets demonstrate MedFusionRank's
superior performance over existing methods, with promising results across a
variety of evaluation metrics. MedFusionRank demonstrates efficacy in
retrieving relevant information, even from short or single-term queries.
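
To make the pipeline described above concrete, the sketch below shows one way such a system could be assembled: encode the query with a BERT-style model, enrich its keywords with linked entities from a medical knowledge graph, and rank documents against the expanded query. The model name, the toy knowledge graph, and the helper functions (embed, expand_query, rank) are illustrative assumptions for this sketch, not the authors' MedFusionRank implementation.

```python
# Minimal sketch of a zero-shot retrieval pipeline combining a BERT-style
# encoder with knowledge-graph enrichment. The model name, toy knowledge
# graph, and helper functions are illustrative assumptions, not the
# authors' MedFusionRank implementation.
from transformers import AutoTokenizer, AutoModel
import torch

MODEL_NAME = "dmis-lab/biobert-base-cased-v1.1"  # any BERT-style biomedical encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

# Toy medical knowledge graph: keyword -> linked conceptual entities (assumed).
TOY_KG = {
    "myocardial infarction": ["heart attack", "troponin", "coronary artery"],
    "hypertension": ["high blood pressure", "antihypertensive agent"],
}

def embed(text: str) -> torch.Tensor:
    """Return a mean-pooled contextual embedding for a text span."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)

def expand_query(query: str) -> list[str]:
    """Enrich the query with conceptual entities linked in the knowledge graph."""
    terms = [query.lower()]
    for keyword, entities in TOY_KG.items():
        if keyword in query.lower():
            terms.extend(entities)
    return terms

def rank(query: str, documents: list[str]) -> list[tuple[float, str]]:
    """Score documents against the KG-expanded query by cosine similarity."""
    query_vec = torch.stack([embed(t) for t in expand_query(query)]).mean(dim=0)
    scored = [(torch.cosine_similarity(query_vec, embed(d), dim=0).item(), d)
              for d in documents]
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    docs = [
        "Troponin elevation after acute coronary artery occlusion.",
        "Dietary sodium and blood pressure control in adults.",
    ]
    for score, doc in rank("myocardial infarction", docs):
        print(f"{score:.3f}  {doc}")
```

Averaging the embeddings of the query term and its linked entities is only one simple fusion strategy; the paper's actual keyword extraction, entity linking, and ranking components are more elaborate than this sketch.
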
Related papers
- AutoMIR: Effective Zero-Shot Medical Information Retrieval without Relevance Labels [19.90354530235266]
We introduce a novel approach called Self-Learning Hypothetical Document Embeddings (SL-HyDE) to tackle this issue.
SL-HyDE leverages large language models (LLMs) as generators to generate hypothetical documents based on a given query.
We present the Chinese Medical Information Retrieval Benchmark (CMIRB), a comprehensive evaluation framework grounded in real-world medical scenarios.
arXiv Detail & Related papers (2024-10-26T02:53:20Z)
- uMedSum: A Unified Framework for Advancing Medical Abstractive Summarization [23.173826980480936]
Current methods often sacrifice key information for faithfulness or introduce confabulations when prioritizing informativeness.
This paper presents a benchmark of six advanced abstractive summarization methods across three diverse datasets using five standardized metrics.
We propose uMedSum, a modular hybrid summarization framework that introduces novel approaches for sequential confabulation removal followed by key missing information addition.
arXiv Detail & Related papers (2024-08-22T03:08:49Z)
- STLLaVA-Med: Self-Training Large Language and Vision Assistant for Medical Question-Answering [58.79671189792399]
STLLaVA-Med is designed to train a policy model capable of auto-generating medical visual instruction data.
We validate the efficacy and data efficiency of STLLaVA-Med across three major medical Visual Question Answering (VQA) benchmarks.
arXiv Detail & Related papers (2024-06-28T15:01:23Z)
- MedKP: Medical Dialogue with Knowledge Enhancement and Clinical Pathway Encoding [48.348511646407026]
We introduce the Medical dialogue with Knowledge enhancement and clinical Pathway encoding framework.
The framework integrates an external knowledge enhancement module through a medical knowledge graph and an internal clinical pathway encoding via medical entities and physician actions.
arXiv Detail & Related papers (2024-03-11T10:57:45Z)
- MKA: A Scalable Medical Knowledge Assisted Mechanism for Generative Models on Medical Conversation Tasks [3.9571320117430866]
The mechanism aims to help general neural generative models achieve better performance on medical conversation tasks.
The medical-specific knowledge graph is designed within the mechanism, which contains 6 types of medical-related information.
The evaluation results demonstrate that models combined with our mechanism outperform original methods in multiple automatic evaluation metrics.
arXiv Detail & Related papers (2023-12-05T04:55:54Z)
- How to Leverage Multimodal EHR Data for Better Medical Predictions? [13.401754962583771]
The complexity of electronic health records (EHR) data is a challenge for the application of deep learning.
In this paper, we first extract the accompanying clinical notes from EHR and propose a method to integrate these data.
The results on two medical prediction tasks show that our fused model with different data outperforms the state-of-the-art method.
arXiv Detail & Related papers (2021-10-29T13:26:05Z)
- MedPerf: Open Benchmarking Platform for Medical Artificial Intelligence using Federated Evaluation [110.31526448744096]
We argue that unlocking this potential requires a systematic way to measure the performance of medical AI models on large-scale heterogeneous data.
We are building MedPerf, an open framework for benchmarking machine learning in the medical domain.
arXiv Detail & Related papers (2021-09-29T18:09:41Z)
- Word-level Text Highlighting of Medical Texts for Telehealth Services [0.0]
This paper aims to show how different text highlighting techniques can capture relevant medical context.
Three different word-level text highlighting methodologies are implemented and evaluated.
The results of our experiments show that the neural network approach is successful in highlighting medically-relevant terms.
arXiv Detail & Related papers (2021-05-21T15:13:54Z)
- An Analysis of a BERT Deep Learning Strategy on a Technology Assisted Review Task [91.3755431537592]
Document screening is a central task within Evidenced Based Medicine.
I propose a DL document classification approach with BERT or PubMedBERT embeddings and a DL similarity search path.
I test and evaluate the retrieval effectiveness of my DL strategy on the 2017 and 2018 CLEF eHealth collections.
arXiv Detail & Related papers (2021-04-16T19:45:27Z)
- Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed to specific information systems that make the same information available under different modalities.
This offers unique opportunities to obtain and use at train-time those multiple views of the same information that might not always be available at test-time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time.
arXiv Detail & Related papers (2020-10-20T20:05:35Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)