Medical Information Retrieval and Interpretation: A Question-Answer based Interaction Model
- URL: http://arxiv.org/abs/2101.09662v1
- Date: Sun, 24 Jan 2021 07:01:06 GMT
- Title: Medical Information Retrieval and Interpretation: A Question-Answer based Interaction Model
- Authors: Nilanjan Sinhababu, Rahul Saxena, Monalisa Sarma and Debasis Samanta
- Abstract summary: The Internet has become a very powerful platform where diverse medical information is shared daily.
Current search engines and recommendation systems still lack the real-time interaction that could enable more precise result generation.
This paper proposes an intelligent and interactive system tied to the vast medical big data repository on the web.
- Score: 7.990816079551592
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Internet has become a very powerful platform where diverse medical
information is shared daily. Recently, a huge growth has been seen in searches
for symptoms, diseases, medicines, and many other health-related queries
around the globe. Search engines typically populate results using the single
query provided by the user, so reaching the final result may require a lot of
manual filtering on the user's end. Current search engines and recommendation
systems still lack the real-time interaction that could enable more precise
result generation. This paper proposes an intelligent and interactive system
tied to the vast medical big data repository on the web and illustrates its
potential in finding medical information.
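As a rough illustration of the contrast the abstract draws between single-query search and interactive refinement, the sketch below narrows an initial result set using the user's answer to a follow-up question. The toy corpus, tags, and function names are all hypothetical and are not taken from the paper.

```python
# Minimal sketch of question-answer based result refinement.
# The corpus, tags, and helper names are illustrative only.

CORPUS = [
    {"title": "Seasonal influenza overview", "tags": {"fever", "cough", "adult"}},
    {"title": "Pediatric fever guidance", "tags": {"fever", "child"}},
    {"title": "Migraine treatment options", "tags": {"headache", "adult"}},
]

def search(query_terms):
    """Single-shot search: return documents matching any query term."""
    return [d for d in CORPUS if d["tags"] & query_terms]

def refine(results, question_tag, answer):
    """Narrow results using the user's yes/no answer to a follow-up question."""
    if answer:  # keep documents that carry the asked-about tag
        return [d for d in results if question_tag in d["tags"]]
    return [d for d in results if question_tag not in d["tags"]]

results = search({"fever"})                 # initial single-query results
results = refine(results, "child", False)   # "Is the patient a child?" -> no
print([d["title"] for d in results])        # -> ['Seasonal influenza overview']
```

Each answered question plays the role of the "real time interaction" the abstract argues current engines lack: the result set shrinks without the user rewriting the query.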
Related papers
- Medical Vision-Language Pre-Training for Brain Abnormalities [96.1408455065347]
We show how to automatically collect medical image-text aligned data for pretraining from public resources such as PubMed.
In particular, we present a pipeline that streamlines the pre-training process by initially collecting a large brain image-text dataset.
We also investigate the unique challenge of mapping subfigures to subcaptions in the medical domain.
arXiv Detail & Related papers (2024-04-27T05:03:42Z)
- A Knowledge Graph-Based Search Engine for Robustly Finding Doctors and Locations in the Healthcare Domain [3.268887739788112]
Knowledge graphs (KGs) have emerged as a powerful way to glean insights from semi-structured data.
We present a KG-based search engine architecture for robustly finding doctors and locations in the healthcare domain.
arXiv Detail & Related papers (2023-10-08T18:28:17Z)
- Med-Flamingo: a Multimodal Medical Few-shot Learner [58.85676013818811]
We propose Med-Flamingo, a multimodal few-shot learner adapted to the medical domain.
Based on OpenFlamingo-9B, we continue pre-training on paired and interleaved medical image-text data from publications and textbooks.
We conduct the first human evaluation for generative medical VQA where physicians review the problems and blinded generations in an interactive app.
arXiv Detail & Related papers (2023-07-27T20:36:02Z)
- Towards Medical Artificial General Intelligence via Knowledge-Enhanced Multimodal Pretraining [121.89793208683625]
Medical artificial general intelligence (MAGI) enables one foundation model to solve different medical tasks.
We propose a new paradigm called Medical-knOwledge-enhanced mulTimOdal pretRaining (MOTOR).
arXiv Detail & Related papers (2023-04-26T01:26:19Z)
- ChatDoctor: A Medical Chat Model Fine-Tuned on a Large Language Model Meta-AI (LLaMA) Using Medical Domain Knowledge [8.584905227066034]
The aim of this research was to create a specialized language model with enhanced accuracy in medical advice.
We achieved this by adapting and refining the large language model Meta-AI (LLaMA) using a large dataset of 100,000 patient-doctor dialogues.
The fine-tuning of the model with real-world patient-doctor interactions significantly improved the model's ability to understand patient needs and provide informed advice.
arXiv Detail & Related papers (2023-03-24T15:29:16Z)
- Medical Question Understanding and Answering with Knowledge Grounding and Semantic Self-Supervision [53.692793122749414]
We introduce a medical question understanding and answering system with knowledge grounding and semantic self-supervision.
Our system is a pipeline that first summarizes a long, user-written medical question using a supervised summarization loss.
It then matches the summarized question with an FAQ from a trusted medical knowledge base and retrieves a fixed number of relevant sentences from the corresponding answer document.
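The three steps this summary describes (summarize the question, match an FAQ, retrieve answer sentences) can be sketched with simple word overlap standing in for the paper's learned components; the summarizer, FAQ base, and scoring below are illustrative stand-ins, not the authors' implementation.

```python
# Toy sketch of a summarize -> FAQ-match -> sentence-retrieval pipeline.
# All data and names here are hypothetical.

def summarize(question, max_words=8):
    """Stand-in for the supervised summarizer: keep the leading words."""
    return " ".join(question.lower().split()[:max_words])

FAQ_BASE = {
    "what causes high blood pressure": (
        "Hypertension has many causes. Diet and stress contribute. "
        "Genetics also play a role."
    ),
    "how is diabetes diagnosed": (
        "Diabetes is diagnosed with blood tests. Fasting glucose is common. "
        "HbA1c is also used."
    ),
}

def word_overlap(a, b):
    return len(set(a.split()) & set(b.split()))

def answer(user_question, k=2):
    summary = summarize(user_question)
    # Step 2: match the summarized question to the closest FAQ entry.
    faq = max(FAQ_BASE, key=lambda q: word_overlap(summary, q))
    # Step 3: retrieve the k answer sentences that overlap the summary most.
    sentences = [s.strip() for s in FAQ_BASE[faq].split(".") if s.strip()]
    sentences.sort(key=lambda s: word_overlap(summary, s.lower()), reverse=True)
    return faq, sentences[:k]
```

Replacing `summarize` with a trained summarizer and `word_overlap` with a semantic similarity model recovers the shape of the described system.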
arXiv Detail & Related papers (2022-09-30T08:20:32Z)
- LingYi: Medical Conversational Question Answering System based on Multi-modal Knowledge Graphs [35.55690461944328]
This paper presents a medical conversational question answering (CQA) system based on a multi-modal knowledge graph, namely "LingYi".
Our system supports automated medical procedures including medical triage, consultation, image-text drug recommendation, and record keeping.
arXiv Detail & Related papers (2022-04-20T04:41:26Z)
- Multimodal Machine Learning in Precision Health [10.068890037410316]
This review was conducted to summarize this field and identify topics ripe for future research.
We used a combination of content analysis and literature searches to establish search strings, querying the PubMed, Google Scholar, and IEEE Xplore databases for work from 2011 to 2021.
The most common form of information fusion was early fusion. Notably, predictive performance improved when heterogeneous data were fused.
arXiv Detail & Related papers (2022-04-10T21:56:07Z)
- Retrieving and ranking short medical questions with two stages neural matching model [3.8020157990268206]
80 percent of internet users have asked health-related questions online.
Those representative questions and answers in medical fields are valuable raw data sources for medical data mining.
We propose a novel two-stage framework for the semantic matching of query-level medical questions.
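A two-stage retrieve-then-rerank framework like the one mentioned can be sketched as follows; cheap word overlap stands in for the first-stage recall model and Jaccard similarity for the neural matcher, with all data and names hypothetical.

```python
# Minimal retrieve-then-rerank sketch for matching short medical questions.
# The question pool and both scorers are illustrative stand-ins.

QUESTIONS = [
    "what are the symptoms of flu",
    "how to treat a fever at home",
    "what are common flu symptoms in children",
    "best exercises for back pain",
]

def recall_stage(query, pool, k=3):
    """Stage 1: cheap lexical overlap to shortlist candidates."""
    def overlap(q):
        return len(set(query.split()) & set(q.split()))
    return sorted(pool, key=overlap, reverse=True)[:k]

def rerank_stage(query, candidates):
    """Stage 2: finer score; Jaccard similarity stands in for a neural matcher."""
    def jaccard(q):
        a, b = set(query.split()), set(q.split())
        return len(a & b) / len(a | b)
    return max(candidates, key=jaccard)

query = "flu symptoms"
shortlist = recall_stage(query, QUESTIONS)
best = rerank_stage(query, shortlist)
```

The split keeps the expensive matcher off most of the corpus: the first stage is run over everything, the second only over the shortlist.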
arXiv Detail & Related papers (2020-11-16T07:00:35Z)
- MedDG: An Entity-Centric Medical Consultation Dataset for Entity-Aware Medical Dialogue Generation [86.38736781043109]
We build and release MedDG, a large-scale, high-quality medical dialogue dataset covering 12 types of common gastrointestinal diseases.
We propose two kinds of medical dialogue tasks based on the MedDG dataset: one is next-entity prediction and the other is doctor response generation.
Experimental results show that pre-trained language models and other baselines struggle on both tasks, performing poorly on our dataset.
arXiv Detail & Related papers (2020-10-15T03:34:33Z)
- BiteNet: Bidirectional Temporal Encoder Network to Predict Medical Outcomes [53.163089893876645]
We propose a novel self-attention mechanism that captures the contextual dependency and temporal relationships within a patient's healthcare journey.
An end-to-end bidirectional temporal encoder network (BiteNet) then learns representations of the patient's journeys.
We have evaluated the effectiveness of our methods on two supervised prediction and two unsupervised clustering tasks with a real-world EHR dataset.
arXiv Detail & Related papers (2020-09-24T00:42:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.