Are Generative AI Systems Capable of Supporting Information Needs of Patients?
- URL: http://arxiv.org/abs/2402.00234v1
- Date: Wed, 31 Jan 2024 23:24:37 GMT
- Title: Are Generative AI Systems Capable of Supporting Information Needs of Patients?
- Authors: Shreya Rajagopal, Subhashis Hazarika, Sookyung Kim, Yan-ming Chiou,
Jae Ho Sohn, Hari Subramonyam, Shiwali Mohan
- Abstract summary: We investigate whether and how generative visual question answering systems can responsibly support patient information needs in the context of radiology imaging data.
We conducted a formative need-finding study in which participants discussed chest computed tomography (CT) scans and associated radiology reports of a fictitious close relative with a cardiothoracic radiologist.
Using thematic analysis of the conversation between participants and medical experts, we identified commonly occurring themes across interactions.
We evaluate two state-of-the-art generative visual language models against the radiologist's responses.
- Score: 4.485098382568721
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Patients managing a complex illness such as cancer face a daunting
information challenge: they must learn not only about their illness but also
how to manage it. Close interaction with healthcare experts (radiologists,
oncologists) can improve patient learning and, thereby, their disease outcomes.
However, this approach is resource intensive and takes expert time away from
other critical tasks. Given the recent advancements in Generative AI models
aimed at improving the healthcare system, our work investigates whether and how
generative visual question answering systems can responsibly support patient
information needs in the context of radiology imaging data. We conducted a
formative need-finding study in which participants discussed chest computed
tomography (CT) scans and associated radiology reports of a fictitious close
relative with a cardiothoracic radiologist. Using thematic analysis of the
conversation between participants and medical experts, we identified commonly
occurring themes across interactions, including clarifying medical terminology,
locating the problems mentioned in the report in the scanned image,
understanding disease prognosis, discussing the next diagnostic steps, and
comparing treatment options. Based on these themes, we evaluated two
state-of-the-art generative visual language models against the radiologist's
responses. Our results reveal variability in the quality of responses generated
by the models across various themes. We highlight the importance of
patient-facing generative AI systems to accommodate a diverse range of
conversational themes, catering to the real-world informational needs of
patients.
Related papers
- Designing a Robust Radiology Report Generation System [1.0878040851637998]
This paper outlines the design of a robust radiology report generation system by integrating different modules and highlighting best practices.
We believe that these best practices could improve automatic radiology report generation, augment radiologists in decision making, and expedite diagnostic workflow.
arXiv Detail & Related papers (2024-11-02T06:38:04Z)
- A Two-Stage Proactive Dialogue Generator for Efficient Clinical Information Collection Using Large Language Model [0.6926413609535759]
We propose a diagnostic dialogue system to automate the patient information collection procedure.
By exploiting medical history and conversation logic, our conversation agents can pose multi-round clinical queries.
Our experimental results on a real-world medical conversation dataset show that our model can generate clinical queries that mimic the conversation style of real doctors.
arXiv Detail & Related papers (2024-10-02T19:32:11Z)
- ReXplain: Translating Radiology into Patient-Friendly Video Reports [5.787653511498558]
ReXplain is an AI-driven system that generates patient-friendly video reports for radiology findings.
Our proof-of-concept study with five board-certified radiologists indicates that ReXplain could accurately deliver radiological information.
arXiv Detail & Related papers (2024-10-01T06:41:18Z)
- Dr-LLaVA: Visual Instruction Tuning with Symbolic Clinical Grounding [53.629132242389716]
Vision-Language Models (VLM) can support clinicians by analyzing medical images and engaging in natural language interactions.
VLMs often exhibit "hallucinogenic" behavior, generating textual outputs not grounded in contextual multimodal information.
We propose a new alignment algorithm that uses symbolic representations of clinical reasoning to ground VLMs in medical knowledge.
arXiv Detail & Related papers (2024-05-29T23:19:28Z)
- Towards Knowledge-Infused Automated Disease Diagnosis Assistant [14.150224660741939]
We build a diagnosis assistant to assist doctors, which identifies diseases based on patient-doctor interaction.
We propose a two-channel, discourse-aware disease diagnosis model (KI-DDI), where the first channel encodes patient-doctor communication.
In the next stage, the conversation and knowledge graph embeddings are infused together and fed to a deep neural network for disease identification.
arXiv Detail & Related papers (2024-05-18T05:18:50Z)
- Knowledge-Informed Machine Learning for Cancer Diagnosis and Prognosis: A Review [2.2268038840298714]
We review the state-of-the-art machine learning studies that adopted the fusion of biomedical knowledge and data.
We provide an overview of diverse forms of knowledge representation and current strategies of knowledge integration into machine learning pipelines.
arXiv Detail & Related papers (2024-01-12T07:01:36Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- XrayGPT: Chest Radiographs Summarization using Medical Vision-Language Models [60.437091462613544]
We introduce XrayGPT, a novel conversational medical vision-language model.
It can analyze and answer open-ended questions about chest radiographs.
We generate 217k interactive and high-quality summaries from free-text radiology reports.
arXiv Detail & Related papers (2023-06-13T17:59:59Z)
- Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, the Show-Attend-Tell and the GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, as well as the general-purpose MS-COCO.
arXiv Detail & Related papers (2022-09-28T10:27:10Z)
- MIMO: Mutual Integration of Patient Journey and Medical Ontology for Healthcare Representation Learning [49.57261599776167]
We propose an end-to-end robust Transformer-based solution, Mutual Integration of patient journey and Medical Ontology (MIMO) for healthcare representation learning and predictive analytics.
arXiv Detail & Related papers (2021-07-20T07:04:52Z)
- MedDG: An Entity-Centric Medical Consultation Dataset for Entity-Aware Medical Dialogue Generation [86.38736781043109]
We build and release a large-scale high-quality Medical Dialogue dataset related to 12 types of common Gastrointestinal diseases named MedDG.
We propose two kinds of medical dialogue tasks based on MedDG dataset. One is the next entity prediction and the other is the doctor response generation.
Experimental results show that pre-trained language models and other baselines struggle on both tasks, performing poorly on our dataset.
arXiv Detail & Related papers (2020-10-15T03:34:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.