Revealing Patient-Reported Experiences in Healthcare from Social Media
using the DAPMAV Framework
- URL: http://arxiv.org/abs/2210.04232v2
- Date: Mon, 11 Dec 2023 03:52:57 GMT
- Authors: Curtis Murray, Lewis Mitchell, Jonathan Tuke, Mark Mackay
- Abstract summary: We introduce the Design-Acquire-Process-Model-Analyse-Visualise (DAPMAV) framework to provide an overview of techniques and an approach to capture patient-reported experiences from social media data.
We apply this framework in a case study on prostate cancer data from /r/ProstateCancer.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding patient experience in healthcare is increasingly important and
desired by medical professionals in a patient-centered care approach.
Healthcare discourse on social media presents an opportunity to gain a unique
perspective on patient-reported experiences, complementing traditional survey
data. These social media reports often appear as first-hand accounts of patients' journeys through the healthcare system, whose details extend beyond the confines of structured surveys and whose scale far exceeds that of focus groups.
However, despite the vast amount of patient-experience data on social media and the potential benefits it offers, this data attracts comparatively little research attention due to the technical proficiency required for text analysis. In this paper, we introduce the
Design-Acquire-Process-Model-Analyse-Visualise (DAPMAV) framework to provide an
overview of techniques and an approach to capture patient-reported experiences
from social media data. We apply this framework in a case study on prostate
cancer data from /r/ProstateCancer, demonstrate the framework's value in
capturing specific aspects of patient concern (such as sexual dysfunction),
provide an overview of the discourse, and show narrative and emotional
progression through these stories. We anticipate that this framework will apply to a wide variety of areas in healthcare, including capturing and differentiating
experiences across minority groups, geographic boundaries, and types of
illnesses.
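To make the six DAPMAV stages concrete, here is a minimal, self-contained Python sketch of such a pipeline. The stage names follow the paper; everything else is an illustrative stand-in under stated assumptions: the two placeholder posts are invented, the Acquire step would in practice query a Reddit API client, and simple term frequencies stand in for the paper's topic, sentiment, and narrative models.

```python
# A minimal sketch of a DAPMAV-style pipeline. Stage names follow the paper;
# the placeholder posts and the frequency-count "model" are illustrative
# stand-ins, not the authors' implementation.
import re
from collections import Counter

def design(question: str) -> dict:
    # Design: frame the research question and choose the data source.
    return {"question": question, "source": "/r/ProstateCancer"}

def acquire(study: dict) -> list[str]:
    # Acquire: collect posts. Real use would query a Reddit API client;
    # placeholder text keeps this sketch self-contained.
    return [
        "Post-surgery I struggled with side effects and felt anxious.",
        "My doctor explained the treatment options clearly.",
    ]

def process(posts: list[str]) -> list[list[str]]:
    # Process: normalise and tokenise the free text.
    return [re.findall(r"[a-z']+", post.lower()) for post in posts]

def model(docs: list[list[str]]) -> Counter:
    # Model: term frequencies stand in for topic/sentiment/narrative models.
    return Counter(token for doc in docs for token in doc)

def analyse(frequencies: Counter, k: int = 5) -> list[tuple[str, int]]:
    # Analyse: surface the most prominent terms as candidate patient concerns.
    return frequencies.most_common(k)

def visualise(top_terms: list[tuple[str, int]]) -> None:
    # Visualise: a printed text summary stands in for the paper's plots.
    for term, count in top_terms:
        print(f"{term}: {count}")

if __name__ == "__main__":
    study = design("What do patients report about prostate cancer care?")
    visualise(analyse(model(process(acquire(study)))))
```

Because each stage only consumes the previous stage's output, any stand-in (for example, swapping term frequencies for a topic model in the Model stage) can be replaced without disturbing the rest of the pipeline.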
Related papers
- ViKL: A Mammography Interpretation Framework via Multimodal Aggregation of Visual-knowledge-linguistic Features [54.37042005469384]
We announce MVKL, the first multimodal mammography dataset encompassing multi-view images, detailed manifestations and reports.
Based on this dataset, we focus on the challenging task of unsupervised pretraining.
We propose ViKL, a framework that synergizes Visual, Knowledge, and Linguistic features.
arXiv Detail & Related papers (2024-09-24T05:01:23Z)
- A Comparative Study on Patient Language across Therapeutic Domains for Effective Patient Voice Classification in Online Health Discussions [0.48124799513933847]
In this study, we analyse the importance of linguistic characteristics in accurately classifying patient voices.
We fine-tuned a pre-trained Language Model on the combined datasets with similar linguistic patterns, resulting in a highly accurate automatic patient voice classification.
As a pioneering study on this topic, our work on extracting authentic patient experiences from social media stands as a crucial step towards advancing healthcare standards.
arXiv Detail & Related papers (2024-07-23T15:51:46Z)
- Optimizing Skin Lesion Classification via Multimodal Data and Auxiliary Task Integration [54.76511683427566]
This research introduces a novel multimodal method for classifying skin lesions, integrating smartphone-captured images with essential clinical and demographic information.
A distinctive aspect of this method is the integration of an auxiliary task focused on super-resolution image prediction.
Experimental evaluations were conducted on the PAD-UFES20 dataset using various deep-learning architectures.
arXiv Detail & Related papers (2024-02-16T05:16:20Z)
- Are Generative AI systems Capable of Supporting Information Needs of Patients? [4.485098382568721]
We investigate whether and how generative visual question answering systems can responsibly support patient information needs in the context of radiology imaging data.
We conducted a formative need-finding study in which participants discussed chest computed tomography (CT) scans and associated radiology reports of a fictitious close relative with a cardiothoracic radiologist.
Using thematic analysis of the conversation between participants and medical experts, we identified commonly occurring themes across interactions.
We evaluate two state-of-the-art generative visual language models against the radiologist's responses.
arXiv Detail & Related papers (2024-01-31T23:24:37Z)
- Yes, this is what I was looking for! Towards Multi-modal Medical Consultation Concern Summary Generation [46.42604861624895]
We propose a new task of multi-modal medical concern summary generation.
Nonverbal cues, such as patients' gestures and facial expressions, aid in accurately identifying patients' concerns.
We construct the first multi-modal medical concern summary generation corpus.
arXiv Detail & Related papers (2024-01-10T12:56:47Z)
- Probabilistic emotion and sentiment modelling of patient-reported experiences [0.04096453902709291]
This study introduces a novel methodology for modelling patient emotions from online patient experience narratives.
We employ metadata network topic modelling to analyse patient-reported experiences from Care Opinion.
We develop a probabilistic, context-specific emotion recommender system capable of predicting both multilabel emotions and binary sentiments (a minimal illustrative sketch appears after this list).
arXiv Detail & Related papers (2024-01-09T05:39:20Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- MedNgage: A Dataset for Understanding Engagement in Patient-Nurse Conversations [4.847266237348932]
Patients who effectively manage their symptoms often demonstrate higher levels of engagement in conversations and interventions with healthcare practitioners.
It is crucial for AI systems to understand the engagement in natural conversations between patients and practitioners to better contribute toward patient care.
We present a novel dataset (MedNgage) which consists of patient-nurse conversations about cancer symptom management.
arXiv Detail & Related papers (2023-05-31T16:06:07Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- BiteNet: Bidirectional Temporal Encoder Network to Predict Medical Outcomes [53.163089893876645]
We propose a novel self-attention mechanism that captures the contextual dependency and temporal relationships within a patient's healthcare journey.
An end-to-end bidirectional temporal encoder network (BiteNet) then learns representations of the patient's journeys.
We evaluated the effectiveness of our methods on two supervised prediction and two unsupervised clustering tasks with a real-world EHR dataset (see the self-attention sketch after this list).
arXiv Detail & Related papers (2020-09-24T00:42:36Z)
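Two of the entries above describe modelling setups concrete enough to sketch. First, for the probabilistic emotion and sentiment paper, the sketch below shows one way to predict multilabel emotions with per-label probabilities alongside a binary sentiment. It is a minimal illustration under stated assumptions: the four narratives and emotion labels are invented, and scikit-learn's one-vs-rest logistic regression stands in for the authors' metadata network topic modelling.

```python
# A minimal sketch of probabilistic multilabel-emotion and binary-sentiment
# prediction. The tiny dataset and label set are hypothetical; scikit-learn
# stands in for the authors' actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

narratives = [
    "The nurses were kind and reassuring throughout my stay.",
    "I waited hours in pain and nobody told me what was happening.",
    "Relieved after the scan, though the waiting room was chaotic.",
    "Grateful for the surgeon, but discharge was confusing.",
]
emotion_labels = [["gratitude", "calm"], ["anger", "fear"],
                  ["relief"], ["gratitude", "confusion"]]
sentiments = [1, 0, 1, 1]  # 1 = positive, 0 = negative

# Encode the emotion label sets as a binary indicator matrix.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(emotion_labels)

# One independent logistic regression per emotion yields a probability
# for each label rather than forcing a single emotion per narrative.
emotion_clf = make_pipeline(
    TfidfVectorizer(), OneVsRestClassifier(LogisticRegression(max_iter=1000))
)
emotion_clf.fit(narratives, Y)

sentiment_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
sentiment_clf.fit(narratives, sentiments)

text = ["The staff explained everything and I felt safe."]
probs = emotion_clf.predict_proba(text)[0]
for label, p in sorted(zip(mlb.classes_, probs), key=lambda x: -x[1]):
    print(f"{label}: {p:.2f}")
print("P(positive) =", sentiment_clf.predict_proba(text)[0][1])
```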
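Second, for the BiteNet entry, the sketch below illustrates what a bidirectional temporal encoder over a patient's visit sequence can look like. This is a minimal stand-in, assuming synthetic visit codes and an invented binary outcome; a generic PyTorch transformer encoder (bidirectional because no causal mask is applied) replaces BiteNet's hierarchical self-attention, and all dimensions are arbitrary illustrative choices.

```python
# A minimal sketch, assuming synthetic data: a bidirectional self-attention
# encoder over a patient's coded visit sequence. This is a generic stand-in,
# not the BiteNet architecture itself.
import torch
import torch.nn as nn

class VisitEncoder(nn.Module):
    def __init__(self, n_codes: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(n_codes, d_model)  # medical-code embeddings
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True
        )
        # Without a causal mask, the encoder attends in both directions,
        # capturing contextual dependencies across the whole journey.
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)  # e.g. a hypothetical readmission risk

    def forward(self, codes: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(codes))              # (batch, visits, d_model)
        return torch.sigmoid(self.head(h.mean(dim=1)))   # pool over visits

model = VisitEncoder(n_codes=500)
visits = torch.randint(0, 500, (8, 12))  # 8 patients, 12 coded visits each
print(model(visits).shape)  # torch.Size([8, 1])
```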
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.