Detecting Visual Cues in the Intensive Care Unit and Association with Patient Clinical Status
- URL: http://arxiv.org/abs/2311.00565v2
- Date: Fri, 12 Jul 2024 15:05:24 GMT
- Title: Detecting Visual Cues in the Intensive Care Unit and Association with Patient Clinical Status
- Authors: Subhash Nerella, Ziyuan Guan, Andrea Davidson, Yuanfang Ren, Tezcan Baslanti, Brooke Armfield, Patrick Tighe, Azra Bihorac, Parisa Rashidi
- Abstract summary: Existing patient assessments in the ICU are mostly sporadic and administered manually.
We developed a new "masked loss computation" technique that addresses the data imbalance problem.
We performed AU inference on 634,054 frames to evaluate the association between facial AUs and clinically important patient conditions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intensive Care Units (ICU) provide close supervision and continuous care to patients with life-threatening conditions. However, continuous patient assessment in the ICU is still limited due to time constraints and the workload on healthcare providers. Existing patient assessments in the ICU such as pain or mobility assessment are mostly sporadic and administered manually, thus introducing the potential for human errors. Developing Artificial Intelligence (AI) tools that can augment human assessments in the ICU can be beneficial for providing more objective and granular monitoring capabilities. For example, capturing the variations in a patient's facial cues related to pain or agitation can help in adjusting pain-related medications or detecting agitation-inducing conditions such as delirium. Additionally, subtle changes in visual cues during or prior to adverse clinical events could potentially aid in continuous patient monitoring when combined with high-resolution physiological signals and Electronic Health Record (EHR) data. In this paper, we examined the association between visual cues and patient condition including acuity status, acute brain dysfunction, and pain. We leveraged our AU-ICU dataset with 107,064 frames collected in the ICU annotated with facial action unit (AU) labels by trained annotators. We developed a new "masked loss computation" technique that addresses the data imbalance problem by maximizing data resource utilization. We trained the model using our AU-ICU dataset in conjunction with three external datasets to detect 18 AUs. The SWIN Transformer model achieved 0.57 mean F1-score and 0.89 mean accuracy on the test set. Additionally, we performed AU inference on 634,054 frames to evaluate the association between facial AUs and clinically important patient conditions such as acuity status, acute brain dysfunction, and pain.
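The "masked loss computation" technique is described only at a high level in the abstract. A minimal sketch of one plausible interpretation: a per-frame binary mask marks which AUs were actually annotated, and the loss is averaged over annotated entries only, so frames with partial labels still contribute. The function name, shapes, and mask convention below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def masked_bce_loss(probs, labels, mask, eps=1e-7):
    """Binary cross-entropy averaged only over annotated entries.

    probs:  (N, K) predicted AU probabilities for N frames and K action units
    labels: (N, K) binary AU labels; entries where mask == 0 are ignored
    mask:   (N, K) 1 where an AU label exists for that frame, 0 otherwise
    """
    probs = np.clip(probs, eps, 1.0 - eps)  # avoid log(0)
    bce = -(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    # Zero out unannotated entries, then normalize by the annotated count
    # rather than by N * K, so missing labels do not dilute the loss.
    return float((bce * mask).sum() / mask.sum())

# Example: the second AU of this frame is unannotated and contributes nothing.
loss = masked_bce_loss(
    np.array([[0.9, 0.1]]),
    np.array([[1.0, 1.0]]),
    np.array([[1.0, 0.0]]),
)
```

Normalizing by `mask.sum()` is what lets partially annotated external datasets be pooled with the fully annotated AU-ICU frames, which matches the abstract's claim of "maximizing data resource utilization."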
Related papers
- Leveraging Computer Vision in the Intensive Care Unit (ICU) for Examining Visitation and Mobility [12.347067736902094]
We leverage a state-of-the-art noninvasive computer vision system based on depth imaging to characterize ICU visitations and patients' mobility.
We found that deteriorating patient acuity and the incidence of delirium were associated with increased visitations.
Our findings highlight the feasibility and potential of using noninvasive autonomous systems to monitor ICU patients.
arXiv Detail & Related papers (2024-03-10T21:43:47Z)
- Evaluating the Fairness of the MIMIC-IV Dataset and a Baseline Algorithm: Application to the ICU Length of Stay Prediction [65.268245109828]
This paper uses the MIMIC-IV dataset to examine the fairness and bias in an XGBoost binary classification model predicting the ICU length of stay.
The research reveals class imbalances in the dataset across demographic attributes and employs data preprocessing and feature extraction.
The paper concludes with recommendations for fairness-aware machine learning techniques for mitigating biases and the need for collaborative efforts among healthcare professionals and data scientists.
arXiv Detail & Related papers (2023-12-31T16:01:48Z)
- The Potential of Wearable Sensors for Assessing Patient Acuity in Intensive Care Unit (ICU) [12.359907390320453]
Acuity assessments are vital in critical care settings to provide timely interventions and fair resource allocation.
Traditional acuity scores do not incorporate granular information such as patients' mobility level, which can indicate recovery or deterioration in the ICU.
In this study, we evaluated the impact of integrating mobility data collected from wrist-worn accelerometers with clinical data obtained from EHR for developing an AI-driven acuity assessment score.
arXiv Detail & Related papers (2023-11-03T21:52:05Z)
- AI-Enhanced Intensive Care Unit: Revolutionizing Patient Care with Pervasive Sensing [2.7503982558916906]
The intensive care unit (ICU) is a specialized hospital space where critically ill patients receive intensive care and monitoring.
Comprehensive monitoring is imperative in assessing patients' conditions, in particular acuity, and ultimately the quality of care.
Currently, visual assessments for acuity, including fine details such as facial expressions, posture, and mobility, are sporadically captured, or not captured at all.
arXiv Detail & Related papers (2023-03-11T00:25:55Z)
- End-to-End Machine Learning Framework for Facial AU Detection in Intensive Care Units [3.8168092489216385]
Pain is a common occurrence among patients admitted to Intensive Care Units.
Current manual observation-based pain assessment tools are limited by the frequency of pain observations administered and are subjective to the observer.
We present our Pain-ICU dataset, the largest dataset available targeting facial behavior analysis in the dynamic ICU environment.
arXiv Detail & Related papers (2022-11-12T04:43:16Z)
- Predicting Patient Readmission Risk from Medical Text via Knowledge Graph Enhanced Multiview Graph Convolution [67.72545656557858]
We propose a new method that uses medical text of Electronic Health Records for prediction.
We represent discharge summaries of patients with multiview graphs enhanced by an external knowledge graph.
Experimental results demonstrate the effectiveness of our method, yielding state-of-the-art performance.
arXiv Detail & Related papers (2021-12-19T01:45:57Z)
- Advancing COVID-19 Diagnosis with Privacy-Preserving Collaboration in Artificial Intelligence [79.038671794961]
We launch the Unified CT-COVID AI Diagnostic Initiative (UCADI), where the AI model can be distributedly trained and independently executed at each host institution.
Our study is based on 9,573 chest computed tomography scans (CTs) from 3,336 patients collected from 23 hospitals located in China and the UK.
arXiv Detail & Related papers (2021-11-18T00:43:41Z)
- Integrative Analysis for COVID-19 Patient Outcome Prediction [53.11258640541513]
We combine radiomics of lung opacities and non-imaging features from demographic data, vital signs, and laboratory findings to predict need for intensive care unit admission.
Our methods may also be applied to other lung diseases including but not limited to community acquired pneumonia.
arXiv Detail & Related papers (2020-07-20T19:08:50Z)
- Detecting Parkinsonian Tremor from IMU Data Collected In-The-Wild using Deep Multiple-Instance Learning [59.74684475991192]
Parkinson's Disease (PD) is a slowly evolving neurological disease that affects about 1% of the population above 60 years old.
PD symptoms include tremor, rigidity, and bradykinesia.
We present a method for automatically identifying tremorous episodes related to PD, based on IMU signals captured via a smartphone device.
arXiv Detail & Related papers (2020-05-06T09:02:30Z)
- Facial Action Unit Detection on ICU Data for Pain Assessment [1.8352113484137622]
Current pain assessment methods rely on patient self-report or on observation by caregivers such as Intensive Care Unit (ICU) nurses.
In this study, we show that an automated pain assessment system must be trained on real-world ICU data to achieve clinically acceptable performance.
arXiv Detail & Related papers (2020-04-24T17:12:56Z)
- Estimating Counterfactual Treatment Outcomes over Time Through Adversarially Balanced Representations [114.16762407465427]
We introduce the Counterfactual Recurrent Network (CRN) to estimate treatment effects over time.
CRN uses domain adversarial training to build balancing representations of the patient history.
We show how our model achieves lower error in estimating counterfactuals and in choosing the correct treatment and timing of treatment.
arXiv Detail & Related papers (2020-02-10T20:47:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.