Learning predictive checklists from continuous medical data
- URL: http://arxiv.org/abs/2211.07076v1
- Date: Mon, 14 Nov 2022 02:51:04 GMT
- Title: Learning predictive checklists from continuous medical data
- Authors: Yukti Makhija and Edward De Brouwer and Rahul G. Krishnan
- Abstract summary: Checklists are highly popular in daily clinical practice because they combine effectiveness with strong interpretability.
Recent works have taken a step in that direction by learning predictive checklists from categorical data.
We show that this extension outperforms a range of explainable machine learning baselines on the prediction of sepsis.
- Score: 5.37133760455631
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Checklists, although only recently introduced in the medical domain, have
become highly popular in daily clinical practice because they combine
effectiveness with strong interpretability. Checklists are usually designed by
expert clinicians who manually collect and analyze the available evidence.
However, the increasing quantity of available medical data calls for a
partially automated checklist design. Recent works have taken a step in that
direction by learning predictive checklists from categorical data. In this
work, we propose to extend this approach to accommodate learning checklists from
continuous medical data using a mixed-integer programming approach. We show that
this extension outperforms a range of explainable machine learning baselines on
the prediction of sepsis from intensive care clinical trajectories.
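The core idea can be illustrated with a minimal sketch: each continuous measurement is binarized by a threshold, and a patient is flagged when at least M checklist items are checked. The feature names and thresholds below are invented for illustration; the paper learns the item selection and thresholds with a mixed-integer program, which this sketch does not reproduce.

```python
# Hedged sketch: a predictive checklist over continuous vitals.
# Each item checks one measurement against a threshold; a patient is
# flagged when at least M items are satisfied. Thresholds are
# illustrative placeholders, not values learned by the paper's method.

CHECKLIST = [
    ("heart_rate", lambda v: v > 100),    # tachycardia
    ("temperature", lambda v: v > 38.0),  # fever (deg C)
    ("resp_rate", lambda v: v > 22),      # tachypnea
    ("lactate", lambda v: v > 2.0),       # elevated lactate
]
M = 2  # minimum number of checked items required to flag a patient

def checklist_predict(patient: dict) -> bool:
    """Return True when at least M checklist items are checked."""
    checked = sum(rule(patient[name]) for name, rule in CHECKLIST)
    return checked >= M

patient = {"heart_rate": 112, "temperature": 38.6,
           "resp_rate": 18, "lactate": 1.4}
print(checklist_predict(patient))  # two items checked -> True
```

Because each item is a simple threshold test, a clinician can read the fitted checklist directly, which is the interpretability advantage the abstract refers to.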
Related papers
- Learning Predictive Checklists with Probabilistic Logic Programming [17.360186431981592]
We propose a novel method for learning predictive checklists from diverse data modalities, such as images and time series.
Our approach relies on probabilistic logic programming, a learning paradigm that enables matching the discrete nature of checklists with continuous-valued data.
We demonstrate that our method outperforms various explainable machine learning techniques on prediction tasks involving image sequences, time series, and clinical notes.
arXiv Detail & Related papers (2024-11-25T09:07:19Z)
- A Survey of the Impact of Self-Supervised Pretraining for Diagnostic Tasks with Radiological Images [71.26717896083433]
Self-supervised pretraining has been observed to be effective at improving feature representations for transfer learning.
This review summarizes recent research into its usage in X-ray, computed tomography, magnetic resonance, and ultrasound imaging.
arXiv Detail & Related papers (2023-09-05T19:45:09Z)
- Conceptualizing Machine Learning for Dynamic Information Retrieval of Electronic Health Record Notes [6.1656026560972]
This work conceptualizes the use of EHR audit logs for machine learning as a source of supervision of note relevance in a specific clinical context.
We show that our methods can achieve an AUC of 0.963 for predicting which notes will be read in an individual note writing session.
arXiv Detail & Related papers (2023-08-09T21:04:19Z)
- Interpretable Medical Diagnostics with Structured Data Extraction by Large Language Models [59.89454513692417]
Tabular data is often hidden in text, particularly in medical diagnostic reports.
We propose a novel, simple, and effective methodology for extracting structured tabular data from textual medical reports, called TEMED-LLM.
We demonstrate that our approach significantly outperforms state-of-the-art text classification models in medical diagnostics.
arXiv Detail & Related papers (2023-06-08T09:12:28Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task and class incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- Federated Cycling (FedCy): Semi-supervised Federated Learning of Surgical Phases [57.90226879210227]
FedCy is a federated semi-supervised learning (FSSL) method that combines federated learning (FL) and self-supervised learning to exploit a decentralized dataset of both labeled and unlabeled videos.
We demonstrate significant performance gains over state-of-the-art FSSL methods on the task of automatic recognition of surgical phases.
arXiv Detail & Related papers (2022-03-14T17:44:53Z)
- Learning Optimal Predictive Checklists [22.91829410102425]
We represent predictive checklists as discrete linear classifiers with binary features and unit weights.
We then learn globally optimal predictive checklists from data by solving an integer programming problem.
Our results show that our method can fit simple predictive checklists that perform well and that can easily be customized to obey a rich class of custom constraints.
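The formulation above treats a checklist as a discrete linear classifier with binary features and unit weights: choose a subset of items and a threshold M, and predict positive when at least M chosen items are on. On toy scale this can be sketched by exhaustive search in place of the paper's globally optimal integer program; the data and labels below are invented for illustration.

```python
# Hedged sketch: fit a tiny checklist (item subset plus threshold M)
# by brute force over binary features. The paper solves this globally
# with integer programming; exhaustive search stands in for it here.
from itertools import combinations

# Toy binary feature matrix (rows: patients, columns: candidate items)
X = [
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 0],
]
y = [1, 0, 1, 0]  # toy labels

def accuracy(items, M):
    """Accuracy of the checklist: predict 1 when >= M items are on."""
    preds = [int(sum(row[j] for j in items) >= M) for row in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

# Enumerate every item subset and threshold; keep the most accurate.
best = max(
    ((items, M) for k in range(1, 5)
     for items in combinations(range(4), k)
     for M in range(1, k + 1)),
    key=lambda im: accuracy(*im),
)
print(best, accuracy(*best))
```

The integer program replaces this exponential enumeration with a solver that also supports the custom constraints (e.g. limits on checklist length or per-group fairness) mentioned in the entry.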
arXiv Detail & Related papers (2021-12-02T07:15:28Z)
- Attention-based Clinical Note Summarization [1.52292571922932]
We propose a multi-head attention-based mechanism to perform extractive summarization of meaningful phrases in clinical notes.
This method finds major sentences for a summary by correlating tokens, segments and positional embeddings.
arXiv Detail & Related papers (2021-04-18T19:40:26Z)
- Does the Magic of BERT Apply to Medical Code Assignment? A Quantitative Study [2.871614744079523]
It is not clear if pretrained models are useful for medical code prediction without further architecture engineering.
We propose a hierarchical fine-tuning architecture to capture interactions between distant words and adopt label-wise attention to exploit label information.
Contrary to current trends, we demonstrate that a carefully trained classical CNN outperforms attention-based models on a MIMIC-III subset with frequent codes.
arXiv Detail & Related papers (2021-03-11T07:23:45Z)
- BiteNet: Bidirectional Temporal Encoder Network to Predict Medical Outcomes [53.163089893876645]
We propose a novel self-attention mechanism that captures the contextual dependency and temporal relationships within a patient's healthcare journey.
An end-to-end bidirectional temporal encoder network (BiteNet) then learns representations of the patient's journeys.
We have evaluated the effectiveness of our methods on two supervised prediction and two unsupervised clustering tasks with a real-world EHR dataset.
arXiv Detail & Related papers (2020-09-24T00:42:36Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.