Patients' Severity States Classification based on Electronic Health
Record (EHR) Data using Multiple Machine Learning and Deep Learning
Approaches
- URL: http://arxiv.org/abs/2209.14907v1
- Date: Thu, 29 Sep 2022 16:14:02 GMT
- Title: Patients' Severity States Classification based on Electronic Health
Record (EHR) Data using Multiple Machine Learning and Deep Learning
Approaches
- Authors: A. N. M. Sajedul Alam, Rimi Reza, Asir Abrar, Tanvir Ahmed, Salsabil
Ahmed, Shihab Sharar, Annajiat Alim Rasel
- Abstract summary: This research presents an examination of categorizing the severity states of patients based on their electronic health records.
The suggested method uses an EHR dataset collected from an open-source platform to categorize severity.
- Score: 0.8312466807725921
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This research presents an examination of categorizing the severity states of
patients based on their electronic health records during a certain time range
using multiple machine learning and deep learning approaches. The suggested
method uses an EHR dataset collected from an open-source platform to categorize
severity. Several tools were used in this research: OpenRefine for pre-processing, RapidMiner for implementing three algorithms (Fast Large Margin, Generalized Linear Model, and Multi-layer Feed-forward Neural Network), Tableau for visualizing the data, and Google Colab for implementing the remaining algorithms. We implemented several supervised and unsupervised algorithms, along with semi-supervised and deep learning algorithms. The experimental results reveal that the hyperparameter-tuned Random Forest outperformed all other supervised machine learning algorithms with 76% accuracy, while the Generalized Linear Model achieved the highest precision score of 78%; among the unsupervised approaches, the hyperparameter-tuned Hierarchical Clustering (86% precision) and the Gaussian Mixture Model (61% accuracy) performed best. Dimensionality reduction substantially improved the results of most unsupervised techniques. For deep learning we employed a multi-layer feed-forward neural network, and for semi-supervised learning we used the Fast Large Margin approach. The Fast Large Margin performed well, with a recall score of 84% and an F1 score of 78%. Finally, the Multi-layer Feed-forward Neural Network performed admirably with 75% accuracy, 75% precision, 87% recall, and an 81% F1 score.
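The paper does not publish code, so the following is a minimal sketch of the supervised path, assuming a scikit-learn-style workflow in which X is a synthetic placeholder for the preprocessed EHR feature matrix and y stands in for binary severity labels; the grid-search space is illustrative, not the authors' actual tuning setup.

```python
# Minimal sketch (not the authors' code): hyperparameter-tuned Random Forest
# on tabular EHR-style features. X and y are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.random((500, 20))              # placeholder EHR feature matrix
y = rng.integers(0, 2, size=500)       # placeholder binary severity labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Grid search is one common tuning choice; the paper does not specify its grid.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    scoring="accuracy", cv=5)
search.fit(X_train, y_train)

y_pred = search.predict(X_test)
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("F1       :", f1_score(y_test, y_pred))
```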
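A similar hedged sketch for the unsupervised path: PCA stands in for the unspecified dimensionality-reduction step, and the Hierarchical Clustering and Gaussian Mixture Model named in the abstract are scored against the known labels. Again, X and y are synthetic placeholders rather than the paper's dataset.

```python
# Minimal sketch (not the authors' code): dimensionality reduction followed by
# hierarchical clustering and a Gaussian Mixture Model.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering
from sklearn.mixture import GaussianMixture
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((500, 20))              # placeholder EHR feature matrix
y = rng.integers(0, 2, size=500)       # placeholder binary severity labels

# PCA is used here as a stand-in for the unspecified dimensionality reduction.
Z = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(X))

def clustering_accuracy(y_true, labels):
    """Cluster indices are arbitrary, so score both possible label assignments."""
    return max(accuracy_score(y_true, labels), accuracy_score(y_true, 1 - labels))

hier = AgglomerativeClustering(n_clusters=2).fit_predict(Z)
gmm = GaussianMixture(n_components=2, random_state=0).fit_predict(Z)
print("hierarchical clustering accuracy:", clustering_accuracy(y, hier))
print("GMM accuracy                    :", clustering_accuracy(y, gmm))
```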
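Finally, a sketch of the deep and large-margin learners: scikit-learn's MLPClassifier stands in for the multi-layer feed-forward network, and LinearSVC stands in for RapidMiner's Fast Large Margin operator (a linear max-margin learner). This sketch trains both in a plain supervised fashion and does not reproduce the paper's semi-supervised setup; it shares the same synthetic placeholder data as the sketches above and is meant only to make the reported pipeline concrete.

```python
# Minimal sketch (not the authors' code): feed-forward network and a linear
# max-margin classifier standing in for the Fast Large Margin operator.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.random((500, 20))              # placeholder EHR feature matrix
y = rng.integers(0, 2, size=500)       # placeholder binary severity labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
svm = LinearSVC(C=1.0, random_state=0)
for name, model in [("feed-forward NN", mlp), ("linear max-margin", svm)]:
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))
```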
Related papers
- Deep learning-driven scheduling algorithm for a single machine problem minimizing the total tardiness [0.0]
We propose a deep neural network that acts as a decomposition-time estimator of the criterion value used in a single-pass scheduling algorithm.
We show that our machine learning-driven approach can efficiently generalize information from the training phase to significantly larger instances.
arXiv Detail & Related papers (2024-02-19T15:34:09Z)
- Confidence-Nets: A Step Towards better Prediction Intervals for regression Neural Networks on small datasets [0.0]
We propose an ensemble method that attempts to estimate the uncertainty of predictions, increase their accuracy and provide an interval for the expected variation.
The proposed method is tested on various datasets, and a significant improvement in the performance of the neural network model is seen.
arXiv Detail & Related papers (2022-10-31T06:38:40Z)
- Classification and Self-Supervised Regression of Arrhythmic ECG Signals Using Convolutional Neural Networks [13.025714736073489]
We propose a deep neural network model capable of solving regression and classification tasks.
We tested the model on the MIT-BIH Arrhythmia database.
arXiv Detail & Related papers (2022-10-25T18:11:13Z)
- MLGWSC-1: The first Machine Learning Gravitational-Wave Search Mock Data Challenge [110.7678032481059]
We present the results of the first Machine Learning Gravitational-Wave Search Mock Data Challenge (MLGWSC-1).
For this challenge, participating groups had to identify gravitational-wave signals from binary black hole mergers of increasing complexity and duration embedded in progressively more realistic noise.
Our results show that current machine learning search algorithms may already be sensitive enough in limited parameter regions to be useful for some production settings.
arXiv Detail & Related papers (2022-09-22T16:44:59Z)
- Machine Learning Methods for Spectral Efficiency Prediction in Massive MIMO Systems [0.0]
We study several machine learning approaches to solve the problem of estimating the spectral efficiency (SE) value for a certain precoding scheme, preferably in the shortest possible time.
The best results in terms of mean absolute percentage error (MAPE) are obtained with gradient boosting over sorted features, while linear models demonstrate worse prediction quality.
We investigate the practical applicability of the proposed algorithms in a wide range of scenarios generated by the Quadriga simulator.
arXiv Detail & Related papers (2021-12-29T07:03:10Z)
- Towards Reducing Labeling Cost in Deep Object Detection [61.010693873330446]
We propose a unified framework for active learning that considers both the uncertainty and the robustness of the detector.
Our method is able to pseudo-label the very confident predictions, suppressing a potential distribution drift.
arXiv Detail & Related papers (2021-06-22T16:53:09Z)
- Effective Model Sparsification by Scheduled Grow-and-Prune Methods [73.03533268740605]
We propose a novel scheduled grow-and-prune (GaP) methodology without pre-training the dense models.
Experiments have shown that such models can match or beat the quality of highly optimized dense models at 80% sparsity on a variety of tasks.
arXiv Detail & Related papers (2021-06-18T01:03:13Z)
- Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver: generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
- Fast accuracy estimation of deep learning based multi-class musical source separation [79.10962538141445]
We propose a method to evaluate the separability of instruments in any dataset without training and tuning a neural network.
Based on the oracle principle with an ideal ratio mask, our approach is an excellent proxy to estimate the separation performances of state-of-the-art deep learning approaches.
arXiv Detail & Related papers (2020-10-19T13:05:08Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing the Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy rate, precision, low false alarm rate, and recall.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
- ROAM: Random Layer Mixup for Semi-Supervised Learning in Medical Imaging [43.26668942258135]
Medical image segmentation is one of the major challenges addressed by machine learning methods.
We propose ROAM, a RandOm lAyer Mixup, which generates more data points that have never been seen before (a plain mixup sketch follows this list).
ROAM achieves state-of-the-art (SOTA) results in fully supervised (89.5%) and semi-supervised (87.0%) settings, with relative improvements of up to 2.40% and 16.50%, respectively, for whole-brain segmentation.
arXiv Detail & Related papers (2020-03-20T18:07:12Z)
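For reference, here is a minimal sketch of the plain mixup operation that ROAM builds on; ROAM applies the interpolation at a randomly chosen hidden layer, which this standalone function does not model, and the function name and defaults are illustrative rather than taken from the paper.

```python
# Minimal sketch of standard mixup: convexly combine two examples and their
# (soft) labels with a Beta-distributed mixing coefficient.
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    """Return a mixed input and a correspondingly mixed soft label."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)       # mixing coefficient ~ Beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```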