Prediction of MRI Hardware Failures based on Image Features using
Ensemble Learning
- URL: http://arxiv.org/abs/2001.01213v1
- Date: Sun, 5 Jan 2020 11:21:28 GMT
- Title: Prediction of MRI Hardware Failures based on Image Features using
Ensemble Learning
- Authors: Nadine Kuhnert, Lea Pflüger, Andreas Maier
- Abstract summary: In this work, we focus on predicting failures of 20-channel Head/Neck coils using image-related measurements.
To solve this problem, we use data of two different levels. One level refers to one-dimensional features per individual coil channel on which we found a fully connected neural network to perform best.
The other data level uses matrices which represent the overall coil condition and feeds a different neural network.
- Score: 8.889876750552615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In order to ensure trouble-free operation, prediction of hardware failures is
essential. This applies especially to medical systems. Our goal is to determine
hardware which needs to be exchanged before failing. In this work, we focus on
predicting failures of 20-channel Head/Neck coils using image-related
measurements. Thus, we aim to solve a classification problem with two classes,
normal and broken coil. To solve this problem, we use data of two different
levels. One level refers to one-dimensional features per individual coil
channel on which we found a fully connected neural network to perform best. The
other data level uses matrices which represent the overall coil condition and
feeds a different neural network. We stack the predictions of those two
networks and train a Random Forest classifier as the ensemble learner. Thus,
combining insights of both trained models improves the prediction results and
allows us to determine the coil's condition with an F-score of 94.14% and an
accuracy of 99.09%.
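As a rough illustration of the stacking described in the abstract, the sketch below trains two classifiers on synthetic stand-ins for the two data levels, collects their out-of-fold probabilities, and fits a Random Forest on the stacked predictions. The feature shapes, the scikit-learn MLPs standing in for the paper's networks, and all hyperparameters are assumptions, not the authors' configuration.

```python
# Minimal sketch of the two-level stacking ensemble described in the abstract.
# Shapes, hyperparameters, and the use of scikit-learn MLPs instead of the
# paper's exact network architectures are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_coils = 500

# Level 1: one-dimensional features per coil channel (flattened to one vector
# per coil); Level 2: a matrix describing the overall coil condition.
X_channel = rng.normal(size=(n_coils, 20 * 8))       # e.g. 8 features x 20 channels
X_matrix = rng.normal(size=(n_coils, 20, 20)).reshape(n_coils, -1)
y = rng.integers(0, 2, size=n_coils)                 # 0 = normal, 1 = broken coil

net_channel = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net_matrix = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)

# Out-of-fold probabilities from both networks become the meta-features.
p_channel = cross_val_predict(net_channel, X_channel, y, cv=5, method="predict_proba")
p_matrix = cross_val_predict(net_matrix, X_matrix, y, cv=5, method="predict_proba")
meta_features = np.hstack([p_channel, p_matrix])

# Random Forest acts as the ensemble learner on the stacked predictions.
ensemble = RandomForestClassifier(n_estimators=200, random_state=0)
ensemble.fit(meta_features, y)
```

At inference time, the two networks' predicted probabilities for a new coil would be concatenated in the same way before being passed to the Random Forest.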
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z)
- Recurrent and Convolutional Neural Networks in Classification of EEG Signal for Guided Imagery and Mental Workload Detection [0.9895793818721335]
This paper presents the results of investigating a cohort of 26 students exposed to the Guided Imagery relaxation technique and mental task workloads, recorded with a dense-array electroencephalographic amplifier.
arXiv Detail & Related papers (2024-05-27T07:49:30Z)
- Utilizing Machine Learning and 3D Neuroimaging to Predict Hearing Loss: A Comparative Analysis of Dimensionality Reduction and Regression Techniques [0.0]
We have explored machine learning approaches for predicting hearing loss thresholds on the brain's gray matter 3D images.
In the first phase, we used a 3D CNN model to reduce high-dimensional input into latent space.
In the second phase, we utilized this model to reduce input into rich features.
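A minimal sketch of this two-phase idea, assuming toy volume sizes and a Ridge regressor as the second stage: a small 3D CNN encoder compresses each gray-matter volume into a latent vector, and a classical model regresses the hearing threshold from those features. The architecture and dimensions are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: phase 1 reduces 3D volumes to a latent vector with a small
# 3D CNN encoder; phase 2 regresses hearing thresholds from those features.
import torch
import torch.nn as nn
from sklearn.linear_model import Ridge

class Encoder3D(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.proj = nn.Linear(16, latent_dim)

    def forward(self, x):
        h = self.features(x).flatten(1)   # (batch, 16)
        return self.proj(h)               # (batch, latent_dim)

encoder = Encoder3D()
volumes = torch.randn(10, 1, 32, 32, 32)          # toy gray-matter volumes
thresholds = torch.randn(10).numpy()              # toy hearing-loss thresholds

with torch.no_grad():
    latent = encoder(volumes).numpy()             # phase 1: dimensionality reduction

Ridge(alpha=1.0).fit(latent, thresholds)          # phase 2: regression on rich features
```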
arXiv Detail & Related papers (2024-04-30T18:39:41Z)
- Cross-dataset domain adaptation for the classification of COVID-19 using chest computed tomography images [0.6798775532273751]
COVID19-DANet is based on a pre-trained CNN backbone for feature extraction.
It is tested under four cross-dataset scenarios using the SARS-CoV-2-CT and COVID19-CT datasets.
arXiv Detail & Related papers (2023-11-14T20:36:34Z)
- Can neural networks count digit frequency? [16.04455549316468]
We compare the performance of different classical machine learning models and neural networks in identifying the frequency of occurrence of each digit in a given number.
We observe that the neural networks significantly outperform the classical machine learning models in terms of both the regression and classification metrics for both the 6-digit and 10-digit numbers.
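A small sketch of the underlying task, assuming a 6-digit setting and scikit-learn models as stand-ins for the compared approaches: each number is mapped to a 10-dimensional vector of digit counts, and a linear baseline is compared against a small neural network on the same split.

```python
# Sketch of the digit-frequency task: the target for each number is a
# 10-dimensional vector counting how often each digit 0-9 occurs. Data sizes
# and model choices are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def digit_counts(number: int, width: int = 6) -> np.ndarray:
    """Count occurrences of each digit 0-9 in a zero-padded number."""
    digits = f"{number:0{width}d}"
    return np.array([digits.count(str(d)) for d in range(10)], dtype=float)

numbers = rng.integers(0, 10**6, size=5000)
X = np.array([[int(c) for c in f"{n:06d}"] for n in numbers], dtype=float)
y = np.array([digit_counts(n) for n in numbers])

# Compare a classical baseline with a small neural network on the same split.
split = 4000
for model in (LinearRegression(), MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)):
    model.fit(X[:split], y[:split])
    print(type(model).__name__, model.score(X[split:], y[split:]))
```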
arXiv Detail & Related papers (2023-09-25T03:45:36Z)
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialized as an equiangular tight frame (ETF) and fixed during training.
Our experimental results show that our method is able to achieve similar performances on image classification for balanced datasets.
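A hedged sketch of this fixed-classifier idea, assuming toy dimensions and a toy backbone: the final linear layer is set to a simplex equiangular tight frame and frozen, so only the feature extractor is trained.

```python
# Sketch of fixing the final classifier as a simplex equiangular tight frame
# (ETF) and training only the feature extractor; dimensions and the toy
# backbone are assumptions, not the paper's setup.
import torch
import torch.nn as nn

def simplex_etf(feat_dim: int, num_classes: int) -> torch.Tensor:
    """Return a (num_classes, feat_dim) simplex-ETF classifier matrix."""
    # Partial orthogonal matrix with orthonormal columns (feat_dim x num_classes).
    u, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes))
    center = torch.eye(num_classes) - torch.full((num_classes, num_classes), 1.0 / num_classes)
    scale = (num_classes / (num_classes - 1)) ** 0.5
    return scale * (u @ center).t()

feat_dim, num_classes = 64, 10
backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, feat_dim))

classifier = nn.Linear(feat_dim, num_classes, bias=False)
classifier.weight.data = simplex_etf(feat_dim, num_classes)
classifier.weight.requires_grad_(False)            # classifier stays fixed during training

# Only backbone parameters are optimized; logits come from the frozen ETF.
optimizer = torch.optim.SGD(backbone.parameters(), lr=0.1)
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, num_classes, (32,))
loss = nn.functional.cross_entropy(classifier(backbone(x)), y)
loss.backward()
optimizer.step()
```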
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
- Anomaly Detection using Capsule Networks for High-dimensional Datasets [0.0]
This study uses a capsule network for the anomaly detection task.
To the best of our knowledge, this is the first instance where a capsule network is analyzed for the anomaly detection task in a high-dimensional complex data setting.
arXiv Detail & Related papers (2021-12-27T05:07:02Z)
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
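A hedged sketch of just the uncertainty step mentioned above: voxel-wise predictive entropy from the segmentation network's softmax output is thresholded to split voxels into confident (pseudo-labeled) and uncertain sets, which sets up the semi-supervised graph problem. The GCN refinement itself is not shown, and the threshold and shapes are assumptions.

```python
# Sketch of turning segmentation uncertainty into a semi-supervised setup:
# low-entropy voxels act as labeled nodes, high-entropy voxels as unlabeled
# nodes whose labels a downstream GCN would refine.
import numpy as np

rng = np.random.default_rng(0)
num_classes, shape = 3, (16, 16, 16)

# Toy softmax probabilities per voxel, e.g. from a segmentation CNN.
logits = rng.normal(size=shape + (num_classes,))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1)     # voxel-wise uncertainty
threshold = np.quantile(entropy, 0.8)

confident_mask = entropy <= threshold     # pseudo-labeled nodes for the graph problem
uncertain_mask = entropy > threshold      # nodes whose labels the GCN would refine
pseudo_labels = probs.argmax(axis=-1)[confident_mask]
```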
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- Deep F-measure Maximization for End-to-End Speech Understanding [52.36496114728355]
We propose a differentiable approximation to the F-measure and train the network with this objective using standard backpropagation.
We perform experiments on two standard fairness datasets (Adult, and Communities and Crime), as well as on speech-to-intent detection on the ATIS dataset and speech-to-image concept classification on the Speech-COCO dataset.
In all four of these tasks, the F-measure objective results in improved micro-F1 scores, with gains of up to 8% absolute compared to models trained with the cross-entropy loss function.
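A minimal sketch of one common differentiable F-measure surrogate, in which hard true/false positive counts are replaced by sums of predicted probabilities so the objective can be optimized with standard backpropagation; the paper's exact approximation may differ.

```python
# Illustrative soft-F1 loss: tp/fp/fn are computed from probabilities rather
# than hard decisions, making the F-measure differentiable.
import torch

def soft_f1_loss(probs: torch.Tensor, targets: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """probs, targets: (batch,) tensors with values in [0, 1]; returns 1 - soft F1."""
    tp = (probs * targets).sum()
    fp = (probs * (1 - targets)).sum()
    fn = ((1 - probs) * targets).sum()
    f1 = 2 * tp / (2 * tp + fp + fn + eps)
    return 1.0 - f1                       # minimize to maximize the F-measure

logits = torch.randn(16, requires_grad=True)
targets = torch.randint(0, 2, (16,)).float()
loss = soft_f1_loss(torch.sigmoid(logits), targets)
loss.backward()                           # gradients flow through the soft counts
```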
arXiv Detail & Related papers (2020-08-08T03:02:27Z)
- Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
arXiv Detail & Related papers (2020-02-21T17:35:50Z)
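For reference, a minimal focal loss sketch for the multi-class case: the cross-entropy term is down-weighted by (1 - p_t)^gamma so that easy, confident examples contribute less, which the paper links to better calibration. The value gamma = 2.0 is a common default, not necessarily the paper's setting.

```python
# Illustrative multi-class focal loss built from log-softmax probabilities.
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)   # log p of the true class
    pt = log_pt.exp()
    return (-((1.0 - pt) ** gamma) * log_pt).mean()

logits = torch.randn(8, 5, requires_grad=True)
targets = torch.randint(0, 5, (8,))
focal_loss(logits, targets).backward()
```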
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.