X-Ray bone abnormalities detection using MURA dataset
- URL: http://arxiv.org/abs/2008.03356v1
- Date: Fri, 7 Aug 2020 19:58:56 GMT
- Title: X-Ray bone abnormalities detection using MURA dataset
- Authors: A. Solovyova, I. Solovyov
- Abstract summary: We introduce a deep network trained on the MURA dataset released by Stanford University in 2017.
Our system detects bone abnormalities on radiographs and visualises the affected zones.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a deep network trained on the MURA dataset released by
Stanford University in 2017. Our system detects bone abnormalities on
radiographs and visualises the affected zones. We found that our solution has
accuracy comparable to the best results achieved by other development teams
using the MURA dataset: the overall Kappa score achieved by our team is about
0.942 on the wrist, 0.862 on the hand and 0.735 on the shoulder (compared with
the best results available on the official website at this time: 0.931, 0.851
and 0.729 respectively). Despite these good results, there are many directions
for future enhancement of the proposed technology. We see great potential in
the further development of computer-aided diagnosis (CAD) systems for
radiographs, as they will help practitioners diagnose bone fractures as well as
bone oncology cases faster and with higher accuracy.
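The Kappa scores reported above measure chance-corrected agreement between the model's predictions and the reference labels. A minimal sketch of how Cohen's kappa is computed for binary abnormality labels (the label sequences below are hypothetical, purely for illustration):

```python
# Cohen's kappa: agreement between two raters, corrected for chance.
# 1 = abnormal study, 0 = normal study (hypothetical labels).

def cohens_kappa(y_true, y_pred):
    """Compute Cohen's kappa for two equal-length label sequences."""
    n = len(y_true)
    labels = sorted(set(y_true) | set(y_pred))
    # Observed agreement: fraction of exactly matching predictions.
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Expected agreement under chance, from the marginal frequencies.
    p_e = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)

y_true = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0, 1, 1]
print(round(cohens_kappa(y_true, y_pred), 3))
```

A kappa of 0 means agreement no better than chance, and 1 means perfect agreement, which is why scores above 0.9 on the wrist are notable.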
Related papers
- DDxT: Deep Generative Transformer Models for Differential Diagnosis [51.25660111437394]
We show that a generative approach trained with simpler supervised and self-supervised learning signals can achieve superior results on the current benchmark.
The proposed Transformer-based generative network, named DDxT, autoregressively produces a set of possible pathologies, i.e., DDx, and predicts the actual pathology using a neural network.
arXiv Detail & Related papers (2023-12-02T22:57:25Z)
- Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification [52.77024349608834]
We investigate the chest radiograph (CXR) classification performance of vision transformers (ViTs) and the interpretability of attention-based saliency maps.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs achieved CXR classification AUCs comparable to state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z)
- Learning to diagnose common thorax diseases on chest radiographs from radiology reports in Vietnamese [0.33598755777055367]
We propose a data collecting and annotation pipeline that extracts information from Vietnamese radiology reports to provide accurate labels for chest X-ray (CXR) images.
This can benefit Vietnamese radiologists and clinicians by annotating data that closely match their endemic diagnosis categories which may vary from country to country.
arXiv Detail & Related papers (2022-09-11T06:06:03Z)
- Multi-Label Classification of Thoracic Diseases using Dense Convolutional Network on Chest Radiographs [0.0]
We propose a multi-label disease prediction model that allows the detection of more than one pathology at a given test time.
Our proposed model achieved the highest AUC score of 0.896 for the condition Cardiomegaly.
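Multi-label prediction, as described above, differs from ordinary classification in that each pathology gets its own independent score, so several findings can be positive at once. A minimal sketch of the thresholding step (pathology names and logit values are hypothetical):

```python
import math

# Multi-label prediction sketch: the network emits one logit per
# pathology; each is passed through a sigmoid and thresholded
# independently, so more than one finding can be reported per image.
# Pathology names and logits below are hypothetical.

PATHOLOGIES = ["Cardiomegaly", "Effusion", "Pneumothorax"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_multilabel(logits, threshold=0.5):
    """Return every pathology whose sigmoid probability passes the threshold."""
    return [
        name for name, z in zip(PATHOLOGIES, logits)
        if sigmoid(z) >= threshold
    ]

print(predict_multilabel([2.1, -0.3, 0.4]))  # → ['Cardiomegaly', 'Pneumothorax']
```

This is why per-class metrics such as the per-condition AUC quoted above are the natural way to evaluate such a model.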
arXiv Detail & Related papers (2022-02-08T00:43:57Z)
- Osteoporosis Prescreening using Panoramic Radiographs through a Deep Convolutional Neural Network with Attention Mechanism [65.70943212672023]
Deep convolutional neural network (CNN) with an attention module can detect osteoporosis on panoramic radiographs.
A dataset of 70 panoramic radiographs (PRs) from 70 different subjects aged 49 to 60 was used.
arXiv Detail & Related papers (2021-10-19T00:03:57Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification using a large, richly annotated dataset.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Deep Learning for fully automatic detection, segmentation, and Gleason Grade estimation of prostate cancer in multiparametric Magnetic Resonance Images [0.731365367571807]
This paper proposes a fully automatic system based on Deep Learning that takes a prostate mpMRI from a PCa-suspect patient.
It locates PCa lesions, segments them, and predicts their most likely Gleason grade group (GGG).
The code for the ProstateX-trained system has been made openly available at https://github.com/OscarPellicer/prostate_lesion_detection.
arXiv Detail & Related papers (2021-03-23T16:08:43Z)
- Chest x-ray automated triage: a semiologic approach designed for clinical implementation, exploiting different types of labels through a combination of four Deep Learning architectures [83.48996461770017]
This work presents a Deep Learning method based on the late fusion of different convolutional architectures.
We built four training datasets combining images from public chest x-ray datasets and our institutional archive.
We trained four different Deep Learning architectures and combined their outputs with a late fusion strategy, obtaining a unified tool.
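The late fusion strategy mentioned above combines the independent outputs of several trained networks into one prediction. A minimal sketch, assuming the fusion is a (possibly weighted) average of per-class probabilities; the model outputs and weights below are hypothetical:

```python
# Late fusion sketch: each architecture produces a per-class probability
# vector independently; the unified tool averages them. The four
# probability vectors and equal weights below are hypothetical.

def late_fusion(predictions, weights=None):
    """Weighted average of per-class probability vectors from several models."""
    n_models = len(predictions)
    weights = weights or [1.0 / n_models] * n_models
    n_classes = len(predictions[0])
    fused = [0.0] * n_classes
    for probs, w in zip(predictions, weights):
        for i, p in enumerate(probs):
            fused[i] += w * p
    return fused

# Hypothetical outputs of four CNNs for one chest x-ray (3 findings).
preds = [
    [0.80, 0.10, 0.10],
    [0.70, 0.20, 0.10],
    [0.60, 0.25, 0.15],
    [0.90, 0.05, 0.05],
]
print(late_fusion(preds))
```

Averaging at the probability level lets each architecture be trained on a different label set, which is what makes combining heterogeneous public datasets practical.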
arXiv Detail & Related papers (2020-12-23T14:38:35Z)
- Automating Abnormality Detection in Musculoskeletal Radiographs through Deep Learning [0.0]
MuRAD is a tool that helps radiologists automate the detection of abnormalities in musculoskeletal radiographs (bone X-rays).
MuRAD utilizes a Convolutional Neural Network (CNN) that can accurately predict whether a bone X-ray is abnormal.
MuRAD achieves an F1 score of 0.822 and a Cohen's kappa of 0.699, which is comparable to the performance of expert radiologists.
arXiv Detail & Related papers (2020-10-21T01:48:56Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- A Computer-Aided Diagnosis System Using Artificial Intelligence for Hip Fractures -Multi-Institutional Joint Development Research- [12.529791744398596]
We developed a Computer-aided diagnosis system for plane frontal hip X-rays with a deep learning model trained on a large dataset collected at multiple centers.
The diagnostic accuracy of the model was 96.1%, with a sensitivity of 95.2%, specificity of 96.9%, F-value of 0.961, and AUC of 0.99.
arXiv Detail & Related papers (2020-03-11T11:16:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.