Pristine annotations-based multi-modal trained artificial intelligence
solution to triage chest X-ray for COVID-19
- URL: http://arxiv.org/abs/2011.05186v1
- Date: Tue, 10 Nov 2020 15:36:08 GMT
- Title: Pristine annotations-based multi-modal trained artificial intelligence
solution to triage chest X-ray for COVID-19
- Authors: Tao Tan, Bipul Das, Ravi Soni, Mate Fejes, Sohan Ranjan, Daniel Attila
Szabo, Vikram Melapudi, K S Shriram, Utkarsh Agrawal, Laszlo Rusko, Zita
Herczeg, Barbara Darazs, Pal Tegzes, Lehel Ferenczi, Rakesh Mullick, Gopal
Avinash
- Abstract summary: The COVID-19 pandemic continues to spread and impact the well-being of the global population.
Front-line modalities including computed tomography (CT) and X-ray play an important role in triaging COVID patients.
Considering the limited access to resources (both hardware and trained personnel) and decontamination considerations, CT may not be ideal for triaging suspected subjects.
- Score: 1.1764495014312295
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The COVID-19 pandemic continues to spread and impact the well-being of the
global population. The front-line modalities including computed tomography (CT)
and X-ray play an important role in triaging COVID patients. Considering the
limited access to resources (both hardware and trained personnel) and
decontamination considerations, CT may not be ideal for triaging suspected
subjects. Because X-ray-based triaging and monitoring require experienced
radiologists to identify COVID patients in a timely manner and to further
delineate the disease region boundary, artificial intelligence (AI)-assisted
X-ray applications are seen as a promising solution. Our proposed solution
differs from existing solutions by industry and academic communities in that it
demonstrates a functional AI model that triages by performing inference on a
single X-ray image, while the deep-learning model is trained using both X-ray
and CT data. We report on how such multi-modal training improves the solution
compared to X-ray-only training. The multi-modal solution increases the AUC
(area under the receiver operating characteristic curve) from 0.89 to 0.93 and
also improves the Dice coefficient (from 0.59 to 0.62) for localizing the
pathology. To the best of our knowledge, it is the first X-ray solution
developed by leveraging multi-modal information.
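The two figures of merit quoted above are standard classification and segmentation metrics. As a rough reference, below is a minimal sketch of how AUC and the Dice coefficient are typically computed for a triage classifier and a pathology mask; the function names, toy data, and thresholds are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch (not the authors' code): computing the two reported metrics.
import numpy as np
from sklearn.metrics import roc_auc_score


def dice_coefficient(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)


# Hypothetical per-image COVID probability scores vs. ground-truth labels -> AUC.
labels = np.array([0, 1, 1, 0, 1])
scores = np.array([0.2, 0.9, 0.7, 0.4, 0.8])
print("AUC:", roc_auc_score(labels, scores))

# Hypothetical predicted lesion mask vs. reference mask -> Dice.
pred_mask = np.zeros((4, 4), dtype=bool); pred_mask[1:3, 1:3] = True
true_mask = np.zeros((4, 4), dtype=bool); true_mask[1:3, 0:3] = True
print("Dice:", dice_coefficient(pred_mask, true_mask))
```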
Related papers
- Unsupervised Training of Neural Cellular Automata on Edge Devices [2.5462695047893025]
We implement Cellular Automata training directly on smartphones for accessible X-ray lung segmentation.
We confirm the practicality and feasibility of deploying and training these advanced models on five Android devices.
In extreme cases where no digital copy is available and images must be captured by a phone from an X-ray lightbox or monitor, VWSL enhances Dice accuracy by 5-20%.
arXiv Detail & Related papers (2024-07-25T15:21:54Z) - Advancing human-centric AI for robust X-ray analysis through holistic self-supervised learning [33.9544297423474]
We present RayDINO, a large visual encoder trained by self-supervision on 873k chest X-rays.
We compare RayDINO to previous state-of-the-art models across nine radiology tasks, from classification and dense segmentation to text generation.
Our findings suggest that self-supervision allows patient-centric AI that proves useful in clinical workflows and interprets X-rays holistically.
arXiv Detail & Related papers (2024-05-02T16:59:10Z) - A Deep Learning Technique using a Sequence of Follow Up X-Rays for
Disease classification [3.3345134768053635]
The ability to predict lung- and heart-related diseases using deep learning techniques is of central interest to many researchers.
We present the hypothesis that including the follow-up history of a patient's three most recent chest X-ray images improves disease classification.
arXiv Detail & Related papers (2022-03-28T19:58:47Z) - Advancing COVID-19 Diagnosis with Privacy-Preserving Collaboration in
Artificial Intelligence [79.038671794961]
We launch the Unified CT-COVID AI Diagnostic Initiative (UCADI), in which the AI model can be trained in a distributed manner and executed independently at each host institution.
Our study is based on 9,573 chest computed tomography scans (CTs) from 3,336 patients collected from 23 hospitals located in China and the UK.
arXiv Detail & Related papers (2021-11-18T00:43:41Z) - Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z) - COVID-Net US: A Tailored, Highly Efficient, Self-Attention Deep
Convolutional Neural Network Design for Detection of COVID-19 Patient Cases
from Point-of-care Ultrasound Imaging [101.27276001592101]
We introduce COVID-Net US, a highly efficient, self-attention deep convolutional neural network design tailored for COVID-19 screening from lung POCUS images.
Experimental results show that the proposed COVID-Net US can achieve an AUC of over 0.98 while achieving 353X lower architectural complexity, 62X lower computational complexity, and 14.3X faster inference times on a Raspberry Pi.
To advocate affordable healthcare and artificial intelligence for resource-constrained environments, we have made COVID-Net US open source and publicly available as part of the COVID-Net open source initiative.
arXiv Detail & Related papers (2021-08-05T16:47:33Z) - In-Line Image Transformations for Imbalanced, Multiclass Computer Vision
Classification of Lung Chest X-Rays [91.3755431537592]
This study leverages a body of literature to apply image transformations that compensate for the lack of COVID-19 LCXR data.
Deep learning techniques such as convolutional neural networks (CNNs) are able to select features that distinguish between healthy and disease states.
This study utilizes a simple CNN architecture for high-performance multiclass LCXR classification at 94 percent accuracy.
arXiv Detail & Related papers (2021-04-06T02:01:43Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - COVID-19 identification from volumetric chest CT scans using a
progressively resized 3D-CNN incorporating segmentation, augmentation, and
class-rebalancing [4.446085353384894]
COVID-19 is a global pandemic spreading rapidly worldwide.
Computer-aided screening tools with greater sensitivity are imperative for disease diagnosis and prognosis.
This article proposes a 3D Convolutional Neural Network (CNN)-based classification approach.
arXiv Detail & Related papers (2021-02-11T18:16:18Z) - Learning Invariant Feature Representation to Improve Generalization
across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well when tested on the same dataset it was trained on starts to perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z) - Exploration of Interpretability Techniques for Deep COVID-19
Classification using Chest X-ray Images [10.01138352319106]
Five different deep learning models (ResNet18, ResNet34, InceptionV3, InceptionResNetV2, and DenseNet161) and their ensemble have been used in this paper to classify COVID-19, pneumonia, and healthy subjects using chest X-ray images.
The mean Micro-F1 score of the models for COVID-19 classification ranges from 0.66 to 0.875, and is 0.89 for the ensemble of the network models.
arXiv Detail & Related papers (2020-06-03T22:55:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences.