Semi-Supervised Active Learning for COVID-19 Lung Ultrasound
Multi-symptom Classification
- URL: http://arxiv.org/abs/2009.05436v2
- Date: Sun, 28 Feb 2021 08:47:52 GMT
- Title: Semi-Supervised Active Learning for COVID-19 Lung Ultrasound
Multi-symptom Classification
- Authors: Lei Liu, Wentao Lei, Yongfang Luo, Cheng Feng, Xiang Wan, Li Liu
- Abstract summary: We propose a novel semi-supervised Two-Stream Active Learning (TSAL) method to model complicated features and reduce labeling costs.
On this basis, a multi-symptom multi-label (MSML) classification network is proposed to learn discriminative features of lung symptoms.
A novel lung US dataset named COVID19-LUSMS is built, currently containing 71 clinical patients with 6,836 images sampled from 678 videos.
- Score: 13.878896181984262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultrasound (US) is a non-invasive yet effective medical diagnostic imaging
technique used during the COVID-19 global pandemic. However, due to the complex
feature behaviors and expensive annotations of US images, it is difficult to apply
Artificial Intelligence (AI)-assisted approaches to lung multi-symptom
(multi-label) classification. To overcome these difficulties, we propose a
novel semi-supervised Two-Stream Active Learning (TSAL) method that models
complicated features and reduces labeling costs in an iterative procedure. The
core component of TSAL is its multi-label learning mechanism, in which label
correlation information is used to design a multi-label margin (MLM) strategy
and a confidence validation step that automatically select informative samples and
confident labels. On this basis, a multi-symptom multi-label (MSML)
classification network is proposed to learn discriminative features of lung
symptoms, and human-machine interaction is exploited to confirm the final
annotations, which are used to fine-tune MSML with progressively labeled data.
Moreover, a novel lung US dataset named COVID19-LUSMS is built, currently
containing 71 clinical patients with 6,836 images sampled from 678 videos.
Experimental evaluations show that TSAL using only 20% of the data achieves
performance superior to both the baseline and the state-of-the-art. Qualitatively,
visualization of the attention maps and sample distributions confirms good
consistency with clinical knowledge.
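The abstract describes only at a high level how the multi-label margin (MLM) strategy and confidence validation pick informative samples and confident labels; the exact formulation is not reproduced here. Below is a minimal sketch of how such a margin-based query step might look, assuming sigmoid multi-label outputs from an MSML-style classifier. The threshold, budget, and function names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a multi-label margin (MLM) style active-learning
# selection step. The threshold, budget, and names are assumptions made for
# demonstration; they do not reproduce the paper's exact formulation.
import numpy as np

def select_informative_samples(probs: np.ndarray, budget: int,
                               confident_tau: float = 0.9):
    """Rank unlabeled samples by a multi-label margin and split labels into
    'confident' (kept automatically) and 'uncertain' (sent to annotators).

    probs: (n_samples, n_labels) sigmoid outputs of the multi-label classifier.
    budget: number of samples forwarded to human annotation this round.
    confident_tau: probability threshold beyond which a label is trusted.
    """
    # Margin of each label prediction from the decision boundary at 0.5:
    # a small margin means the model is unsure about that symptom.
    margins = np.abs(probs - 0.5)                       # (n, L)

    # Sample-level informativeness: the least certain label dominates.
    sample_margin = margins.min(axis=1)                 # (n,)

    # Most informative samples = smallest multi-label margin.
    query_idx = np.argsort(sample_margin)[:budget]

    # Confidence validation: labels whose probability is far from 0.5
    # are accepted automatically; the rest are left to the annotator.
    confident_mask = (probs >= confident_tau) | (probs <= 1.0 - confident_tau)
    pseudo_labels = (probs >= 0.5).astype(np.int8)
    return query_idx, pseudo_labels, confident_mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_probs = rng.uniform(size=(100, 4))             # 4 hypothetical symptoms
    queries, pseudo, mask = select_informative_samples(fake_probs, budget=20)
    print("samples sent for annotation:", queries[:5], "...")
    print("fraction of labels accepted automatically:", mask.mean())
```

In the iterative procedure sketched in the abstract, the queried samples would then be confirmed through human-machine interaction and appended to the labeled pool used to fine-tune the MSML network.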
Related papers
- SMILE-UHURA Challenge -- Small Vessel Segmentation at Mesoscopic Scale from Ultra-High Resolution 7T Magnetic Resonance Angiograms [60.35639972035727]
The lack of publicly available annotated datasets has impeded the development of robust, machine learning-driven segmentation algorithms.
The SMILE-UHURA challenge addresses the gap in publicly available annotated datasets by providing an annotated dataset of Time-of-Flight angiography acquired with 7T MRI.
Dice scores reached up to 0.838 ± 0.066 and 0.716 ± 0.125 on the respective datasets, with an average performance of up to 0.804 ± 0.15.
arXiv Detail & Related papers (2024-11-14T17:06:00Z) - Semi-Supervised Multimodal Multi-Instance Learning for Aortic Stenosis
Diagnosis [6.356639194509079]
We introduce Semi-supervised Multimodal Multiple-Instance Learning (SMMIL), a new deep learning framework for the automatic interpretation of structural heart diseases.
SMMIL can combine information from two input modalities, spectral Dopplers and 2D cineloops, to produce a study-level AS diagnosis.
arXiv Detail & Related papers (2024-03-09T22:23:45Z) - Self-Supervised Multi-Modality Learning for Multi-Label Skin Lesion
Classification [15.757141597485374]
We propose a self-supervised learning algorithm for multi-modality skin lesion classification.
Our algorithm enables the multi-modality learning by maximizing the similarities between paired dermoscopic and clinical images.
Our results show that our algorithm achieves better performance than other state-of-the-art SSL counterparts.
arXiv Detail & Related papers (2023-10-28T04:16:08Z) - Improving Multiple Sclerosis Lesion Segmentation Across Clinical Sites:
A Federated Learning Approach with Noise-Resilient Training [75.40980802817349]
Deep learning models have shown promise for automatically segmenting MS lesions, but the scarcity of accurately annotated data hinders progress in this area.
We introduce a Decoupled Hard Label Correction (DHLC) strategy that considers the imbalanced distribution and fuzzy boundaries of MS lesions.
We also introduce a Centrally Enhanced Label Correction (CELC) strategy, which leverages the aggregated central model as a correction teacher for all sites.
arXiv Detail & Related papers (2023-08-31T00:36:10Z) - Graph-Ensemble Learning Model for Multi-label Skin Lesion Classification
using Dermoscopy and Clinical Images [7.159532626507458]
This study introduces a Graph Convolution Network (GCN) that exploits prior co-occurrence between categories, encoded as a correlation matrix, within the deep learning model for multi-label classification (a generic sketch of this idea appears after this list).
We propose a Graph-Ensemble Learning Model (GELN) that views the prediction from GCN as complementary information of the predictions from the fusion model.
arXiv Detail & Related papers (2023-07-04T13:19:57Z) - Statistical Dependency Guided Contrastive Learning for Multiple Labeling
in Prenatal Ultrasound [56.631021151764955]
Standard plane recognition plays an important role in prenatal ultrasound (US) screening.
We build a novel multi-label learning scheme to identify multiple standard planes and corresponding anatomical structures simultaneously.
arXiv Detail & Related papers (2021-08-11T06:39:26Z) - Active learning for medical code assignment [55.99831806138029]
We demonstrate the effectiveness of Active Learning (AL) in multi-label text classification in the clinical domain.
We apply a set of well-known AL methods to help automatically assign ICD-9 codes on the MIMIC-III dataset.
Our results show that the selection of informative instances provides satisfactory classification with a significantly reduced training set.
arXiv Detail & Related papers (2021-04-12T18:11:17Z) - RCoNet: Deformable Mutual Information Maximization and High-order
Uncertainty-aware Learning for Robust COVID-19 Detection [12.790651338952005]
The novel 2019 Coronavirus (COVID-19) infection has spread worldwide and is currently a major healthcare challenge around the world.
Due to faster imaging time and considerably lower cost than CT, detecting COVID-19 in chest X-ray (CXR) images is preferred for efficient diagnosis, assessment and treatment.
We propose a novel deep network named RCoNet$^k_s$ for robust COVID-19 detection, which employs Deformable Mutual Information Maximization (DeIM), Mixed High-order Moment Feature (MHMF) and Multi-
arXiv Detail & Related papers (2021-02-22T15:13:42Z) - Multi-Modal Active Learning for Automatic Liver Fibrosis Diagnosis based
on Ultrasound Shear Wave Elastography [13.13249599000645]
Non-invasive techniques such as ultrasound (US) imaging play a very important role in automatic liver fibrosis diagnosis (ALFD).
Due to noisy data and the expensive annotation of US images, the application of Artificial Intelligence (AI)-assisted approaches encounters a bottleneck.
In this work, we innovatively propose a multi-modal fusion network with active learning (MMFN-AL) for ALFD to exploit the information of multiple modalities.
arXiv Detail & Related papers (2020-11-02T03:05:24Z) - Collaborative Unsupervised Domain Adaptation for Medical Image Diagnosis [102.40869566439514]
We seek to exploit rich labeled data from relevant domains to aid learning in the target task via Unsupervised Domain Adaptation (UDA).
Unlike most UDA methods that rely on clean labeled data or assume samples are equally transferable, we innovatively propose a Collaborative Unsupervised Domain Adaptation algorithm.
We theoretically analyze the generalization performance of the proposed method, and also empirically evaluate it on both medical and general images.
arXiv Detail & Related papers (2020-07-05T11:49:17Z) - Semi-supervised Medical Image Classification with Relation-driven
Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits unlabeled data by encouraging prediction consistency for a given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
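As noted in the graph-ensemble entry above, label co-occurrence priors can be injected into a multi-label classifier as a correlation matrix. The sketch below is a generic, minimal illustration of that idea, not the GELN implementation: the conditional-probability thresholding, symmetric normalization, and embedding dimensions are all assumptions made for demonstration.

```python
# Illustrative sketch (not the GELN implementation): turning label
# co-occurrence statistics into a correlation matrix and applying one
# graph-convolution step to label embeddings. Threshold, normalization,
# and shapes are assumptions chosen for clarity.
import numpy as np

def cooccurrence_adjacency(labels: np.ndarray, tau: float = 0.1) -> np.ndarray:
    """labels: (n_samples, n_labels) binary multi-label matrix."""
    counts = labels.T @ labels                        # co-occurrence counts (L, L)
    freq = np.diag(counts).astype(float)              # per-label occurrence counts
    cond = counts / np.maximum(freq[:, None], 1.0)    # P(label_j | label_i)
    adj = (cond >= tau).astype(float)                 # drop weak correlations
    np.fill_diagonal(adj, 1.0)                        # keep self-loops
    # Symmetric normalization D^{-1/2} A D^{-1/2}, as in standard GCNs.
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(label_embed: np.ndarray, adj: np.ndarray, weight: np.ndarray):
    """One graph-convolution step: propagate label embeddings along the
    co-occurrence graph, then apply a linear map and ReLU."""
    return np.maximum(adj @ label_embed @ weight, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = (rng.uniform(size=(500, 6)) > 0.7).astype(int)   # 6 hypothetical labels
    adj = cooccurrence_adjacency(y)
    embed = rng.normal(size=(6, 16))                      # label embeddings
    w = rng.normal(size=(16, 16))
    print(gcn_layer(embed, adj, w).shape)                 # (6, 16)
```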