Semixup: In- and Out-of-Manifold Regularization for Deep Semi-Supervised
Knee Osteoarthritis Severity Grading from Plain Radiographs
- URL: http://arxiv.org/abs/2003.01944v3
- Date: Wed, 12 Aug 2020 09:44:29 GMT
- Title: Semixup: In- and Out-of-Manifold Regularization for Deep Semi-Supervised
Knee Osteoarthritis Severity Grading from Plain Radiographs
- Authors: Huy Hoang Nguyen, Simo Saarakkala, Matthew Blaschko, Aleksei Tiulpin
- Abstract summary: Knee osteoarthritis (OA) is one of the highest disability factors in the world.
Deep learning methods can reliably perform the OA severity assessment according to the gold standard Kellgren-Lawrence (KL) grading system.
We propose the Semixup algorithm, a semi-supervised learning (SSL) approach to leverage unlabeled data.
- Score: 3.0969191504482247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knee osteoarthritis (OA) is one of the highest disability factors in the
world. This musculoskeletal disorder is assessed from clinical symptoms, and
typically confirmed via radiographic assessment. This visual assessment done by
a radiologist requires experience, and suffers from moderate to high
inter-observer variability. The recent literature has shown that deep learning
methods can reliably perform the OA severity assessment according to the gold
standard Kellgren-Lawrence (KL) grading system. However, these methods require
large amounts of labeled data, which are costly to obtain. In this study, we
propose the Semixup algorithm, a semi-supervised learning (SSL) approach to
leverage unlabeled data. Semixup relies on consistency regularization using in-
and out-of-manifold samples, together with interpolated consistency. On an
independent test set, our method significantly outperformed other
state-of-the-art SSL methods in most cases. Finally, when compared to a
well-tuned fully supervised baseline that yielded a balanced accuracy (BA) of
$70.9\pm0.8\%$ on the test set, Semixup had comparable performance -- BA of
$71\pm0.8\%$ $(p=0.368)$ while requiring $6$ times less labeled data. These
results show that our proposed SSL method allows building fully automatic OA
severity assessment tools with datasets that are available outside research
settings.
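The core idea can be illustrated with a minimal PyTorch-style sketch of the three loss terms the abstract names (supervised loss, in-manifold consistency, and Mixup-style interpolation consistency). This is not the authors' implementation; `model`, `augment`, and the weights `lambda_in` / `lambda_mix` are placeholders.

    import torch
    import torch.nn.functional as F

    def augment(x):
        # Placeholder in-manifold perturbation; the paper uses stronger image augmentations.
        return x + 0.01 * torch.randn_like(x)

    def semixup_losses(model, x_lab, y_lab, x_unlab, alpha=1.0):
        # Supervised term on the (scarce) labeled radiographs.
        sup = F.cross_entropy(model(x_lab), y_lab)

        # In-manifold consistency: predictions should be stable under augmentation.
        p_clean = F.softmax(model(x_unlab), dim=1)
        p_aug = F.softmax(model(augment(x_unlab)), dim=1)
        cons_in = F.mse_loss(p_aug, p_clean.detach())

        # Interpolation (out-of-manifold) consistency: the prediction for a Mixup sample
        # should match the same convex combination of the two unmixed predictions.
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        perm = torch.randperm(x_unlab.size(0))
        x_mix = lam * x_unlab + (1 - lam) * x_unlab[perm]
        p_mix = F.softmax(model(x_mix), dim=1)
        cons_mix = F.mse_loss(p_mix, (lam * p_clean + (1 - lam) * p_clean[perm]).detach())

        return sup, cons_in, cons_mix

    # The total objective would be a weighted sum, e.g.
    #   loss = sup + lambda_in * cons_in + lambda_mix * cons_mix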
Related papers
- An AI System for Continuous Knee Osteoarthritis Severity Grading Using Self-Supervised Anomaly Detection with Limited Data [0.30723404270319693]
This work proposes a three-stage approach for automated continuous grading of knee OA.
It learns a robust representation of healthy knee X-rays and grades disease severity based on the distance to the centre of normality.
The proposed methodology outperforms existing techniques by margins of up to 24% in OA detection, and the disease severity scores correlate with the Kellgren-Lawrence grading system at the level of human expert performance.
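As a rough illustration of the distance-to-normality idea (not the paper's actual three-stage pipeline), a continuous severity score can be read off as the distance between an exam's embedding and the mean embedding of healthy exams; the encoder and dimensions below are stand-ins.

    import numpy as np

    def centre_of_normality(healthy_embeddings: np.ndarray) -> np.ndarray:
        # Mean embedding of healthy exams defines the "normal" centre.
        return healthy_embeddings.mean(axis=0)

    def severity_score(embedding: np.ndarray, centre: np.ndarray) -> float:
        # Larger distance from the centre = more abnormal = higher continuous grade.
        return float(np.linalg.norm(embedding - centre))

    # Example with random 128-d embeddings standing in for a trained encoder's output.
    rng = np.random.default_rng(0)
    healthy = rng.normal(size=(100, 128))
    centre = centre_of_normality(healthy)
    print(severity_score(rng.normal(size=128), centre))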
arXiv Detail & Related papers (2024-07-16T08:37:33Z)
- Robust Semi-Supervised Learning for Histopathology Images through Self-Supervision Guided Out-of-Distribution Scoring [1.8558180119033003]
We propose a novel pipeline for addressing open-set semi-supervised learning challenges in digital histology images.
Our pipeline efficiently estimates an OOD score for each unlabelled data point based on self-supervised learning.
Our framework is compatible with any semi-SL framework, and we base our experiments on the popular Mixmatch semi-SL framework.
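A hedged sketch of how a per-sample OOD score can gate unlabelled data before it reaches a semi-SL learner; the k-nearest-neighbour scoring rule and the quantile cut-off below are stand-ins, not the paper's self-supervision-based score.

    import numpy as np

    def ood_scores(unlab_feats: np.ndarray, ref_feats: np.ndarray, k: int = 5) -> np.ndarray:
        # Stand-in score: mean distance to the k nearest labelled (in-distribution) features.
        d = np.linalg.norm(unlab_feats[:, None, :] - ref_feats[None, :, :], axis=-1)
        return np.sort(d, axis=1)[:, :k].mean(axis=1)

    def filter_in_distribution(unlab_feats, ref_feats, quantile=0.8):
        scores = ood_scores(unlab_feats, ref_feats)
        keep = scores <= np.quantile(scores, quantile)  # discard the highest-score (likely OOD) tail
        return unlab_feats[keep]

    # The kept subset would then be fed to any semi-SL method (e.g. a MixMatch-style learner).
    rng = np.random.default_rng(0)
    kept = filter_in_distribution(rng.normal(size=(200, 32)), rng.normal(size=(50, 32)))
    print(kept.shape)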
arXiv Detail & Related papers (2023-03-17T12:38:28Z)
- Deep Reinforcement Learning for Cost-Effective Medical Diagnosis [41.10546022107126]
We use reinforcement learning to find a dynamic policy that selects lab test panels sequentially based on previous observations.
We propose a Semi-Model-based Deep Diagnosis Policy Optimization (SM-DDPO) framework that is compatible with end-to-end training and online learning.
SM-DDPO is tested on diverse clinical tasks: ferritin abnormality detection, sepsis mortality prediction, and acute kidney injury diagnosis.
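To make the sequential structure concrete, here is a toy stand-in for the decision problem (not SM-DDPO itself): the state is the set of panels observed so far, and a policy either orders another panel or stops to diagnose, trading information against test cost. Panel names and costs are hypothetical.

    import random

    PANELS = {"cbc": 1.0, "metabolic": 2.0, "ferritin": 3.0}  # hypothetical test costs

    def rollout(policy, max_steps=3):
        observed, total_cost = {}, 0.0
        for _ in range(max_steps):
            action = policy(observed)
            if action == "diagnose":
                break
            observed[action] = random.random()   # simulated test result
            total_cost += PANELS[action]
        return observed, total_cost

    # A learned policy would map `observed` to an action; here a trivial stand-in:
    print(rollout(lambda obs: "cbc" if "cbc" not in obs else "diagnose"))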
arXiv Detail & Related papers (2023-02-20T19:47:25Z)
- Learning to diagnose cirrhosis from radiological and histological labels with joint self and weakly-supervised pretraining strategies [62.840338941861134]
We propose to leverage transfer learning from large datasets annotated by radiologists, to predict the histological score available on a small annex dataset.
We compare different pretraining methods, namely weakly-supervised and self-supervised ones, to improve the prediction of cirrhosis.
This method outperforms the baseline classification of the METAVIR score, reaching an AUC of 0.84 and a balanced accuracy of 0.75.
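A minimal sketch of the two-stage transfer idea, assuming a shared encoder pretrained with plentiful radiological (weak) labels and then reused for the scarce histological METAVIR labels; the tiny network and tensor shapes are placeholders, not the paper's architecture.

    import torch
    import torch.nn as nn

    # Shared encoder pretrained on the large radiologically-labelled set (stage 1),
    # then reused for histological scoring on the small annotated set (stage 2).
    encoder = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten())
    radiology_head = nn.Linear(8, 2)    # stage-1 weak labels (e.g. cirrhosis yes/no)
    histology_head = nn.Linear(8, 5)    # stage-2 METAVIR score F0-F4

    x = torch.randn(4, 1, 64, 64)       # toy batch of liver image patches
    stage1_logits = radiology_head(encoder(x))
    for p in encoder.parameters():      # freeze (or lightly tune) the encoder for stage 2
        p.requires_grad = False
    stage2_logits = histology_head(encoder(x))
    print(stage1_logits.shape, stage2_logits.shape)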
arXiv Detail & Related papers (2023-02-16T17:06:23Z)
- An interpretable machine learning system for colorectal cancer diagnosis from pathology slides [2.7968867060319735]
This study is conducted with one of the largest colorectal WSI datasets, comprising approximately 10,500 WSIs.
Our proposed method predicts a class for each patch-based tile based on the severity of the dysplasia.
It is trained with an interpretable mixed-supervision scheme to leverage the domain knowledge introduced by pathologists.
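A simplified illustration of patch-based grading on a whole-slide image (WSI): tile the slide, grade each tile, and aggregate to a slide-level label. The max-aggregation rule and the random grader below are assumptions for the sketch, not the paper's mixed-supervision scheme.

    import numpy as np

    def tile(wsi: np.ndarray, size: int = 512):
        h, w = wsi.shape[:2]
        for i in range(0, h - size + 1, size):
            for j in range(0, w - size + 1, size):
                yield wsi[i:i + size, j:j + size]

    def grade_slide(wsi, tile_grader):
        # Dysplasia severity of the slide = maximum severity over its tiles.
        return max(tile_grader(t) for t in tile(wsi))

    # Example with a random "slide" and a stand-in grader returning a severity in {0, 1, 2}.
    rng = np.random.default_rng(1)
    print(grade_slide(rng.random((2048, 2048)), lambda t: int(t.mean() * 3) % 3))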
arXiv Detail & Related papers (2023-01-06T17:10:32Z)
- Hierarchical Semi-Supervised Contrastive Learning for Contamination-Resistant Anomaly Detection [81.07346419422605]
Anomaly detection aims at identifying deviant samples from the normal data distribution.
Contrastive learning has provided a successful way to learn sample representations that enable effective discrimination of anomalies.
We propose a novel hierarchical semi-supervised contrastive learning framework for contamination-resistant anomaly detection.
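For orientation, a plain (non-hierarchical) supervised contrastive loss is sketched below; the paper's hierarchical, contamination-resistant formulation builds on this kind of objective but is considerably more involved.

    import torch
    import torch.nn.functional as F

    def sup_con_loss(feats: torch.Tensor, labels: torch.Tensor, tau: float = 0.1):
        z = F.normalize(feats, dim=1)
        sim = z @ z.t() / tau                                # pairwise similarities
        mask = labels[:, None].eq(labels[None, :]).float()   # positives share a label
        mask.fill_diagonal_(0)
        logits = sim - 1e9 * torch.eye(len(z))               # exclude self-pairs
        log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
        pos_count = mask.sum(1).clamp(min=1)
        return -(mask * log_prob).sum(1).div(pos_count).mean()

    print(sup_con_loss(torch.randn(8, 16), torch.randint(0, 2, (8,))))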
arXiv Detail & Related papers (2022-07-24T18:49:26Z)
- ADT-SSL: Adaptive Dual-Threshold for Semi-Supervised Learning [68.53717108812297]
Semi-Supervised Learning (SSL) has advanced classification tasks by using both labeled and unlabeled data to train a model jointly.
This paper proposes an Adaptive Dual-Threshold method for Semi-Supervised Learning (ADT-SSL).
Experimental results show that the proposed ADT-SSL achieves state-of-the-art classification accuracy.
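The dual-threshold idea can be sketched as below: high-confidence predictions become hard pseudo-labels, while mid-confidence ones are kept for a softer treatment. The fixed thresholds here are placeholders; ADT-SSL adapts them during training.

    import torch
    import torch.nn.functional as F

    def select_pseudo_labels(logits, high_thr=0.95, low_thr=0.6):
        probs = F.softmax(logits, dim=1)
        conf, pseudo = probs.max(dim=1)
        confident = conf >= high_thr                  # treated as hard pseudo-labels
        uncertain = (conf >= low_thr) & ~confident    # could receive a softer / weighted loss
        return pseudo[confident], confident, uncertain

    logits = torch.randn(16, 5)
    labels, confident_mask, uncertain_mask = select_pseudo_labels(logits)
    print(confident_mask.sum().item(), uncertain_mask.sum().item())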
arXiv Detail & Related papers (2022-05-21T11:52:08Z)
- Building Brains: Subvolume Recombination for Data Augmentation in Large Vessel Occlusion Detection [56.67577446132946]
A large training data set is required for a standard deep learning-based model to learn this hemisphere-comparison strategy from data.
We propose an augmentation method that generates artificial training samples by recombining vessel tree segmentations of the hemispheres from different patients.
In line with the augmentation scheme, we use a 3D-DenseNet fed with task-specific input, fostering a side-by-side comparison between the hemispheres.
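A toy numpy sketch of the recombination step: swap hemispheres between two patients along the sagittal axis to create an artificial training volume. Real use would operate on registered vessel-tree segmentations rather than raw arrays.

    import numpy as np

    def recombine_hemispheres(vol_a: np.ndarray, vol_b: np.ndarray) -> np.ndarray:
        # Left hemisphere from patient A, right hemisphere from patient B,
        # joined along the last (sagittal) axis to form a new training volume.
        mid = vol_a.shape[-1] // 2
        return np.concatenate([vol_a[..., :mid], vol_b[..., mid:]], axis=-1)

    a = np.zeros((64, 64, 64))
    b = np.ones((64, 64, 64))
    print(recombine_hemispheres(a, b).shape)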
arXiv Detail & Related papers (2022-05-05T10:31:57Z)
- Assessment of Treatment Effect Estimators for Heavy-Tailed Data [70.72363097550483]
A central obstacle in the objective assessment of treatment effect (TE) estimators in randomized control trials (RCTs) is the lack of ground truth (or validation set) to test their performance.
We provide a novel cross-validation-like methodology to address this challenge.
We evaluate our methodology across 709 RCTs implemented in the Amazon supply chain.
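A hedged illustration of the evaluation idea (not the paper's exact protocol): because assignment in an RCT is randomized, a held-out split's difference-in-means gives an unbiased, if noisy, reference against which a candidate estimator fit on the other split can be scored; the heavy-tailed outcome model below is synthetic.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000
    treat = rng.integers(0, 2, n)                    # randomized assignment
    outcome = 1.5 * treat + rng.standard_cauchy(n)   # heavy-tailed outcomes, true TE = 1.5

    half = n // 2
    est_split, eval_split = slice(0, half), slice(half, n)

    def diff_in_means(t, y):
        return y[t == 1].mean() - y[t == 0].mean()

    # A robust candidate estimator fit on one split...
    estimate = (np.median(outcome[est_split][treat[est_split] == 1])
                - np.median(outcome[est_split][treat[est_split] == 0]))
    # ...scored against the noisy unbiased proxy from the held-out split.
    reference = diff_in_means(treat[eval_split], outcome[eval_split])
    print(f"candidate={estimate:.2f}, held-out proxy={reference:.2f}")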
arXiv Detail & Related papers (2021-12-14T17:53:01Z)
- Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity [77.78624443410216]
We propose improved FDAM algorithms for heterogeneous chest X-ray data.
A key result is that the communication complexity of the proposed algorithm is independent of the number of machines and also independent of the accuracy level.
Experiments have demonstrated the effectiveness of our FDAM algorithm on benchmark datasets and on medical chest X-ray images from different organizations.
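As a rough illustration of periodic model averaging across clients with a fixed number of communication rounds (FedAvg-style averaging on a least-squares stand-in objective, not the AUC surrogate that FDAM actually optimizes):

    import numpy as np

    def local_sgd(w, data, lr=0.1, steps=20):
        x, y = data
        for _ in range(steps):                   # several local steps per communication round
            grad = x.T @ (x @ w - y) / len(y)    # least-squares stand-in objective
            w = w - lr * grad
        return w

    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
    w = np.zeros(3)
    for _ in range(10):                          # fixed number of communication rounds
        w = np.mean([local_sgd(w.copy(), d) for d in clients], axis=0)
    print(w)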
arXiv Detail & Related papers (2021-02-09T04:05:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.