SSLM: Self-Supervised Learning for Medical Diagnosis from MR Video
- URL: http://arxiv.org/abs/2104.10481v2
- Date: Thu, 22 Apr 2021 05:05:22 GMT
- Title: SSLM: Self-Supervised Learning for Medical Diagnosis from MR Video
- Authors: Siladittya Manna, Saumik Bhattacharya, Umapada Pal
- Abstract summary: In this paper, we propose a self-supervised learning approach to learn the spatial anatomical representations from magnetic resonance (MR) video clips.
The proposed pretext model learns meaningful spatial context-invariant representations.
Different experiments show that the features learnt by the pretext model provide explainable performance in the downstream task.
- Score: 19.5917119072985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In medical image analysis, the cost of acquiring high-quality data and their
annotation by experts is a barrier in many medical applications. Most of the
techniques used are based on supervised learning framework and need a large
amount of annotated data to achieve satisfactory performance. As an
alternative, in this paper, we propose a self-supervised learning approach to
learn the spatial anatomical representations from the frames of magnetic
resonance (MR) video clips for the diagnosis of knee medical conditions. The
pretext model learns meaningful spatial context-invariant representations. The
downstream task in our paper is class-imbalanced multi-label classification.
Different experiments show that the features learnt by the pretext model
provide explainable performance in the downstream task. Moreover, the results
show that the proposed pretext model learns representations of minority classes
efficiently and reliably without applying any strategy to counter the imbalance
in the dataset. To the best of our knowledge, this is the first work to show
the effectiveness and reliability of self-supervised learning algorithms in
class-imbalanced multi-label classification tasks on MR video.
The code for evaluation of the proposed work is available at
https://github.com/sadimanna/sslm
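The abstract does not spell out the pretext objective, so as an illustration only, here is a minimal NumPy sketch of a generic contrastive (NT-Xent) loss of the kind commonly used to learn context-invariant representations from two augmented views of the same frames. The function name, temperature value, and embedding shapes are hypothetical assumptions, not taken from the paper or its repository.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Normalized temperature-scaled cross-entropy (NT-Xent) loss.

    z1, z2: (N, D) arrays of embeddings for two augmented views of the
    same N frames; row i of z1 and row i of z2 form a positive pair.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    n = z1.shape[0]
    # Mask self-similarity so a sample cannot be its own positive.
    np.fill_diagonal(sim, -np.inf)
    # Positive index for each row: i pairs with i + n (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy over each row, with the positive as the target.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

With identical views the positives are maximally similar and the loss is low; shuffling one view's rows mismatches the pairs and raises it, which is the behaviour a pretext model of this kind is trained to exploit.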
Related papers
- OPTiML: Dense Semantic Invariance Using Optimal Transport for Self-Supervised Medical Image Representation [6.4136876268620115]
Self-supervised learning (SSL) has emerged as a promising technique for medical image analysis due to its ability to learn without annotations.
We introduce a novel SSL framework OPTiML, employing optimal transport (OT), to capture the dense semantic invariance and fine-grained details.
Our empirical results reveal OPTiML's superiority over state-of-the-art methods across all evaluated tasks.
arXiv Detail & Related papers (2024-04-18T02:59:48Z)
- Overcoming Dimensional Collapse in Self-supervised Contrastive Learning for Medical Image Segmentation [2.6764957223405657]
We investigate the application of contrastive learning to the domain of medical image analysis.
Our findings reveal that MoCo v2, a state-of-the-art contrastive learning method, encounters dimensional collapse when applied to medical images.
To address this, we propose two key contributions: local feature learning and feature decorrelation.
arXiv Detail & Related papers (2024-02-22T15:02:13Z)
- M-VAAL: Multimodal Variational Adversarial Active Learning for Downstream Medical Image Analysis Tasks [16.85572580186212]
Active learning attempts to minimize the need for large annotated samples by actively sampling the most informative examples for annotation.
We propose a Multimodal Variational Adversarial Active Learning (M-VAAL) method that uses auxiliary information from additional modalities to enhance the active sampling.
arXiv Detail & Related papers (2023-06-21T16:40:37Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- CLINICAL: Targeted Active Learning for Imbalanced Medical Image Classification [12.576168993188315]
Suboptimal performance is often obtained on some classes due to the natural class imbalance inherent in medical data.
We propose Clinical, a framework that uses submodular mutual information functions as acquisition functions to mine critical data points from rare classes.
We show that Clinical outperforms the state-of-the-art active learning methods by acquiring a diverse set of data points that belong to the rare classes.
arXiv Detail & Related papers (2022-10-04T10:57:05Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which helps the discriminator converge to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Intelligent Masking: Deep Q-Learning for Context Encoding in Medical Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Self-Supervised Representation Learning for Detection of ACL Tear Injury in Knee MR Videos [18.54362818156725]
We propose a self-supervised learning approach to learn transferable features from MR video clips by enforcing the model to learn anatomical features.
To the best of our knowledge, none of the supervised learning models performing the injury classification task on MR video provide any explanation for their decisions.
arXiv Detail & Related papers (2020-07-15T15:35:47Z)
- LRTD: Long-Range Temporal Dependency based Active Learning for Surgical Workflow Recognition [67.86810761677403]
We propose a novel active learning method for cost-effective surgical video analysis.
Specifically, we propose a non-local recurrent convolutional network (NL-RCNet), which introduces non-local block to capture the long-range temporal dependency.
We validate our approach on a large surgical video dataset (Cholec80) by performing surgical workflow recognition task.
arXiv Detail & Related papers (2020-04-21T09:21:22Z)
- Confident Coreset for Active Learning in Medical Image Analysis [57.436224561482966]
We propose a novel active learning method, confident coreset, which considers both uncertainty and distribution for effectively selecting informative samples.
By comparative experiments on two medical image analysis tasks, we show that our method outperforms other active learning methods.
arXiv Detail & Related papers (2020-04-05T13:46:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.