Learning Better Contrastive View from Radiologist's Gaze
- URL: http://arxiv.org/abs/2305.08826v1
- Date: Mon, 15 May 2023 17:34:49 GMT
- Title: Learning Better Contrastive View from Radiologist's Gaze
- Authors: Sheng Wang, Zixu Zhuang, Xi Ouyang, Lichi Zhang, Zheren Li, Chong Ma,
Tianming Liu, Dinggang Shen, Qian Wang
- Abstract summary: We propose a novel augmentation method, i.e., FocusContrast, to learn from radiologists' gaze in diagnosis and generate contrastive views for medical images.
Specifically, we track the gaze movement of radiologists and model their visual attention when reading to diagnose X-ray images.
- As a plug-and-play module, FocusContrast consistently improves state-of-the-art contrastive learning methods of SimCLR, MoCo, and BYOL by 4.0~7.0% in classification accuracy on a knee X-ray dataset.
- Score: 45.55702035003462
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent self-supervised contrastive learning methods greatly benefit from the
Siamese structure that aims to minimize distances between positive pairs.
These methods usually apply random data augmentation to input images, expecting
the augmented views of the same images to be similar and positively paired.
However, random augmentation may overlook image semantic information and
degrade the quality of augmented views in contrastive learning. This issue
becomes more challenging in medical images, since disease-related abnormalities
can be tiny and easily corrupted (e.g., cropped out) under the current scheme of
random augmentation. In this work, we first demonstrate
that, for widely-used X-ray images, the conventional augmentation prevalent in
contrastive pre-training can affect the performance of the downstream diagnosis
or classification tasks. Then, we propose a novel augmentation method, i.e.,
FocusContrast, to learn from radiologists' gaze in diagnosis and generate
contrastive views for medical images with guidance from radiologists' visual
attention. Specifically, we track the gaze movement of radiologists and model
their visual attention when reading to diagnose X-ray images. The learned model
can predict visual attention of the radiologists given a new input image, and
further guide an attention-aware augmentation that avoids neglecting the
disease-related abnormalities. As a plug-and-play and framework-agnostic
module, FocusContrast consistently improves state-of-the-art contrastive
learning methods of SimCLR, MoCo, and BYOL by 4.0~7.0% in classification
accuracy on a knee X-ray dataset.
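The attention-aware augmentation described above can be sketched as a rejection-sampling crop that keeps views covering most of a predicted saliency map. The function name, coverage threshold, and NumPy implementation below are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def attention_aware_crop(image, attention, crop_size, max_tries=20,
                         min_coverage=0.8, rng=None):
    """Random crop that rejects views losing too much of the salient region.

    `attention` is a non-negative saliency map (e.g., predicted from
    radiologists' gaze) with the same spatial shape as `image`.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    ch, cw = crop_size
    total = attention.sum() + 1e-8
    best = None
    for _ in range(max_tries):
        top = rng.integers(0, h - ch + 1)
        left = rng.integers(0, w - cw + 1)
        # Fraction of the total attention mass kept by this candidate crop.
        coverage = attention[top:top + ch, left:left + cw].sum() / total
        if best is None or coverage > best[0]:
            best = (coverage, top, left)
        if coverage >= min_coverage:
            break
    # Fall back to the best candidate seen if no crop met the threshold.
    _, top, left = best
    return image[top:top + ch, left:left + cw]
```

With `min_coverage` near 1.0 the sampled views almost never discard the suspected abnormality, which is the property the augmentation needs for reliable positive pairs.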
Related papers
- View it like a radiologist: Shifted windows for deep learning
augmentation of CT images [11.902593645631034]
We propose a novel preprocessing and intensity augmentation scheme inspired by how radiologists leverage multiple viewing windows when evaluating CT images.
Our proposed method, window shifting, randomly places the viewing windows around the region of interest during training.
This approach improves liver lesion segmentation performance and robustness on images with poorly timed contrast agent.
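A minimal sketch of the window-shifting idea, assuming HU-valued CT input and an illustrative soft-tissue window (center 60, width 400); the paper's actual scheme places windows around the region of interest, so these parameters are assumptions:

```python
import numpy as np

def shifted_window(ct_hu, center=60.0, width=400.0, max_shift=100.0, rng=None):
    """Apply a randomly shifted intensity window to a CT image in Hounsfield units."""
    rng = rng or np.random.default_rng()
    # Jitter the window center, mimicking a radiologist adjusting the display.
    c = center + rng.uniform(-max_shift, max_shift)
    lo, hi = c - width / 2.0, c + width / 2.0
    windowed = np.clip(ct_hu, lo, hi)
    return (windowed - lo) / (hi - lo)  # rescale to [0, 1]
```

Because each training pass sees a differently placed window, the model is pushed to be robust to intensity variations such as poorly timed contrast agent.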
arXiv Detail & Related papers (2023-11-25T10:28:08Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image
classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA model of Generative Adversarial Networks is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- Exploring Image Augmentations for Siamese Representation Learning with
Chest X-Rays [0.8808021343665321]
We train and evaluate Siamese Networks for abnormality detection on chest X-Rays.
We identify a set of augmentations that yield robust representations that generalize well to both out-of-distribution data and diseases.
arXiv Detail & Related papers (2023-01-30T03:42:02Z)
- Artificial Intelligence for Automatic Detection and Classification
Disease on the X-Ray Images [0.0]
This work presents rapid detection of lung diseases using the efficient, pre-trained RepVGG deep learning model.
We apply Artificial Intelligence to automatically detect and highlight affected areas of patients' lungs.
arXiv Detail & Related papers (2022-11-14T03:51:12Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Cross-Modal Contrastive Learning for Abnormality Classification and
Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
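The cross-modal positive pairing can be sketched as an InfoNCE loss in which each image embedding's positive is the radiomic embedding from the same X-ray, with the other samples in the batch acting as negatives. The NumPy code below is a simplified assumption of that setup, not the authors' implementation:

```python
import numpy as np

def cross_modal_info_nce(img_emb, rad_emb, temperature=0.1):
    """InfoNCE loss: row i of `img_emb` is positively paired with row i of
    `rad_emb` (same X-ray); all other rows act as in-batch negatives."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    rad = rad_emb / np.linalg.norm(rad_emb, axis=1, keepdims=True)
    logits = img @ rad.T / temperature            # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # diagonal = matched pairs
```

When image and radiomic embeddings of the same study align, the diagonal dominates each row's softmax and the loss approaches zero; mismatched pairings drive the loss up.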
arXiv Detail & Related papers (2021-04-11T09:16:29Z)
- Variational Knowledge Distillation for Disease Classification in Chest
X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Cross Chest Graph for Disease Diagnosis with Structural Relational
Reasoning [2.7148274921314615]
Locating lesions is important in the computer-aided diagnosis of X-ray images.
General weakly-supervised methods have failed to consider the characteristics of X-ray images.
We propose the Cross-chest Graph (CCG), which improves the performance of automatic lesion detection.
arXiv Detail & Related papers (2021-01-22T08:24:04Z)
- Dynamic Graph Correlation Learning for Disease Diagnosis with Incomplete
Labels [66.57101219176275]
Disease diagnosis on chest X-ray images is a challenging multi-label classification task.
We propose a Disease Diagnosis Graph Convolutional Network (DD-GCN) that presents a novel view of investigating the inter-dependency among different diseases.
Our method is the first to build a graph over the feature maps with a dynamic adjacency matrix for correlation learning.
arXiv Detail & Related papers (2020-02-26T17:10:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.