Soft-Label Anonymous Gastric X-ray Image Distillation
- URL: http://arxiv.org/abs/2104.02857v2
- Date: Thu, 21 Mar 2024 03:21:34 GMT
- Title: Soft-Label Anonymous Gastric X-ray Image Distillation
- Authors: Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama
- Abstract summary: This paper presents a soft-label anonymous gastric X-ray image distillation method based on a gradient descent approach.
Experimental results show that the proposed method can not only effectively compress the medical dataset but also anonymize medical images to protect the patient's private information.
- Score: 49.24576562557866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a soft-label anonymous gastric X-ray image distillation method based on a gradient descent approach. The sharing of medical data is in demand for constructing high-accuracy computer-aided diagnosis (CAD) systems. However, the large size of medical datasets and the need for privacy protection remain problems in medical data sharing, which hinders research on CAD systems. The idea of our distillation method is to extract the valid information from the medical dataset and generate a tiny distilled dataset that has a different data distribution. Different from model distillation, our method aims to find the optimal distilled images, distilled labels, and an optimized learning rate. Experimental results show that the proposed method can not only effectively compress the medical dataset but also anonymize medical images to protect the patient's private information. The proposed approach can improve the efficiency and security of medical data sharing.
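The optimization loop described in the abstract (learning distilled images, soft labels, and a learning rate by differentiating through a short inner training run) can be sketched roughly as follows. This is a minimal illustration, assuming a toy linear classifier, a single inner SGD step, and made-up sizes for 64x64 grayscale patches; none of these choices are taken from the paper.

```python
# Minimal sketch of soft-label dataset distillation via gradient descent.
# The distilled images, soft labels, and (log) learning rate are the learnable
# quantities; the inner model is a toy linear classifier re-initialized every
# outer step. All shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

IMG_DIM, NUM_CLASSES, NUM_DISTILLED = 64 * 64, 2, 10

distilled_x = torch.randn(NUM_DISTILLED, IMG_DIM, requires_grad=True)      # distilled images
distilled_y = torch.randn(NUM_DISTILLED, NUM_CLASSES, requires_grad=True)  # soft labels (logits)
log_lr = torch.tensor(-2.0, requires_grad=True)                            # learnable learning rate

outer_opt = torch.optim.Adam([distilled_x, distilled_y, log_lr], lr=1e-3)

def distill_step(real_x, real_y):
    """One outer update: train a fresh model on the distilled data for one
    inner step, then evaluate it on a batch of real labeled images."""
    w0 = (0.01 * torch.randn(IMG_DIM, NUM_CLASSES)).requires_grad_()

    # Inner step: fit the distilled images to their soft labels.
    inner_loss = F.kl_div(
        F.log_softmax(distilled_x @ w0, dim=1),
        F.softmax(distilled_y, dim=1),
        reduction="batchmean",
    )
    (grad_w,) = torch.autograd.grad(inner_loss, w0, create_graph=True)
    w1 = w0 - log_lr.exp() * grad_w  # differentiable SGD update

    # Outer step: the real-data loss of the updated model drives gradients
    # back into the distilled images, soft labels, and learning rate.
    outer_loss = F.cross_entropy(real_x @ w1, real_y)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
    return outer_loss.item()

# Hypothetical usage: real_x is a (B, 64*64) float batch of flattened X-ray
# patches and real_y a (B,) tensor of binary gastritis labels.
```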
Related papers
- Dataset Distillation in Medical Imaging: A Feasibility Study [16.44272552893816]
Data sharing in the medical image analysis field has potential yet remains underappreciated.
One possible solution is to avoid transferring the entire dataset while still achieving similar model performance.
Recent progress in data distillation within computer science offers promising prospects for sharing medical data efficiently.
arXiv Detail & Related papers (2024-07-19T15:59:04Z)
- Integration of Self-Supervised BYOL in Semi-Supervised Medical Image Recognition [10.317372960942972]
We propose an innovative approach by integrating self-supervised learning into semi-supervised models to enhance medical image recognition.
Our approach optimally leverages unlabeled data, outperforming existing methods in terms of accuracy for medical image recognition.
arXiv Detail & Related papers (2024-04-16T09:12:16Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
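As a rough illustration of that kind of conditioning, the sketch below fuses CNN feature maps with embedded demographic tokens as the memory of a transformer decoder. The ResNet-18 backbone, dimensions, and token handling are assumptions made for illustration, not details taken from the paper.

```python
# Hedged sketch: CXR features from a CNN plus demographic embeddings feed a
# transformer decoder that generates report tokens. Positional encodings are
# omitted for brevity; all sizes here are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ReportGenerator(nn.Module):
    def __init__(self, vocab_size=10000, demo_vocab=64, d_model=256):
        super().__init__()
        backbone = resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # -> (B, 512, h, w)
        self.visual_proj = nn.Linear(512, d_model)
        self.demo_embed = nn.Embedding(demo_vocab, d_model)    # tokenized age/sex fields
        self.token_embed = nn.Embedding(vocab_size, d_model)   # report vocabulary
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, cxr, demo_tokens, report_tokens):
        # cxr: (B, 3, 224, 224); demo_tokens: (B, D); report_tokens: (B, T)
        feats = self.cnn(cxr).flatten(2).transpose(1, 2)        # (B, h*w, 512)
        memory = torch.cat(
            [self.visual_proj(feats), self.demo_embed(demo_tokens)], dim=1
        )                                                       # image + demographic context
        tgt = self.token_embed(report_tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        return self.out(self.decoder(tgt, memory, tgt_mask=mask))  # next-token logits
```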
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- EMIT-Diff: Enhancing Medical Image Segmentation via Text-Guided Diffusion Model [4.057796755073023]
We develop controllable diffusion models for medical image synthesis, called EMIT-Diff.
We leverage recent diffusion probabilistic models to generate realistic and diverse synthetic medical image data.
In our approach, we ensure that the synthesized samples adhere to medically relevant constraints.
arXiv Detail & Related papers (2023-10-19T16:18:02Z)
- Compressed Gastric Image Generation Based on Soft-Label Dataset Distillation for Medical Data Sharing [38.65823547986758]
The large size of medical datasets, the large memory footprint of saved deep convolutional neural network (DCNN) models, and patients' privacy protection are problems that can lead to inefficient medical data sharing.
This study proposes a novel soft-label dataset distillation method for medical data sharing.
arXiv Detail & Related papers (2022-09-29T08:52:04Z)
- Dataset Distillation for Medical Dataset Sharing [38.65823547986758]
Dataset distillation can synthesize a small dataset such that models trained on it achieve comparable performance with the original large dataset.
Experimental results on a COVID-19 chest X-ray image dataset show that our method can achieve high detection performance even using scarce anonymized chest X-ray images.
arXiv Detail & Related papers (2022-09-29T07:49:20Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
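A very rough sketch of how a patch-level confidence discriminator could be wired into semi-supervised segmentation is given below; the network shapes, loss, and weighting are assumptions for illustration and are not taken from the paper.

```python
# Hedged sketch of patch-confidence adversarial training for semi-supervised
# segmentation. The discriminator scores each spatial patch of a predicted
# segmentation map rather than the whole image; all sizes/weights are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchConfidenceDiscriminator(nn.Module):
    """Maps a (B, C, H, W) softmax segmentation map to (B, 1, H/8, W/8)
    patch-wise confidence logits (high = looks like a ground-truth mask)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),
        )

    def forward(self, seg_probs):
        return self.net(seg_probs)

def unsupervised_adv_loss(discriminator, segmentor, unlabeled_images, adv_weight=0.01):
    """Adversarial loss on unlabeled data: push the segmentor to produce
    predictions whose patches the discriminator scores as 'ground-truth-like'."""
    probs = F.softmax(segmentor(unlabeled_images), dim=1)
    patch_logits = discriminator(probs)
    real_like = torch.ones_like(patch_logits)
    return adv_weight * F.binary_cross_entropy_with_logits(patch_logits, real_like)
```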
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)