Single-Image-Based Deep Learning for Segmentation of Early Esophageal
Cancer Lesions
- URL: http://arxiv.org/abs/2306.05912v1
- Date: Fri, 9 Jun 2023 14:06:26 GMT
- Title: Single-Image-Based Deep Learning for Segmentation of Early Esophageal
Cancer Lesions
- Authors: Haipeng Li, Dingrui Liu, Yu Zeng, Shuaicheng Liu, Tao Gan, Nini Rao,
Jinlin Yang, Bing Zeng
- Abstract summary: We present a novel deep learning approach for segmenting EEC lesions.
It relies solely on a single image from one patient, forming the so-called "You-Only-Have-One" (YOHO) framework.
We evaluated YOHO on an EEC dataset we created and achieved a mean Dice score of 0.888.
- Score: 36.60419108411669
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate segmentation of lesions is crucial for diagnosis and treatment of
early esophageal cancer (EEC). However, no traditional or deep
learning-based method to date meets clinical requirements: the mean Dice
score - the most important metric in medical image analysis - rarely
exceeds 0.75. In this paper, we present a novel deep learning approach for
segmenting EEC lesions. Our approach is unique in that it relies solely on
a single image from one patient, forming the so-called
"You-Only-Have-One" (YOHO) framework. On one hand, this "one-image-one-network"
learning ensures complete patient privacy, as it uses no images from
other patients as training data. On the other hand, it avoids nearly all
generalization-related problems since each trained network is applied only to
the input image itself. In particular, we can push the training to
"over-fitting" as much as possible to increase the segmentation accuracy. Our
technical details include an interaction with clinical physicians to utilize
their expertise, a geometry-based rendering of a single lesion image to
generate the training set (the \emph{biggest} novelty), and an edge-enhanced
UNet. We evaluated YOHO on an EEC dataset we created and achieved a mean
Dice score of 0.888, a significant advance toward clinical applications.
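The abstract's core recipe - expand one annotated image into a training set via geometric transforms, then score segmentations with the Dice coefficient - can be sketched as follows. This is an illustrative approximation only: the function names are ours, and the paper's actual geometry-based lesion rendering and edge-enhanced UNet are considerably more elaborate than the rigid transforms shown here.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2|P ∩ T| / (|P| + |T|) for two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def augment_single_pair(image, mask, n_rotations=4, flips=(False, True)):
    """Expand one (image, mask) pair into a small training set by applying
    the same rigid transform (rot90 + horizontal flip) to image and mask."""
    samples = []
    for k in range(n_rotations):
        for flip in flips:
            img_t = np.rot90(image, k)
            msk_t = np.rot90(mask, k)
            if flip:
                img_t = np.fliplr(img_t)
                msk_t = np.fliplr(msk_t)
            samples.append((img_t, msk_t))
    return samples

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((64, 64))
    mask = np.zeros((64, 64), dtype=bool)
    mask[20:40, 25:45] = True                # a toy "lesion" region
    train_set = augment_single_pair(image, mask)
    print(len(train_set))                    # 8 transformed copies
    print(round(dice_score(mask, mask), 3))  # perfect overlap -> 1.0
```

In the YOHO setting such a synthesized set would be used to deliberately over-fit one network to one patient's image, sidestepping cross-patient generalization entirely.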
Related papers
- A Continual Learning-driven Model for Accurate and Generalizable Segmentation of Clinically Comprehensive and Fine-grained Whole-body Anatomies in CT [67.34586036959793]
There is no fully annotated CT dataset with all anatomies delineated for training.
We propose a novel continual learning-driven CT model that can segment complete anatomies.
Our single unified CT segmentation model, CL-Net, accurately segments a clinically comprehensive set of 235 fine-grained whole-body anatomies.
arXiv Detail & Related papers (2025-03-16T23:55:02Z)
- A Simple Framework Uniting Visual In-context Learning with Masked Image Modeling to Improve Ultrasound Segmentation [0.6223528900192875]
Visual in-context learning (ICL) is a new and exciting area of research in computer vision.
We propose a simple new visual ICL method, SimICL, which combines visual ICL image pairing with masked image modeling (MIM) designed for self-supervised learning.
SimICL achieved a remarkably high Dice coefficient (DC) of 0.96 and Jaccard index (IoU) of 0.92, surpassing state-of-the-art segmentation and visual ICL models.
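For the same prediction and ground truth, the Dice coefficient D and the Jaccard index J are related by J = D / (2 - D), so the two reported numbers can be cross-checked directly (the function name below is ours, for illustration):

```python
def dice_to_iou(dice):
    """Convert a Dice coefficient to the equivalent Jaccard index (IoU):
    J = D / (2 - D) for the same prediction/ground-truth pair."""
    return dice / (2.0 - dice)

# The reported Dice of 0.96 implies an IoU of about 0.923,
# consistent with the reported IoU of 0.92.
print(round(dice_to_iou(0.96), 2))  # -> 0.92
```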
arXiv Detail & Related papers (2024-02-22T05:34:22Z)
- FBA-Net: Foreground and Background Aware Contrastive Learning for Semi-Supervised Atrium Segmentation [10.11072886547561]
We propose a contrastive learning strategy of foreground and background representations for semi-supervised 3D medical image segmentation.
Our framework has the potential to advance the field of semi-supervised 3D medical image segmentation.
arXiv Detail & Related papers (2023-06-27T04:14:50Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- GraVIS: Grouping Augmented Views from Independent Sources for Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z)
- Self-Supervised-RCNN for Medical Image Segmentation with Limited Data Annotation [0.16490701092527607]
We propose an alternative deep learning training strategy based on self-supervised pretraining on unlabeled MRI scans.
Our pretraining approach first randomly applies different distortions to random areas of unlabeled images and then predicts the type of distortion and the resulting loss of information.
The effectiveness of the proposed method for segmentation tasks in different pre-training and fine-tuning scenarios is evaluated.
arXiv Detail & Related papers (2022-07-17T13:28:52Z)
- Efficient and Generic Interactive Segmentation Framework to Correct Mispredictions during Clinical Evaluation of Medical Images [32.00559434186769]
We suggest a novel conditional inference technique for deep neural networks (DNNs).
Unlike other methods, our approach can correct multiple structures simultaneously and add structures missed at initial segmentation.
Our method can be useful to clinicians for diagnosis and post-surgical follow-up with minimal intervention from the medical expert.
arXiv Detail & Related papers (2021-08-06T08:06:18Z)
- Self-Supervised Learning from Unlabeled Fundus Photographs Improves Segmentation of the Retina [4.815051667870375]
Fundus photography is the primary method for retinal imaging and essential for diabetic retinopathy prevention.
Current segmentation methods are not robust towards the diversity in imaging conditions and pathologies typical for real-world clinical applications.
We utilize contrastive self-supervised learning to exploit the large variety of unlabeled fundus images in the publicly available EyePACS dataset.
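Contrastive self-supervised methods of this kind typically optimize an InfoNCE-style objective that pulls two augmented views of the same unlabeled image together while pushing other images apart. Below is a minimal numpy sketch under that assumption; the function name, temperature value, and details are illustrative, not this paper's exact loss.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Symmetric InfoNCE over two batches of embeddings, where z1[i] and
    z2[i] are views of the same image (the positive pair)."""
    # L2-normalize so that the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature   # (N, N) similarity matrix
    labels = np.arange(len(z1))
    # cross-entropy with the matching index as the positive class
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()
```

When the paired views agree (high diagonal similarity) the loss is near zero; mismatched pairs drive it up, which is what lets unlabeled fundus images shape the representation before fine-tuning on segmentation.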
arXiv Detail & Related papers (2021-08-05T18:02:56Z)
- Malignancy Prediction and Lesion Identification from Clinical Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
Our method first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates their likelihood of malignancy, and, through aggregation, generates an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-04-02T20:52:05Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Co-Heterogeneous and Adaptive Segmentation from Multi-Source and Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion Segmentation [48.504790189796836]
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance based semi-supervision, mask based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sorensen coefficients by ranges of $4.2\% \sim 9.4\%$.
arXiv Detail & Related papers (2020-05-27T06:58:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.