LightVessel: Exploring Lightweight Coronary Artery Vessel Segmentation via Similarity Knowledge Distillation
- URL: http://arxiv.org/abs/2211.00899v1
- Date: Wed, 2 Nov 2022 05:49:19 GMT
- Title: LightVessel: Exploring Lightweight Coronary Artery Vessel Segmentation via Similarity Knowledge Distillation
- Authors: Hao Dang, Yuekai Zhang, Xingqun Qi, Wanting Zhou, Muyi Sun
- Abstract summary: We propose LightVessel, a Similarity Knowledge Distillation framework for lightweight coronary artery vessel segmentation.
A Feature-wise Similarity Distillation (FSD) module handles semantic-shift modeling, while an Adversarial Similarity Distillation (ASD) module encourages the student model to learn more pixel-wise semantic information.
Experiments conducted on the Clinical Coronary Artery Vessel Dataset demonstrate that LightVessel outperforms various knowledge distillation counterparts.
- Score: 6.544757635738911
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, deep convolutional neural networks (DCNNs) have shown great
promise in coronary artery vessel segmentation. However, it is difficult to
deploy complicated models in clinical scenarios since high-performance
approaches have excessive parameters and high computation costs. To tackle this
problem, we propose \textbf{LightVessel}, a Similarity Knowledge Distillation
Framework, for lightweight coronary artery vessel segmentation. Primarily, we
propose a Feature-wise Similarity Distillation (FSD) module for semantic-shift
modeling. Specifically, we calculate the feature similarity between the
symmetric layers from the encoder and decoder. Then the similarity is
transferred as knowledge from a cumbersome teacher network to an untrained
lightweight student network. Meanwhile, to encourage the student model to
learn more pixel-wise semantic information, we introduce the Adversarial
Similarity Distillation (ASD) module. Concretely, the ASD module aims to
construct the spatial adversarial correlation between the annotation and
prediction from the teacher and student models, respectively. Through the ASD
module, the student model obtains fine-grained, subtle edge segmentation
results for the coronary artery vessels. Extensive experiments conducted on the
Clinical Coronary Artery Vessel Dataset demonstrate that LightVessel outperforms various
knowledge distillation counterparts.
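The abstract describes the FSD module only at a high level. As a rough illustration, here is a minimal PyTorch-style sketch of a feature-wise similarity distillation loss of this kind, assuming per-pixel cosine similarity between symmetric encoder/decoder features and an L2 match between the teacher's and student's similarity maps; all function and variable names are illustrative and not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def layer_similarity(enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
    """Per-pixel cosine similarity between a symmetric encoder/decoder feature pair.

    Both tensors are assumed to be (B, C, H, W) with matching spatial size,
    as in a U-Net where mirrored stages share resolution.
    """
    enc = F.normalize(enc_feat.flatten(2), dim=1)  # (B, C, H*W), unit channel vectors
    dec = F.normalize(dec_feat.flatten(2), dim=1)
    return (enc * dec).sum(dim=1)                  # (B, H*W) similarity map

def fsd_loss(teacher_pairs, student_pairs) -> torch.Tensor:
    """L2 distance between teacher and student similarity maps, averaged over
    the symmetric layer pairs; each item in `*_pairs` is an (encoder, decoder)
    feature tuple, and corresponding teacher/student maps are assumed to share
    spatial resolution (resize beforehand if they do not)."""
    losses = []
    for (t_enc, t_dec), (s_enc, s_dec) in zip(teacher_pairs, student_pairs):
        sim_teacher = layer_similarity(t_enc, t_dec).detach()  # teacher is frozen
        sim_student = layer_similarity(s_enc, s_dec)
        losses.append(F.mse_loss(sim_student, sim_teacher))
    return torch.stack(losses).mean()
```

In training, such a term would typically be added to the student's ordinary segmentation loss with a weighting factor.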
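The ASD module is likewise described only in outline. The sketch below shows one conventional way an adversarial term for segmentation distillation could be set up, assuming a small discriminator that scores vessel masks, with the ground-truth annotation treated as "real" and the student prediction as "fake"; the paper's exact formulation (including how the teacher prediction enters the adversarial correlation) may differ, and all names here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskDiscriminator(nn.Module):
    """Tiny patch-style discriminator over single-channel vessel probability maps."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, mask: torch.Tensor) -> torch.Tensor:
        # (B, 1, H, W) -> (B, 1, H/4, W/4) patch logits
        return self.net(mask)

def asd_step(disc, student_pred, annotation, opt_d):
    """One adversarial step: update the discriminator, then return the
    adversarial loss term used to update the student."""
    # 1) Discriminator update: annotation = real, detached student map = fake.
    opt_d.zero_grad()
    real_logits = disc(annotation)
    fake_logits = disc(student_pred.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    d_loss.backward()
    opt_d.step()
    # 2) Student (generator) term: push the prediction toward being scored as real.
    gen_logits = disc(student_pred)
    return F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
```

The returned term would be combined with the segmentation and FSD losses when updating the student, while `opt_d` updates only the discriminator.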
Related papers
- A label-free and data-free training strategy for vasculature segmentation in serial sectioning OCT data [4.746694624239095]
Serial sectioning Optical Coherence Tomography (sOCT) is becoming increasingly popular to study post-mortem neurovasculature.
Here, we leverage synthetic datasets of vessels to train a deep learning segmentation model.
Both approaches yield similar Dice scores, although with very different false positive and false negative rates.
arXiv Detail & Related papers (2024-05-22T15:39:31Z)
- C-DARL: Contrastive diffusion adversarial representation learning for label-free blood vessel segmentation [39.79157116429435]
This paper presents a self-supervised vessel segmentation method, dubbed the contrastive diffusion adversarial representation learning (C-DARL) model.
Our model is composed of a diffusion module and a generation module that learns the distribution of multi-domain blood vessel data.
To validate the efficacy, C-DARL is trained using various vessel datasets, including coronary angiograms, abdominal digital subtraction angiograms, and retinal imaging.
arXiv Detail & Related papers (2023-07-31T23:09:01Z)
- Partial Vessels Annotation-based Coronary Artery Segmentation with Self-training and Prototype Learning [17.897934341782843]
We propose partial vessels annotation (PVA), motivated by the challenges of coronary artery segmentation and the characteristics of clinical diagnosis.
Our proposed framework learns the local features of vessels to propagate the knowledge to unlabeled regions, and corrects the errors introduced in the propagation process.
Experiments on clinical data reveal that our proposed framework outperforms the competing methods under PVA (with only 24.29% of vessels annotated).
arXiv Detail & Related papers (2023-07-10T10:42:48Z)
- EmbedDistill: A Geometric Knowledge Distillation for Information Retrieval [83.79667141681418]
Large neural models (such as Transformers) achieve state-of-the-art performance for information retrieval (IR).
We propose a novel distillation approach that leverages the relative geometry among queries and documents learned by the large teacher model.
We show that our approach successfully distills from both dual-encoder (DE) and cross-encoder (CE) teacher models to 1/10th size asymmetric students that can retain 95-97% of the teacher performance.
arXiv Detail & Related papers (2023-01-27T22:04:37Z)
- Diffusion Adversarial Representation Learning for Self-supervised Vessel Segmentation [36.65094442100924]
Vessel segmentation in medical images is one of the important tasks in the diagnosis of vascular diseases and therapy planning.
We introduce a novel diffusion adversarial representation learning (DARL) model that leverages a denoising diffusion probabilistic model with adversarial learning.
Our method significantly outperforms existing unsupervised and self-supervised methods in vessel segmentation.
arXiv Detail & Related papers (2022-09-29T06:06:15Z)
- SSD-KD: A Self-supervised Diverse Knowledge Distillation Method for Lightweight Skin Lesion Classification Using Dermoscopic Images [62.60956024215873]
Skin cancer is one of the most common types of malignancy, affecting a large population and causing a heavy economic burden worldwide.
Most studies in skin cancer detection keep pursuing high prediction accuracies without considering the limitation of computing resources on portable devices.
This study specifically proposes a novel method, termed SSD-KD, that unifies diverse knowledge into a generic KD framework for skin disease classification.
arXiv Detail & Related papers (2022-03-22T06:54:29Z)
- Categorical Relation-Preserving Contrastive Knowledge Distillation for Medical Image Classification [75.27973258196934]
We propose a novel Categorical Relation-preserving Contrastive Knowledge Distillation (CRCKD) algorithm, which takes the commonly used mean-teacher model as the supervisor.
With this regularization, the feature distribution of the student model shows higher intra-class similarity and inter-class variance.
With the contribution of the CCD and CRP, our CRCKD algorithm can distill the relational knowledge more comprehensively.
arXiv Detail & Related papers (2021-07-07T13:56:38Z)
- Learning Tubule-Sensitive CNNs for Pulmonary Airway and Artery-Vein Segmentation in CT [45.93021999366973]
Training convolutional neural networks (CNNs) for segmentation of pulmonary airway, artery, and vein is challenging.
We present a CNN-based method for accurate airway and artery-vein segmentation in non-contrast computed tomography.
It enjoys superior sensitivity to tenuous peripheral bronchioles, arterioles, and venules.
arXiv Detail & Related papers (2020-12-10T15:56:08Z)
- Rethinking the Extraction and Interaction of Multi-Scale Features for Vessel Segmentation [53.187152856583396]
We propose a novel deep learning model called PC-Net to segment retinal vessels and major arteries in 2D fundus images and 3D computed tomography angiography (CTA) scans.
In PC-Net, the pyramid squeeze-and-excitation (PSE) module introduces spatial information to each convolutional block, boosting its ability to extract more effective multi-scale features.
arXiv Detail & Related papers (2020-10-09T08:22:54Z)
- Multi-Task Neural Networks with Spatial Activation for Retinal Vessel Segmentation and Artery/Vein Classification [49.64863177155927]
We propose a multi-task deep neural network with a spatial activation mechanism to segment the full retinal vessels, arteries, and veins simultaneously.
The proposed network achieves pixel-wise accuracy of 95.70% for vessel segmentation, and A/V classification accuracy of 94.50%, which is the state-of-the-art performance for both tasks.
arXiv Detail & Related papers (2020-07-18T05:46:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.