OCTAve: 2D en face Optical Coherence Tomography Angiography Vessel
Segmentation in Weakly-Supervised Learning with Locality Augmentation
- URL: http://arxiv.org/abs/2207.12238v1
- Date: Mon, 25 Jul 2022 14:40:56 GMT
- Title: OCTAve: 2D en face Optical Coherence Tomography Angiography Vessel
Segmentation in Weakly-Supervised Learning with Locality Augmentation
- Authors: Amrest Chinkamol and Vetit Kanjaras and Phattarapong Sawangjai and
Yitian Zhao and Thapanun Sudhawiyangkul and Chantana Chantrapornchai and
Cuntai Guan and Theerawit Wilaiprasitporn
- Abstract summary: We propose the application of a scribble-based weakly-supervised learning method to replace costly pixel-level annotation.
The proposed method, called OCTAve, combines weakly-supervised learning on scribble-annotated ground truth with an adversarial and a novel self-supervised deep supervision.
- Score: 14.322349196837209
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While there has been increasing research into deep learning techniques for
extracting vascular structure from 2D en face OCTA, such approaches require
annotating curvilinear structures like the retinal vasculature, a process that
is very costly and time consuming, and few works have addressed this
annotation problem.
In this work, we propose the application of a scribble-based
weakly-supervised learning method to replace costly pixel-level annotation.
The proposed method, called OCTAve, combines weakly-supervised learning on
scribble-annotated ground truth with an adversarial and a novel
self-supervised deep supervision. Our novel mechanism is designed to utilize
the discriminative outputs from the discrimination layers of a UNet-like
architecture, where the Kullback-Leibler divergence between the aggregated
discriminative outputs and the segmentation prediction is minimized during
training. As shown in our experiments, this combined method leads to better
localization of the vascular structure. We validate our proposed method on the
large public datasets ROSE and OCTA-500. The segmentation performance is
compared against both state-of-the-art fully-supervised and scribble-based
weakly-supervised approaches. The implementation of our work used in the
experiments is located at [LINK].
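The abstract describes a self-supervised deep-supervision term that minimizes the Kullback-Leibler divergence between the aggregated discriminative outputs of the discrimination layers and the segmentation prediction. The following PyTorch sketch is a hypothetical rendering of such a term under simplifying assumptions (binary vessel maps, bilinear upsampling to the output resolution, plain averaging as the aggregation); the function and argument names are illustrative and do not come from the released implementation.

import torch
import torch.nn.functional as F

def kl_deep_supervision_loss(disc_maps, seg_logits, eps=1e-8):
    # disc_maps: list of per-scale discriminator outputs, each of shape (B, 1, h_i, w_i)
    # seg_logits: segmentation logits from the UNet-like decoder, shape (B, 1, H, W)
    _, _, H, W = seg_logits.shape
    # Upsample every discriminative map to the segmentation resolution, then average
    # them into a single aggregate map (assumed aggregation strategy).
    upsampled = [F.interpolate(d, size=(H, W), mode="bilinear", align_corners=False)
                 for d in disc_maps]
    aggregate = torch.sigmoid(torch.stack(upsampled, dim=0).mean(dim=0))
    prediction = torch.sigmoid(seg_logits)
    # Pixel-wise KL(aggregate || prediction) over foreground/background probabilities.
    kl = aggregate * torch.log((aggregate + eps) / (prediction + eps)) \
       + (1 - aggregate) * torch.log((1 - aggregate + eps) / (1 - prediction + eps))
    return kl.mean()

In training, such a term would be combined with the scribble-supervised and adversarial objectives described in the abstract, so that the discriminator's view of vessel structure regularizes the segmentation output.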
Related papers
- Anti-Collapse Loss for Deep Metric Learning Based on Coding Rate Metric [99.19559537966538]
DML aims to learn a discriminative high-dimensional embedding space for downstream tasks like classification, clustering, and retrieval.
To maintain the structure of embedding space and avoid feature collapse, we propose a novel loss function called Anti-Collapse Loss.
Comprehensive experiments on benchmark datasets demonstrate that our proposed method outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2024-07-03T13:44:20Z) - 2D Feature Distillation for Weakly- and Semi-Supervised 3D Semantic
Segmentation [92.17700318483745]
We propose an image-guidance network (IGNet) which builds upon the idea of distilling high level feature information from a domain adapted synthetically trained 2D semantic segmentation network.
IGNet achieves state-of-the-art results for weakly-supervised LiDAR semantic segmentation on ScribbleKITTI, boasting up to 98% relative performance to fully supervised training with only 8% labeled points.
arXiv Detail & Related papers (2023-11-27T07:57:29Z) - CV-Attention UNet: Attention-based UNet for 3D Cerebrovascular Segmentation of Enhanced TOF-MRA Images [2.2265536092123006]
We propose the 3D cerebrovascular attention UNet method, named CV-AttentionUNet, for precise extraction of brain vessel images.
To combine low- and high-level semantics, we apply an attention mechanism.
We believe that the novelty of this algorithm lies in its ability to perform well on both labeled and unlabeled data.
arXiv Detail & Related papers (2023-11-16T22:31:05Z) - Scribble-supervised Cell Segmentation Using Multiscale Contrastive
Regularization [9.849498498869258]
Scribble2Label (S2L) demonstrated that using only a handful of scribbles with self-supervised learning can generate accurate segmentation results without full annotation.
In this work, we employ a novel multiscale contrastive regularization term for S2L.
The main idea is to extract features from intermediate layers of the neural network for contrastive loss so that structures at various scales can be effectively separated.
arXiv Detail & Related papers (2023-06-25T06:00:33Z) - Localized Region Contrast for Enhancing Self-Supervised Learning in
Medical Image Segmentation [27.82940072548603]
We propose a novel contrastive learning framework that integrates Localized Region Contrast (LRC) to enhance existing self-supervised pre-training methods for medical image segmentation.
Our approach identifies super-pixels using Felzenszwalb's algorithm and performs local contrastive learning with a novel contrastive sampling loss.
arXiv Detail & Related papers (2023-04-06T22:43:13Z) - IDEAL: Improved DEnse locAL Contrastive Learning for Semi-Supervised
Medical Image Segmentation [3.6748639131154315]
We extend the concept of metric learning to the segmentation task.
We propose a simple convolutional projection head for obtaining dense pixel-level features.
A bidirectional regularization mechanism involving two-stream regularization training is devised for the downstream task.
arXiv Detail & Related papers (2022-10-26T23:11:02Z) - Weakly Supervised Semantic Segmentation via Alternative Self-Dual
Teaching [82.71578668091914]
This paper establishes a compact learning framework that embeds the classification and mask-refinement components into a unified deep model.
We propose a novel alternative self-dual teaching (ASDT) mechanism to encourage high-quality knowledge interaction.
arXiv Detail & Related papers (2021-12-17T11:56:56Z) - Real-time landmark detection for precise endoscopic submucosal
dissection via shape-aware relation network [51.44506007844284]
We propose a shape-aware relation network for accurate and real-time landmark detection in endoscopic submucosal dissection surgery.
We first devise an algorithm to automatically generate relation keypoint heatmaps, which intuitively represent the prior knowledge of spatial relations among landmarks.
We then develop two complementary regularization schemes to progressively incorporate the prior knowledge into the training process.
arXiv Detail & Related papers (2021-11-08T07:57:30Z) - Unsupervised Scale-consistent Depth Learning from Video [131.3074342883371]
We propose a monocular depth estimator SC-Depth, which requires only unlabelled videos for training.
Thanks to the capability of scale-consistent prediction, we show that our monocular-trained deep networks are readily integrated into the ORB-SLAM2 system.
The proposed hybrid Pseudo-RGBD SLAM shows compelling results in KITTI, and it generalizes well to the KAIST dataset without additional training.
arXiv Detail & Related papers (2021-05-25T02:17:56Z) - Deep Semi-supervised Knowledge Distillation for Overlapping Cervical
Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z) - Semantics-Driven Unsupervised Learning for Monocular Depth and
Ego-Motion Estimation [33.83396613039467]
We propose a semantics-driven unsupervised learning approach for monocular depth and ego-motion estimation from videos.
Recent unsupervised learning methods employ photometric errors between synthetic view and actual image as a supervision signal for training.
arXiv Detail & Related papers (2020-06-08T05:55:07Z)