FS-Net: Full Scale Network and Adaptive Threshold for Improving Extraction of Micro-Retinal Vessel Structures
- URL: http://arxiv.org/abs/2311.08059v4
- Date: Fri, 03 Jan 2025 16:30:43 GMT
- Title: FS-Net: Full Scale Network and Adaptive Threshold for Improving Extraction of Micro-Retinal Vessel Structures
- Authors: Melaku N. Getahun, Oleg Y. Rogov, Dmitry V. Dylov, Andrey Somov, Ahmed Bouridane, Rifat Hamoudi
- Abstract summary: Segmenting retinal vessels presents unique challenges.
Recent neural network approaches struggle to balance local and global properties.
We propose a comprehensive micro-vessel extraction mechanism based on an encoder-decoder neural network architecture.
- Score: 4.507779218329283
- Abstract: Retinal vascular segmentation, a widely researched topic in biomedical image processing, aims to reduce the workload of ophthalmologists in treating and detecting retinal disorders. Segmenting retinal vessels presents unique challenges; previous techniques often failed to effectively segment branches and microvascular structures. Recent neural network approaches struggle to balance local and global properties and frequently miss tiny end vessels, hindering the achievement of desired results. To address these issues in retinal vessel segmentation, we propose a comprehensive micro-vessel extraction mechanism based on an encoder-decoder neural network architecture. This network includes residual, encoder booster, bottleneck enhancement, squeeze, and excitation building blocks. These components synergistically enhance feature extraction and improve the prediction accuracy of the segmentation map. Our solution has been evaluated using the DRIVE, CHASE-DB1, and STARE datasets, yielding competitive results compared to previous studies. The AUC and accuracy on the DRIVE dataset are 0.9884 and 0.9702, respectively. For the CHASE-DB1 dataset, these scores are 0.9903 and 0.9755, respectively, and for the STARE dataset, they are 0.9916 and 0.9750. Given its accurate and robust performance, the proposed approach is a solid candidate for being implemented in real-life diagnostic centers and aiding ophthalmologists.
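The abstract names squeeze-and-excitation among the network's building blocks. As a rough illustration of that component (not the paper's actual implementation; weights, shapes, and the reduction ratio `r` here are arbitrary), a squeeze-and-excitation block pools each channel to a scalar, passes the channel vector through a small two-layer bottleneck, and rescales the feature map with the resulting per-channel weights:

```python
import numpy as np

def squeeze_excite(feature_map, w1, b1, w2, b2):
    """Squeeze-and-excitation over a (C, H, W) feature map.

    Squeeze: global average pooling per channel.
    Excite: two small dense layers (ReLU then sigmoid) produce
    per-channel weights in (0, 1) that rescale the input features.
    """
    squeezed = feature_map.mean(axis=(1, 2))             # (C,)
    hidden = np.maximum(0.0, w1 @ squeezed + b1)         # ReLU, (C // r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden + b2)))  # sigmoid, (C,)
    return feature_map * weights[:, None, None]

# Toy example with random weights; C=8 channels, reduction ratio r=2.
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)); b1 = np.zeros(C // r)
w2 = rng.standard_normal((C, C // r)); b2 = np.zeros(C)
y = squeeze_excite(x, w1, b1, w2, b2)
print(y.shape)  # (8, 4, 4)
```

Because the sigmoid gates lie in (0, 1), the block can only attenuate channels, which lets the network emphasize vessel-relevant channels relative to the rest.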
Related papers
- SMILE-UHURA Challenge -- Small Vessel Segmentation at Mesoscopic Scale from Ultra-High Resolution 7T Magnetic Resonance Angiograms [60.35639972035727]
The lack of publicly available annotated datasets has impeded the development of robust, machine learning-driven segmentation algorithms.
The SMILE-UHURA challenge addresses the gap in publicly available annotated datasets by providing an annotated dataset of Time-of-Flight angiography acquired with 7T MRI.
Dice scores reached up to 0.838 ± 0.066 and 0.716 ± 0.125 on the respective datasets, with an average performance of up to 0.804 ± 0.15.
arXiv Detail & Related papers (2024-11-14T17:06:00Z) - MDFI-Net: Multiscale Differential Feature Interaction Network for Accurate Retinal Vessel Segmentation [3.152646316470194]
This paper proposes a feature-enhanced interaction network based on DPCN, named MDFI-Net.
The proposed MDFI-Net achieves segmentation performance superior to state-of-the-art methods on public datasets.
arXiv Detail & Related papers (2024-10-20T16:42:22Z) - Region Guided Attention Network for Retinal Vessel Segmentation [19.587662416331682]
We present a lightweight retinal vessel segmentation network based on the encoder-decoder mechanism with region-guided attention.
Dice loss penalises false positives and false negatives equally, encouraging the model to generate more accurate segmentation.
Experiments on a benchmark dataset show better performance (recall 0.8285, precision 0.8098, accuracy 0.9677, and F1 score 0.8166) compared to state-of-the-art methods.
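The claim that Dice loss penalises false positives and false negatives equally follows from the counts form Dice = 2·TP / (2·TP + FP + FN), where FP and FN enter the denominator symmetrically. A minimal sketch (function names and the sample counts are illustrative, not from the paper):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss on probability maps: 1 - 2|P·T| / (|P| + |T|)."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def dice_from_counts(tp, fp, fn):
    """Dice = 2*TP / (2*TP + FP + FN); FP and FN appear symmetrically."""
    return 2.0 * tp / (2.0 * tp + fp + fn)

# Swapping the FP and FN counts leaves the score unchanged.
print(np.isclose(dice_from_counts(50, 3, 7), dice_from_counts(50, 7, 3)))  # True

# A perfect prediction gives (near-)zero loss.
t = np.array([1.0, 1.0, 0.0, 0.0])
print(soft_dice_loss(t, t) < 1e-6)  # True
```

This symmetry is why Dice-based training is popular for thin-structure segmentation, where plain cross-entropy tends to under-weight the rare vessel pixels.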
arXiv Detail & Related papers (2024-07-22T00:08:18Z) - GAEI-UNet: Global Attention and Elastic Interaction U-Net for Vessel Image Segmentation [0.0]
Vessel image segmentation plays a pivotal role in medical diagnostics, aiding in the early detection and treatment of vascular diseases.
We propose GAEI-UNet, a novel model that combines global attention and elastic interaction-based techniques.
By capturing the forces generated by misalignment between target and predicted shapes, our model effectively learns to preserve the correct topology of vessel networks.
arXiv Detail & Related papers (2023-08-16T13:10:32Z) - MAF-Net: Multiple attention-guided fusion network for fundus vascular image segmentation [1.3295074739915493]
We propose a multiple attention-guided fusion network (MAF-Net) to accurately detect blood vessels in retinal fundus images.
Traditional UNet-based models may lose partial information because they cannot explicitly model long-distance dependencies.
We show that our method produces satisfactory results compared to some state-of-the-art methods.
arXiv Detail & Related papers (2023-05-05T15:22:20Z) - Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z) - Fuzzy Attention Neural Network to Tackle Discontinuity in Airway Segmentation [67.19443246236048]
Airway segmentation is crucial for the examination, diagnosis, and prognosis of lung diseases.
Some small-sized airway branches (e.g., bronchi and terminal bronchioles) significantly aggravate the difficulty of automatic segmentation.
This paper presents an efficient method for airway segmentation, comprising a novel fuzzy attention neural network and a comprehensive loss function.
arXiv Detail & Related papers (2022-09-05T16:38:13Z) - Transfer Learning Through Weighted Loss Function and Group Normalization for Vessel Segmentation from Retinal Images [0.0]
The vascular structure of blood vessels is important in diagnosing retinal conditions such as glaucoma and diabetic retinopathy.
We propose an approach for segmenting retinal vessels that uses deep learning along with transfer learning.
Our approach results in greater segmentation accuracy than other approaches.
arXiv Detail & Related papers (2020-12-16T20:34:48Z) - An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
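This entry builds on per-voxel uncertainty from the convolutional network's output. One common way to quantify that (a sketch of the general idea, not necessarily the paper's exact measure) is the predictive entropy of the softmax probabilities; high-entropy pixels are the ones such a refinement scheme could treat as unreliable and re-label via the graph:

```python
import numpy as np

def pixel_entropy(probs, eps=1e-12):
    """Per-pixel predictive entropy from class probabilities.

    probs: (K, H, W) softmax probabilities per pixel. Entropy is
    maximal for a uniform distribution and zero for a one-hot one,
    so it ranks pixels by how unsure the network is.
    """
    return -(probs * np.log(probs + eps)).sum(axis=0)

# Two pixels, two classes: one ambiguous (50/50), one confident (99/1).
probs = np.array([[[0.5, 0.99]],
                  [[0.5, 0.01]]])   # shape (K=2, H=1, W=2)
h = pixel_entropy(probs)
print(h[0, 0] > h[0, 1])  # True: the 50/50 pixel is more uncertain
```

Thresholding such an entropy map splits voxels into "trusted" (used as labeled nodes) and "uncertain" (to be re-estimated), which is the essence of the semi-supervised graph formulation described above.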
arXiv Detail & Related papers (2020-12-06T18:55:07Z) - Multi-Task Neural Networks with Spatial Activation for Retinal Vessel Segmentation and Artery/Vein Classification [49.64863177155927]
We propose a multi-task deep neural network with spatial activation mechanism to segment full retinal vessel, artery and vein simultaneously.
The proposed network achieves pixel-wise accuracy of 95.70% for vessel segmentation, and A/V classification accuracy of 94.50%, which is the state-of-the-art performance for both tasks.
arXiv Detail & Related papers (2020-07-18T05:46:47Z) - Collaborative Boundary-aware Context Encoding Networks for Error Map Prediction [65.44752447868626]
We propose collaborative boundary-aware context encoding networks, called AEP-Net, for the error prediction task.
Specifically, we propose a collaborative feature transformation branch for better feature fusion between images and masks, and precise localization of error regions.
The AEP-Net achieves average DSCs of 0.8358 and 0.8164 on the error prediction task, and shows a high Pearson correlation coefficient of 0.9873.
arXiv Detail & Related papers (2020-06-25T12:42:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.