FMG-Net and W-Net: Multigrid Inspired Deep Learning Architectures For
Medical Imaging Segmentation
- URL: http://arxiv.org/abs/2304.02725v3
- Date: Fri, 10 Nov 2023 21:13:09 GMT
- Title: FMG-Net and W-Net: Multigrid Inspired Deep Learning Architectures For
Medical Imaging Segmentation
- Authors: Adrian Celaya, Beatrice Riviere, David Fuentes
- Abstract summary: We propose two architectures that incorporate the principles of geometric multigrid methods for solving linear systems of equations into CNNs.
We show that both FMG-Net and W-Net outperform the widely used U-Net architecture in terms of tumor sub-component segmentation accuracy and training efficiency.
These findings highlight the potential of incorporating the principles of multigrid methods into CNNs to improve the accuracy and efficiency of medical imaging segmentation.
- Score: 1.3812010983144802
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate medical imaging segmentation is critical for precise and effective
medical interventions. However, despite the success of convolutional neural
networks (CNNs) in medical image segmentation, they still face challenges in
handling fine-scale features and variations in image scales. These challenges
are particularly evident in complex and challenging segmentation tasks, such as
the BraTS multi-label brain tumor segmentation challenge. In this task,
accurately segmenting the various tumor sub-components, which vary
significantly in size and shape, remains a significant challenge, with even
state-of-the-art methods producing substantial errors. Therefore, we propose
two architectures, FMG-Net and W-Net, that incorporate the principles of
geometric multigrid methods for solving linear systems of equations into CNNs
to address these challenges. Our experiments on the BraTS 2020 dataset
demonstrate that both FMG-Net and W-Net outperform the widely used U-Net
architecture regarding tumor subcomponent segmentation accuracy and training
efficiency. These findings highlight the potential of incorporating the
principles of multigrid methods into CNNs to improve the accuracy and
efficiency of medical imaging segmentation.
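As a concrete illustration of the abstract's central idea, below is a minimal PyTorch sketch of a W-cycle-style convolutional block: features are repeatedly restricted (downsampled) to coarser grids and prolonged (upsampled) back, with two coarse-grid visits per level as in a classical multigrid W-cycle. This is not the authors' released FMG-Net/W-Net implementation; the `WCycle`/`WNetSketch` names, 2D convolutions, layer widths, and normalization choices are illustrative assumptions.

```python
# Hedged sketch of a W-cycle-inspired CNN; NOT the authors' FMG-Net/W-Net code.
import torch
import torch.nn as nn


def smoother(channels: int) -> nn.Sequential:
    """Two 3x3 convolutions; plays the role of the multigrid 'smoother' at one grid level."""
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        nn.InstanceNorm2d(channels),
        nn.ReLU(inplace=True),
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        nn.InstanceNorm2d(channels),
        nn.ReLU(inplace=True),
    )


class WCycle(nn.Module):
    """Recursive W-cycle block: each level visits the next-coarser level twice."""

    def __init__(self, channels: int, depth: int):
        super().__init__()
        self.depth = depth
        self.pre_smooth = smoother(channels)
        self.post_smooth = smoother(channels)
        if depth > 0:
            # Restriction (strided conv) and prolongation (transposed conv) operators.
            self.restrict = nn.Conv2d(channels, 2 * channels, kernel_size=2, stride=2)
            self.prolong = nn.ConvTranspose2d(2 * channels, channels, kernel_size=2, stride=2)
            # Two coarse-grid visits per level produce the characteristic "W" pattern.
            self.coarse1 = WCycle(2 * channels, depth - 1)
            self.coarse2 = WCycle(2 * channels, depth - 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pre_smooth(x)
        if self.depth > 0:
            coarse = self.coarse2(self.coarse1(self.restrict(x)))
            x = x + self.prolong(coarse)  # coarse-grid correction added back to the fine grid
        return self.post_smooth(x)


class WNetSketch(nn.Module):
    """Toy segmentation model: lift to feature space, run one W-cycle, predict class logits."""

    def __init__(self, in_channels: int = 4, num_classes: int = 4, width: int = 16, depth: int = 3):
        super().__init__()
        self.lift = nn.Conv2d(in_channels, width, kernel_size=3, padding=1)
        self.cycle = WCycle(width, depth)
        self.head = nn.Conv2d(width, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.cycle(self.lift(x)))


if __name__ == "__main__":
    # BraTS-style input: 4 MRI modalities; spatial size must be divisible by 2**depth.
    logits = WNetSketch()(torch.randn(1, 4, 64, 64))
    print(logits.shape)  # torch.Size([1, 4, 64, 64])
```

A full-multigrid (FMG) variant would instead start on the coarsest grid and progressively prolong to finer grids, running a cycle at each level; the sketch above covers only the W-cycle scheduling.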
Related papers
- TransResNet: Integrating the Strengths of ViTs and CNNs for High Resolution Medical Image Segmentation via Feature Grafting [6.987177704136503]
High-resolution images are preferable in the medical imaging domain as they significantly improve the diagnostic capability of the underlying method.
Most of the existing deep learning-based techniques for medical image segmentation are optimized for input images having small spatial dimensions and perform poorly on high-resolution images.
We propose a parallel-in-branch architecture called TransResNet, which incorporates Transformer and CNN in a parallel manner to extract features from multi-resolution images independently.
arXiv Detail & Related papers (2024-10-01T18:22:34Z) - Modality-agnostic Domain Generalizable Medical Image Segmentation by Multi-Frequency in Multi-Scale Attention [1.1155836879100416]
We propose a Modality-agnostic Domain Generalizable Network (MADGNet) for medical image segmentation.
The MFMSA block refines the process of spatial feature extraction, particularly in capturing boundary features.
E-SDM mitigates information loss in multi-task learning with deep supervision.
arXiv Detail & Related papers (2024-05-10T07:34:36Z) - QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z) - BRAU-Net++: U-Shaped Hybrid CNN-Transformer Network for Medical Image Segmentation [11.986549780782724]
We propose a hybrid yet effective CNN-Transformer network, named BRAU-Net++, for accurate medical image segmentation.
Specifically, BRAU-Net++ uses bi-level routing attention as the core building block to design our u-shaped encoder-decoder structure.
Our proposed approach surpasses other state-of-the-art methods including its baseline: BRAU-Net.
arXiv Detail & Related papers (2024-01-01T10:49:09Z) - Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z) - Scale-aware Super-resolution Network with Dual Affinity Learning for
Lesion Segmentation from Medical Images [50.76668288066681]
We present a scale-aware super-resolution network to adaptively segment lesions of various sizes from low-resolution medical images.
Our proposed network achieved consistent improvements compared to other state-of-the-art methods.
arXiv Detail & Related papers (2023-05-30T14:25:55Z) - Learning from partially labeled data for multi-organ and tumor
segmentation [102.55303521877933]
We propose a Transformer-based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z) - HistoSeg : Quick attention with multi-loss function for multi-structure
segmentation in digital histology images [0.696194614504832]
Medical image segmentation assists in computer-aided diagnosis, surgeries, and treatment.
We propose a generalization-capable Encoder-Decoder Network with a Quick Attention Module and a Multi Loss Function.
We evaluate the capability of our proposed network on two publicly available datasets for medical image segmentation MoNuSeg and GlaS.
arXiv Detail & Related papers (2022-09-01T21:10:00Z) - InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal
Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual-domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge these prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z) - DONet: Dual Objective Networks for Skin Lesion Segmentation [77.9806410198298]
We propose a simple yet effective framework, named Dual Objective Networks (DONet), to improve the skin lesion segmentation.
Our DONet adopts two symmetric decoders to produce different predictions for different objectives; a minimal sketch of this dual-decoder pattern appears after this list.
To address the challenge of the large variety of lesion scales and shapes in dermoscopic images, we additionally propose a recurrent context encoding module (RCEM).
arXiv Detail & Related papers (2020-08-19T06:02:46Z)
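As referenced in the DONet entry above, here is a minimal sketch of the shared-encoder, dual-decoder pattern it describes: one encoder feeds two symmetric decoders trained toward different objectives. The tiny encoder/decoder layout, channel widths, and the specific choice of objectives (a mask head and a boundary head) are illustrative assumptions, not DONet's published design.

```python
# Hedged sketch of a "two symmetric decoders, two objectives" pattern; not DONet's code.
import torch
import torch.nn as nn


class TinyEncoder(nn.Module):
    """Shared feature extractor producing fine- and half-resolution feature maps."""

    def __init__(self, in_ch: int = 3, width: int = 16):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True))
        self.stage2 = nn.Sequential(nn.Conv2d(width, 2 * width, 3, stride=2, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        f1 = self.stage1(x)   # full-resolution features
        f2 = self.stage2(f1)  # half-resolution features
        return f1, f2


class TinyDecoder(nn.Module):
    """One of the two symmetric decoders; both share this structure."""

    def __init__(self, width: int = 16, out_ch: int = 1):
        super().__init__()
        self.up = nn.ConvTranspose2d(2 * width, width, 2, stride=2)
        self.head = nn.Conv2d(2 * width, out_ch, 1)

    def forward(self, f1, f2):
        x = torch.cat([self.up(f2), f1], dim=1)  # fuse upsampled coarse + fine features
        return self.head(x)


class DualObjectiveNet(nn.Module):
    """Shared encoder feeding two symmetric decoders trained with different objectives."""

    def __init__(self):
        super().__init__()
        self.encoder = TinyEncoder()
        self.mask_decoder = TinyDecoder(out_ch=1)      # e.g. lesion-mask objective (assumption)
        self.boundary_decoder = TinyDecoder(out_ch=1)  # e.g. boundary-map objective (assumption)

    def forward(self, x):
        f1, f2 = self.encoder(x)
        return self.mask_decoder(f1, f2), self.boundary_decoder(f1, f2)


if __name__ == "__main__":
    mask_logits, boundary_logits = DualObjectiveNet()(torch.randn(1, 3, 64, 64))
    print(mask_logits.shape, boundary_logits.shape)  # both torch.Size([1, 1, 64, 64])
```

During training, each decoder would receive its own loss (for example, a region loss for the mask head and an edge-aware loss for the boundary head), so the shared encoder gets gradients from both objectives.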
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.