Medical Image Segmentation Review: The success of U-Net
- URL: http://arxiv.org/abs/2211.14830v1
- Date: Sun, 27 Nov 2022 13:52:33 GMT
- Title: Medical Image Segmentation Review: The success of U-Net
- Authors: Reza Azad, Ehsan Khodapanah Aghdam, Amelie Rauland, Yiwei Jia, Atlas
Haddadi Avval, Afshin Bozorgpour, Sanaz Karimijafarbigloo, Joseph Paul Cohen,
Ehsan Adeli, Dorit Merhof
- Abstract summary: U-Net is the most widespread image segmentation architecture due to its flexibility, optimized modular design, and success in all medical image modalities.
Several extensions of this network have been proposed to address the scale and complexity created by medical tasks.
We discuss the practical aspects of the U-Net model and suggest a taxonomy to categorize each network variant.
- Score: 12.599426601722316
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic medical image segmentation is a crucial topic in the medical
domain and, consequently, a critical component of the computer-aided diagnosis
paradigm. U-Net is the most widespread image segmentation architecture due to
its flexibility, optimized modular design, and success across all medical imaging
modalities. Over the years, the U-Net model has attracted tremendous attention from
academic and industrial researchers. Several extensions of this network have
been proposed to address the scale and complexity created by medical tasks.
Understanding the deficiencies of the naive U-Net model is the first step for
vendors seeking the proper U-Net variant for their application. Having a
compendium of the different variants in one place makes it easier for builders
to identify the relevant research, and it helps ML researchers understand the
challenges that biological tasks pose for the model. To
address this, we discuss the practical aspects of the U-Net model and suggest a
taxonomy to categorize each network variant. Moreover, to measure the
performance of these strategies in a clinical application, we propose fair
evaluations of some unique and famous designs on well-known datasets. We
provide a comprehensive implementation library with trained models for future
research. In addition, for ease of future studies, we created an online list of
U-Net papers with their possible official implementation. All information is
gathered in https://github.com/NITR098/Awesome-U-Net repository.
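For orientation, the sketch below illustrates the encoder-decoder-with-skip-connections pattern that defines U-Net and that the surveyed variants build on. It is a minimal PyTorch illustration with made-up layer widths, not the configuration of the original network or of any particular variant.

    # Minimal U-Net-style encoder-decoder with skip connections (illustrative only).
    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out):
        # Two 3x3 convolutions per stage, as in the original U-Net design.
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        def __init__(self, in_ch=1, n_classes=2):
            super().__init__()
            self.enc1 = conv_block(in_ch, 32)
            self.enc2 = conv_block(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.bottleneck = conv_block(64, 128)
            self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
            self.dec2 = conv_block(128, 64)   # 64 upsampled + 64 skip channels
            self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec1 = conv_block(64, 32)    # 32 upsampled + 32 skip channels
            self.head = nn.Conv2d(32, n_classes, 1)

        def forward(self, x):
            e1 = self.enc1(x)                   # full resolution
            e2 = self.enc2(self.pool(e1))       # 1/2 resolution
            b = self.bottleneck(self.pool(e2))  # 1/4 resolution
            d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
            return self.head(d1)                # per-pixel class logits

    logits = TinyUNet()(torch.randn(1, 1, 128, 128))  # -> (1, 2, 128, 128)

The torch.cat calls are the skip connections: they reinject full-resolution encoder features into the decoder, a frequent target of the variant designs the review categorizes.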
Related papers
- Biomedical Foundation Model: A Survey [84.26268124754792]
Foundation models are large-scale pre-trained models that learn from extensive unlabeled datasets.
These models can be adapted to various applications such as question answering and visual understanding.
This survey explores the potential of foundation models across diverse domains within biomedical fields.
arXiv Detail & Related papers (2025-03-03T22:42:00Z)
- A Comprehensive Review of U-Net and Its Variants: Advances and Applications in Medical Image Segmentation [0.0]
This paper classifies medical image datasets by imaging modality and examines U-Net and its improved variants.
We summarize the four central improvement mechanisms of the U-Net and U-Net variant algorithms.
We propose potential avenues and strategies for future advancements.
arXiv Detail & Related papers (2025-02-09T13:11:51Z)
- Efficient MedSAMs: Segment Anything in Medical Images on Laptop [69.28565867103542]
We organized the first international competition dedicated to promptable medical image segmentation.
The top teams developed lightweight segmentation foundation models and implemented an efficient inference pipeline.
The best-performing algorithms have been incorporated into the open-source software with a user-friendly interface to facilitate clinical adoption.
arXiv Detail & Related papers (2024-12-20T17:33:35Z)
- Computation-Efficient Era: A Comprehensive Survey of State Space Models in Medical Image Analysis [8.115549269867403]
State Space Models (SSMs) have garnered immense interest lately in sequential modeling and visual representation learning.
Building on these advances in computer vision, Mamba models have ushered in a new era for medical imaging.
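As a reference point for what these models compute, here is a minimal sketch of the linear state-space recurrence underlying SSMs, assuming NumPy; the input-dependent (selective) parameters and hardware-aware parallel scan that distinguish Mamba are omitted, and all shapes are illustrative.

    # Linear state-space recurrence: x_t = A x_{t-1} + B u_t, y_t = C x_t.
    import numpy as np

    def ssm_scan(A, B, C, u):
        # A: (N, N), B: (N,), C: (N,); u: (T,) one input channel over T steps.
        x = np.zeros(A.shape[0])
        y = np.empty_like(u)
        for t in range(len(u)):
            x = A @ x + B * u[t]  # state update
            y[t] = C @ x          # readout
        return y

    rng = np.random.default_rng(0)
    N = 8                            # state size (illustrative)
    A = 0.9 * np.eye(N)              # stable, diagonal-style dynamics
    y = ssm_scan(A, rng.normal(size=N), rng.normal(size=N), rng.normal(size=256))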
arXiv Detail & Related papers (2024-06-05T16:29:03Z)
- Medical Vision-Language Pre-Training for Brain Abnormalities [96.1408455065347]
We show how to automatically collect medical image-text aligned data for pretraining from public resources such as PubMed.
In particular, we present a pipeline that streamlines the pre-training process by initially collecting a large brain image-text dataset.
We also investigate the unique challenge of mapping subfigures to subcaptions in the medical domain.
arXiv Detail & Related papers (2024-04-27T05:03:42Z)
- Towards a clinically accessible radiology foundation model: open-access and lightweight, with automated evaluation [113.5002649181103]
We train open-source small multimodal models (SMMs) to bridge competency gaps for unmet clinical needs in radiology.
For training, we assemble a large dataset of over 697 thousand radiology image-text pairs.
For evaluation, we propose CheXprompt, a GPT-4-based metric for factuality evaluation, and demonstrate its parity with expert evaluation.
The inference of LLaVA-Rad is fast and can be performed on a single V100 GPU in private settings, offering a promising state-of-the-art tool for real-world clinical applications.
arXiv Detail & Related papers (2024-03-12T18:12:02Z)
- OpenMEDLab: An Open-source Platform for Multi-modality Foundation Models in Medicine [55.29668193415034]
We present OpenMEDLab, an open-source platform for multi-modality foundation models.
It encapsulates solutions of pioneering attempts in prompting and fine-tuning large language and vision models for frontline clinical and bioinformatic applications.
It opens access to a group of pre-trained foundation models for various medical image modalities, clinical text, protein engineering, etc.
arXiv Detail & Related papers (2024-02-28T03:51:02Z)
- VISION-MAE: A Foundation Model for Medical Image Segmentation and Classification [36.8105960525233]
We present a novel foundation model, VISION-MAE, specifically designed for medical imaging.
VISION-MAE is trained on a dataset of 2.5 million unlabeled images from various modalities.
It is then adapted to classification and segmentation tasks using explicit labels.
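The name suggests masked-autoencoder pretraining; as a rough sketch of that generic recipe (not VISION-MAE itself), one hides a random subset of image patches and trains the network to reconstruct them, so the 2.5 million images need no labels. The patch size, dimensions, and tiny MLP below are placeholders.

    # Generic masked-autoencoder pretraining step (illustrative sketch only).
    import torch
    import torch.nn as nn

    patch, dim, mask_ratio = 16, 256, 0.75
    to_patches = nn.Unfold(kernel_size=patch, stride=patch)  # image -> flat patches
    encoder = nn.Sequential(nn.Linear(patch * patch, dim), nn.ReLU(), nn.Linear(dim, dim))
    decoder = nn.Linear(dim, patch * patch)

    img = torch.randn(8, 1, 64, 64)                 # e.g. unlabeled grayscale scans
    patches = to_patches(img).transpose(1, 2)       # (B, 16 patches, 256 pixels)
    keep = torch.rand(patches.shape[:2]) > mask_ratio
    inp = patches * keep.unsqueeze(-1)              # zero out masked patches (a real
                                                    # MAE drops them from the encoder)
    recon = decoder(encoder(inp))
    loss = (recon - patches)[~keep].pow(2).mean()   # reconstruct masked patches only
    loss.backward()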
arXiv Detail & Related papers (2024-02-01T21:45:12Z)
- Self-Prompting Large Vision Models for Few-Shot Medical Image Segmentation [14.135249795318591]
We propose a novel perspective on self-prompting in medical vision applications.
We harness the embedding space of the Segment Anything Model to prompt itself through a simple yet effective linear pixel-wise classifier.
We achieve competitive results on multiple datasets.
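A minimal sketch of the idea as summarized: fit a linear per-pixel classifier (a 1x1 convolution) on a frozen encoder's embedding map, then use its coarse prediction to prompt the model. SAM's actual API is not shown; the random 'embeddings' tensor stands in for SAM's frozen image-encoder output.

    # Linear pixel-wise classifier over frozen image embeddings (sketch).
    import torch
    import torch.nn as nn

    B, C, H, W = 2, 256, 64, 64
    embeddings = torch.randn(B, C, H, W)     # placeholder for frozen SAM features
    labels = torch.randint(0, 2, (B, H, W))  # few-shot pixel annotations

    clf = nn.Conv2d(C, 2, kernel_size=1)     # 1x1 conv == linear per-pixel classifier
    opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
    for _ in range(100):                     # only this tiny classifier is trained
        loss = nn.functional.cross_entropy(clf(embeddings), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

    coarse_mask = clf(embeddings).argmax(1)  # could seed point/box prompts for SAM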
arXiv Detail & Related papers (2023-08-15T08:20:07Z)
- Ladder Fine-tuning approach for SAM integrating complementary network [5.46706034286531]
In medical imaging, the lack of training samples due to privacy concerns and other factors presents a major challenge for applying such generalist models to medical image segmentation tasks.
In this study, we propose to combine a complementary Convolutional Neural Network (CNN) along with the standard SAM network for medical image segmentation.
This strategy significantly reduces training time and achieves competitive results on publicly available datasets.
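One schematic reading of "a complementary CNN alongside the standard SAM": keep the foundation model frozen and fuse its features with a small trainable CNN branch before a segmentation head, so only the lightweight parts are trained. The module below is a stand-in under that assumption, not the paper's architecture.

    # Frozen encoder + small trainable CNN branch, fused before a seg head (sketch).
    import torch
    import torch.nn as nn

    class FusedSegmenter(nn.Module):
        def __init__(self, frozen_encoder, feat_ch=256, n_classes=2):
            super().__init__()
            self.frozen = frozen_encoder.eval()
            for p in self.frozen.parameters():
                p.requires_grad_(False)       # foundation-model weights stay fixed
            self.cnn = nn.Sequential(         # lightweight trainable branch
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, feat_ch, 3, padding=1), nn.ReLU(),
            )
            self.head = nn.Conv2d(2 * feat_ch, n_classes, 1)

        def forward(self, x):
            with torch.no_grad():
                f = self.frozen(x)            # assumed (B, feat_ch, H, W) features
            g = self.cnn(x)
            return self.head(torch.cat([f, g], dim=1))

    frozen = nn.Conv2d(3, 256, 3, padding=1)  # stand-in for SAM's image encoder
    out = FusedSegmenter(frozen)(torch.randn(1, 3, 64, 64))  # -> (1, 2, 64, 64)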
arXiv Detail & Related papers (2023-06-22T08:36:17Z)
- Artificial General Intelligence for Medical Imaging Analysis [92.3940918983821]
Large-scale Artificial General Intelligence (AGI) models have achieved unprecedented success in a variety of general domain tasks.
These models face notable challenges arising from the medical field's inherent complexities and unique characteristics.
This review aims to offer insights into the future implications of AGI in medical imaging, healthcare, and beyond.
arXiv Detail & Related papers (2023-06-08T18:04:13Z)
- UniverSeg: Universal Medical Image Segmentation [16.19510845046103]
We present UniverSeg, a method for solving unseen medical segmentation tasks without additional training.
We have gathered and standardized a collection of 53 open-access medical segmentation datasets with over 22,000 scans.
We demonstrate that UniverSeg substantially outperforms several related methods on unseen tasks.
arXiv Detail & Related papers (2023-04-12T19:36:46Z)
- Understanding the Tricks of Deep Learning in Medical Image Segmentation: Challenges and Future Directions [66.40971096248946]
In this paper, we collect a series of MedISeg tricks for different model implementation phases.
We experimentally explore the effectiveness of these tricks on consistent baselines.
We also open-sourced a strong MedISeg repository in which every component is plug-and-play.
arXiv Detail & Related papers (2022-09-21T12:30:05Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of features from the same class.
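A minimal sketch of the embedding-similarity paradigm this summary points to: compute class prototypes from an annotated support image's features and label each query pixel by cosine similarity to the nearest prototype. Shapes and features are illustrative, not the paper's network.

    # Prototype-based few-shot segmentation step (illustrative sketch).
    import torch
    import torch.nn.functional as F

    C, H, W = 64, 32, 32
    support_feat = torch.randn(C, H, W)         # embedded support image
    support_mask = torch.randint(0, 2, (H, W))  # its binary annotation
    query_feat = torch.randn(C, H, W)           # embedded query image

    protos = torch.stack([                      # masked average pooling per class
        support_feat[:, support_mask == k].mean(dim=1) for k in (0, 1)
    ])                                          # (2, C)

    q = F.normalize(query_feat.reshape(C, -1), dim=0)  # (C, H*W) unit pixel vectors
    p = F.normalize(protos, dim=1)                     # (2, C) unit prototypes
    pred = (p @ q).argmax(0).reshape(H, W)             # cosine-similarity label map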
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.