Scalable Segmentation for Ultra-High-Resolution Brain MR Images
- URL: http://arxiv.org/abs/2505.21697v1
- Date: Tue, 27 May 2025 19:34:55 GMT
- Title: Scalable Segmentation for Ultra-High-Resolution Brain MR Images
- Authors: Xiaoling Hu, Peirong Liu, Dina Zemlyanker, Jonathan Williams Ramirez, Oula Puonti, Juan Eugenio Iglesias
- Abstract summary: We propose a novel framework that leverages easily accessible, low-resolution coarse labels as spatial references and guidance. Our approach regresses per-class signed distance transform maps, enabling smooth, boundary-aware supervision. We validate our method through comprehensive experiments on both synthetic and real-world datasets.
- Score: 9.295998760042169
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Although deep learning has shown great success in 3D brain MRI segmentation, achieving accurate and efficient segmentation of ultra-high-resolution brain images remains challenging due to the lack of labeled training data for fine-scale anatomical structures and high computational demands. In this work, we propose a novel framework that leverages easily accessible, low-resolution coarse labels as spatial references and guidance, without incurring additional annotation cost. Instead of directly predicting discrete segmentation maps, our approach regresses per-class signed distance transform maps, enabling smooth, boundary-aware supervision. Furthermore, to enhance scalability, generalizability, and efficiency, we introduce a scalable class-conditional segmentation strategy, where the model learns to segment one class at a time conditioned on a class-specific input. This novel design not only reduces memory consumption during both training and testing, but also allows the model to generalize to unseen anatomical classes. We validate our method through comprehensive experiments on both synthetic and real-world datasets, demonstrating its superior performance and scalability compared to conventional segmentation approaches.
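The two core ideas read directly off the abstract: regress a signed distance transform (SDT) per class rather than a discrete label map, and condition the network on one class at a time. Below is a minimal sketch of both, assuming PyTorch and SciPy; the backbone is a stub, and the concatenation-based conditioning, clipping value, and L1 loss are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of SDT-target construction and class-conditional regression.
# Illustrative reconstruction only: backbone, conditioning, clip value, and
# loss are assumptions, not the authors' released implementation.
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import distance_transform_edt

def sdt_target(mask: np.ndarray, clip: float = 20.0) -> np.ndarray:
    """Signed distance to the boundary of a binary mask: positive outside
    the structure, negative inside, clipped so far-away voxels do not
    dominate the regression loss."""
    if not mask.any():                      # no foreground: no boundary
        return np.full(mask.shape, clip, dtype=np.float32)
    if mask.all():                          # no background: no boundary
        return np.full(mask.shape, -clip, dtype=np.float32)
    outside = distance_transform_edt(mask == 0)
    inside = distance_transform_edt(mask == 1)
    return np.clip(outside - inside, -clip, clip).astype(np.float32)

class ClassConditionalSDTNet(nn.Module):
    """One class per forward pass: the image is concatenated with a
    class-specific prior channel (e.g. the upsampled low-resolution coarse
    label of the queried class) and the network regresses that class's SDT,
    so peak memory is independent of the number of classes."""
    def __init__(self, width: int = 16):
        super().__init__()
        self.net = nn.Sequential(           # stand-in for a full 3D U-Net
            nn.Conv3d(2, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(width, 1, 3, padding=1),
        )

    def forward(self, image: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        # image, prior: (B, 1, D, H, W); channel concatenation is one
        # plausible conditioning mechanism among several.
        return self.net(torch.cat([image, prior], dim=1))

# One training step for a single anatomical class k.
model = ClassConditionalSDTNet()
image = torch.randn(1, 1, 32, 32, 32)
prior_k = torch.zeros(1, 1, 32, 32, 32)     # hypothetical coarse-label prior
mask_k = np.zeros((32, 32, 32), dtype=bool) # hypothetical ground-truth mask
target_k = torch.from_numpy(sdt_target(mask_k))[None, None]
loss = nn.L1Loss()(model(image, prior_k), target_k)
loss.backward()
```

At test time, one simple way to recover a discrete map is to query the model once per class and assign each voxel to the class with the most negative predicted SDT.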
Related papers
- Enhancing SAM with Efficient Prompting and Preference Optimization for Semi-supervised Medical Image Segmentation [30.524999223901645]
We propose an enhanced Segment Anything Model (SAM) framework that utilizes annotation-efficient prompts generated in a fully unsupervised fashion. We adopt the direct preference optimization technique to design an optimal policy that enables the model to generate high-fidelity segmentations. Our framework achieves state-of-the-art performance in tasks such as lung segmentation, breast tumor segmentation, and organ segmentation across various modalities, including X-ray, ultrasound, and abdominal CT, demonstrating its effectiveness in low-annotation data scenarios.
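The preference-optimization step named here follows the standard direct preference optimization (DPO) objective; the sketch below states that objective over per-sample segmentation log-likelihoods. How preferred and dispreferred segmentations are obtained is not described in this summary, so the inputs are placeholders.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta: float = 0.1):
    """Standard DPO objective: push the policy to assign higher likelihood
    to preferred segmentations (w) than to dispreferred ones (l), relative
    to a frozen reference model. All inputs are per-sample log-likelihoods."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()
```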
arXiv Detail & Related papers (2025-03-06T17:28:48Z)
- Self-adaptive vision-language model for 3D segmentation of pulmonary artery and vein [18.696258519327095]
This paper proposes a novel framework called Language-guided self-adaptive Cross-Attention Fusion Framework. Our method adopts pre-trained CLIP as a strong feature extractor to generate segmentations of 3D CT scans. We extensively validate our method on a local dataset, which is the largest pulmonary artery-vein CT dataset to date.
arXiv Detail & Related papers (2025-01-07T12:03:02Z)
- Optimizing against Infeasible Inclusions from Data for Semantic Segmentation through Morphology [58.17907376475596]
State-of-the-art semantic segmentation models are typically optimized in a data-driven fashion. InSeIn extracts explicit inclusion constraints that govern spatial class relations from the semantic segmentation training set at hand. It then enforces a morphological yet differentiable loss that penalizes violations of these constraints during training to promote prediction feasibility.
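As a rough illustration of a "morphological yet differentiable" inclusion penalty (my construction of the stated idea, not InSeIn's actual loss), max-pooling can stand in for dilation so the constraint remains trainable end to end:

```python
import torch
import torch.nn.functional as F

def inclusion_violation_loss(p_child: torch.Tensor,
                             p_parent: torch.Tensor,
                             kernel: int = 3) -> torch.Tensor:
    """Penalize probability mass of a 'child' class predicted outside a
    (dilated) 'parent' class that should contain it. Max-pooling over the
    parent probabilities acts as a differentiable morphological dilation."""
    # p_child, p_parent: (B, 1, H, W) softmax probabilities
    parent_dilated = F.max_pool2d(p_parent, kernel, stride=1, padding=kernel // 2)
    return (p_child * (1.0 - parent_dilated)).mean()
```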
arXiv Detail & Related papers (2024-08-26T22:39:08Z)
- SwIPE: Efficient and Robust Medical Image Segmentation with Implicit Patch Embeddings [12.79344668998054]
We propose SwIPE (Segmentation with Implicit Patch Embeddings) to enable accurate local boundary delineation and global shape coherence.
We show that SwIPE significantly improves over recent implicit approaches and outperforms state-of-the-art discrete methods with over 10x fewer parameters.
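An implicit, patch-conditioned decoder of the kind the title suggests can be sketched as a coordinate MLP that combines a local patch embedding (boundary detail) with a global embedding (shape coherence); all dimensions and the architecture below are assumptions, not SwIPE's specification.

```python
import torch
import torch.nn as nn

class ImplicitPatchDecoder(nn.Module):
    """Maps a continuous coordinate plus patch/global embeddings to an
    occupancy probability, so boundaries are queryable at any resolution."""
    def __init__(self, patch_dim: int = 64, global_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + patch_dim + global_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords, patch_emb, global_emb):
        # coords: (N, 2) in [-1, 1]; one embedding row per query point
        x = torch.cat([coords, patch_emb, global_emb], dim=-1)
        return torch.sigmoid(self.mlp(x))   # per-point foreground probability
```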
arXiv Detail & Related papers (2023-07-23T20:55:11Z)
- SegPrompt: Using Segmentation Map as a Better Prompt to Finetune Deep Models for Kidney Stone Classification [62.403510793388705]
Deep learning has produced encouraging results for kidney stone classification using endoscope images.
The shortage of annotated training data poses a severe problem in improving the performance and generalization ability of the trained model.
We propose SegPrompt to alleviate the data shortage problem by exploiting segmentation maps from two aspects.
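One straightforward reading of "segmentation map as a prompt", consistent with the summary though not necessarily the paper's exact design, is to feed the map to the classifier as an extra input channel during finetuning:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Hypothetical sketch: adapt a stock classifier so a (soft) segmentation map
# rides along as a fourth input channel beside the RGB endoscope image.
model = resnet18(num_classes=2)
model.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)

image = torch.randn(1, 3, 224, 224)        # endoscope frame
seg_map = torch.rand(1, 1, 224, 224)       # predicted stone mask, in [0, 1]
logits = model(torch.cat([image, seg_map], dim=1))
```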
arXiv Detail & Related papers (2023-03-15T01:30:48Z)
- SlimSeg: Slimmable Semantic Segmentation with Boundary Supervision [54.16430358203348]
We propose a simple but effective slimmable semantic segmentation (SlimSeg) method, which can be executed at different capacities during inference.
We show that our proposed SlimSeg with various mainstream networks can produce flexible models that provide dynamic adjustment of computational cost and better performance.
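Slimmable execution generally means a layer can be sliced to a chosen width at run time; a minimal sketch in the style of slimmable networks follows, with SlimSeg's boundary supervision and any switchable normalization omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableConv2d(nn.Conv2d):
    """Convolution whose output width can be reduced at inference by
    slicing its weight tensor, trading accuracy for compute on the fly."""
    def forward(self, x, width_mult: float = 1.0):
        out_ch = max(1, int(self.out_channels * width_mult))
        weight = self.weight[:out_ch]
        bias = self.bias[:out_ch] if self.bias is not None else None
        return F.conv2d(x, weight, bias, self.stride, self.padding)

conv = SlimmableConv2d(3, 64, 3, padding=1)
x = torch.randn(1, 3, 64, 64)
full = conv(x)                    # 64 output channels
slim = conv(x, width_mult=0.25)   # 16 output channels, ~1/4 the compute
```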
arXiv Detail & Related papers (2022-07-13T14:41:05Z)
- Towards to Robust and Generalized Medical Image Segmentation Framework [17.24628770042803]
We propose a novel two-stage framework for robust generalized segmentation.
In particular, an unsupervised Tile-wise AutoEncoder (T-AE) pretraining architecture is designed to learn meaningful representations.
Experiments on lung segmentation across multiple chest X-ray datasets are conducted.
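A tile-wise autoencoder pretraining step of the kind described might look like the following; tile size, architecture, and reconstruction loss are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TileAutoEncoder(nn.Module):
    """Unsupervised pretraining: reconstruct image tiles so the encoder
    learns representations that transfer to the segmentation stage."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 2, stride=2),
        )

    def forward(self, tile):
        return self.decoder(self.encoder(tile))

# Pretraining on tiles carved from unlabeled chest X-rays (hypothetical size).
tiles = torch.randn(8, 1, 64, 64)
ae = TileAutoEncoder()
loss = nn.MSELoss()(ae(tiles), tiles)
```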
arXiv Detail & Related papers (2021-08-09T05:58:49Z)
- Flip Learning: Erase to Segment [65.84901344260277]
Weakly-supervised segmentation (WSS) can help reduce time-consuming and cumbersome manual annotation.
We propose a novel and general WSS framework called Flip Learning, which requires only box annotations.
Our proposed approach achieves competitive performance and shows great potential to narrow the gap between fully-supervised and weakly-supervised learning.
arXiv Detail & Related papers (2021-08-02T09:56:10Z)
- Weakly Supervised Volumetric Segmentation via Self-taught Shape Denoising Model [27.013224147257198]
We propose a novel weakly-supervised segmentation strategy capable of better capturing 3D shape prior in both model prediction and learning.
Our main idea is to extract a self-taught shape representation by leveraging weak labels, and then integrate this representation into segmentation prediction for shape refinement.
arXiv Detail & Related papers (2021-04-27T10:03:45Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our results suggest a new direction for label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- Prior Guided Feature Enrichment Network for Few-Shot Segmentation [64.91560451900125]
State-of-the-art semantic segmentation methods require sufficient labeled data to achieve good results.
Few-shot segmentation is proposed to tackle this problem by learning a model that quickly adapts to new classes with a few labeled support samples.
These frameworks still face reduced generalization ability on unseen classes due to inappropriate use of high-level semantic information.
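For context, the "prior" in prior-guided few-shot segmentation can be a training-free correspondence map: per query location, the maximum cosine similarity to any masked support location in frozen high-level features. A minimal version, with shapes assumed:

```python
import torch
import torch.nn.functional as F

def prior_mask(query_feat, support_feat, support_mask):
    """Training-free prior: for each query location, the maximum cosine
    similarity to any foreground support location."""
    # query_feat, support_feat: (B, C, H, W); support_mask: (B, 1, H, W)
    B, C, H, W = query_feat.shape
    q = F.normalize(query_feat.flatten(2), dim=1)           # (B, C, HW)
    s = F.normalize((support_feat * support_mask).flatten(2), dim=1)
    sim = torch.bmm(q.transpose(1, 2), s)                   # (B, HW, HW)
    prior = sim.max(dim=2).values.view(B, 1, H, W)
    # Rescale to [0, 1] per image before feeding it to the learner.
    pmin = prior.amin(dim=(2, 3), keepdim=True)
    pmax = prior.amax(dim=(2, 3), keepdim=True)
    return (prior - pmin) / (pmax - pmin + 1e-7)
```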
arXiv Detail & Related papers (2020-08-04T10:41:32Z)
- Improving Semantic Segmentation via Self-Training [75.07114899941095]
We show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm.
We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data.
Our robust training framework digests human-annotated and pseudo labels jointly and achieves top performance on the Cityscapes, CamVid, and KITTI datasets.
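The recipe above compresses to a few lines; confidence thresholding is one common way to filter pseudo labels and is assumed here rather than taken from the paper.

```python
import torch

@torch.no_grad()
def make_pseudo_labels(teacher, unlabeled_images, threshold: float = 0.9):
    """Self-training step 2: the trained teacher labels unlabeled images;
    low-confidence pixels are marked 255 and ignored by the student's loss."""
    probs = torch.softmax(teacher(unlabeled_images), dim=1)   # (B, K, H, W)
    conf, labels = probs.max(dim=1)
    labels[conf < threshold] = 255          # ignore_index for CrossEntropyLoss
    return labels

# The student then trains on human labels and pseudo labels jointly,
# e.g. with nn.CrossEntropyLoss(ignore_index=255).
```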
arXiv Detail & Related papers (2020-04-30T17:09:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.