Domain-stratified Training for Cross-organ and Cross-scanner Adenocarcinoma Segmentation in the COSAS 2024 Challenge
- URL: http://arxiv.org/abs/2409.12418v1
- Date: Thu, 19 Sep 2024 02:36:34 GMT
- Title: Domain-stratified Training for Cross-organ and Cross-scanner Adenocarcinoma Segmentation in the COSAS 2024 Challenge
- Authors: Huang Jiayan, Ji Zheng, Kuang Jinbo, Xu Shuoyu
- Abstract summary: This manuscript presents an image segmentation algorithm developed for the Cross-Organ and Cross-Scanner Adenocarcinoma Segmentation (COSAS 2024) challenge.
We adopted an organ-stratified and scanner-stratified approach to train multiple UperNet-based segmentation models and ensembled the results.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This manuscript presents an image segmentation algorithm developed for the Cross-Organ and Cross-Scanner Adenocarcinoma Segmentation (COSAS 2024) challenge. We adopted an organ-stratified and scanner-stratified approach to train multiple UperNet-based segmentation models and subsequently ensembled the results. Despite the challenges posed by the varying tumor characteristics across different organs and the differing imaging conditions of various scanners, our method achieved a final test score of 0.7643 for Task 1 and 0.8354 for Task 2. These results demonstrate the adaptability and efficacy of our approach across diverse conditions. Our model's ability to generalize across various datasets underscores its potential for real-world applications.
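The recipe described in the abstract (train one UperNet-based model per organ or scanner stratum, then ensemble the predictions) could look roughly like the minimal PyTorch sketch below. This is an illustration under stated assumptions, not the authors' released code: `build_upernet`, `get_loader`, and the stratum keys are hypothetical placeholders, and the training loop is deliberately simplified.

```python
# Hypothetical sketch of domain-stratified training and ensembling.
# build_upernet(), get_loader(), and STRATA are illustrative stand-ins,
# not the authors' implementation.
import torch

STRATA = ["organ_A", "organ_B", "organ_C"]  # or scanner IDs for the cross-scanner track


def train_stratified(num_epochs=40, device="cuda"):
    """Train one UperNet-style model per domain stratum."""
    models = {}
    for stratum in STRATA:
        model = build_upernet(num_classes=2).to(device)   # hypothetical model factory
        loader = get_loader(stratum)                       # hypothetical per-stratum data loader
        optim = torch.optim.AdamW(model.parameters(), lr=1e-4)
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(num_epochs):
            for images, masks in loader:
                optim.zero_grad()
                logits = model(images.to(device))          # (B, C, H, W) class logits
                loss = loss_fn(logits, masks.to(device))
                loss.backward()
                optim.step()
        models[stratum] = model.eval()
    return models


@torch.no_grad()
def ensemble_predict(models, image):
    """Average softmax probabilities across the per-stratum models, then take argmax."""
    probs = [torch.softmax(m(image), dim=1) for m in models.values()]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)   # final segmentation mask
```

Averaging softmax probabilities is one common fusion rule for such ensembles; the abstract does not specify how the per-stratum outputs are combined, so other choices (majority voting, weighted averaging) would fit the same description.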
Related papers
- Adenocarcinoma Segmentation Using Pre-trained Swin-UNet with Parallel Cross-Attention for Multi-Domain Imaging [0.2878844332549157]
We present a framework consisting of a pre-trained encoder within a Swin-UNet architecture, enhanced by a parallel cross-attention module, to tackle adenocarcinoma segmentation across different organs and scanners.
Experiments showed that our framework achieved segmentation scores of 0.7469 for the cross-organ track and 0.7597 for the cross-scanner track.
arXiv Detail & Related papers (2024-09-23T19:38:43Z) - Cross-Organ and Cross-Scanner Adenocarcinoma Segmentation using Rein to Fine-tune Vision Foundation Models [0.0]
We use Rein to fine-tune various vision foundation models (VFMs) for the MICCAI 2024 Cross-Organ and Cross-Scanner Adenocarcinoma Segmentation (COSAS 2024) challenge.
In the data environment of the COSAS 2024 Challenge, extensive experiments demonstrate that fine-tuning the VFMs with Rein achieves satisfactory results.
arXiv Detail & Related papers (2024-09-18T07:10:24Z) - Domain and Content Adaptive Convolutions for Cross-Domain Adenocarcinoma Segmentation [0.44569778071604066]
We present a U-Net-based segmentation framework to tackle the Cross-Organ and Cross-Scanner Adenocarcinoma Segmentation (COSAS) challenge.
Our approach achieved segmentation scores of 0.8020 for the cross-organ track and 0.8527 for the cross-scanner track on the final challenge test sets, making it the best-performing submission.
arXiv Detail & Related papers (2024-09-15T17:08:34Z) - Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.586530244472655]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge.
The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas.
The top-ranked team had a lesion-wise median Dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor, respectively.
arXiv Detail & Related papers (2024-05-16T03:23:57Z) - QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z) - WSSS4LUAD: Grand Challenge on Weakly-supervised Tissue Semantic Segmentation for Lung Adenocarcinoma [51.50991881342181]
This challenge includes 10,091 patch-level annotations and over 130 million labeled pixels.
The first-place team achieved an mIoU of 0.8413 (tumor: 0.8389, stroma: 0.7931, normal: 0.8919).
arXiv Detail & Related papers (2022-04-13T15:27:05Z) - Mitosis domain generalization in histopathology images -- The MIDOG challenge [12.69088811541426]
Recognition of mitotic figures by pathologists is subject to a strong inter-rater bias, which limits its prognostic value.
State-of-the-art deep learning methods can support the expert in this assessment but are known to deteriorate strongly when applied in a clinical environment different from the one used for training.
The MICCAI MIDOG 2021 challenge was organised to propose and evaluate methods that derive scanner-agnostic mitosis detection algorithms.
arXiv Detail & Related papers (2022-04-06T11:43:10Z) - Co-Heterogeneous and Adaptive Segmentation from Multi-Source and Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion Segmentation [48.504790189796836]
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance-based semi-supervision, mask-based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sorensen coefficients by $4.2\% \sim 9.4\%$.
arXiv Detail & Related papers (2020-05-27T06:58:39Z) - AGE Challenge: Angle Closure Glaucoma Evaluation in Anterior Segment Optical Coherence Tomography [61.405005501608706]
Angle closure glaucoma (ACG) is a more aggressive disease than open-angle glaucoma.
Anterior Segment Optical Coherence Tomography (AS-OCT) imaging provides a fast and contactless way to discriminate angle closure from open angle.
There is no public AS-OCT dataset available for evaluating the existing methods in a uniform way.
We organized the Angle closure Glaucoma Evaluation challenge (AGE), held in conjunction with MICCAI 2019.
arXiv Detail & Related papers (2020-05-05T14:55:01Z) - Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer- and robot-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z) - VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and across different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)