Divide and Conquer: Grounding a Bleeding Areas in Gastrointestinal Image with Two-Stage Model
- URL: http://arxiv.org/abs/2412.16723v1
- Date: Sat, 21 Dec 2024 18:18:12 GMT
- Title: Divide and Conquer: Grounding a Bleeding Areas in Gastrointestinal Image with Two-Stage Model
- Authors: Yu-Fan Lin, Bo-Cheng Qiu, Chia-Ming Lee, Chih-Chung Hsu
- Abstract summary: This study proposes a two-stage framework that decouples classification and grounding to address the inherent challenges posed by traditional Multi-Task Learning models.
Experimental results demonstrate significant improvements in classification accuracy and segmentation precision.
- Abstract: Accurate detection and segmentation of gastrointestinal bleeding are critical for diagnosing diseases such as peptic ulcers and colorectal cancer. This study proposes a two-stage framework that decouples classification and grounding to address the inherent challenges posed by traditional Multi-Task Learning models, which jointly optimize classification and segmentation. Our approach separates these tasks to achieve targeted optimization for each. The model first classifies images as bleeding or non-bleeding, thereby isolating subsequent grounding from inter-task interference and label heterogeneity. To further enhance performance, we incorporate Stochastic Weight Averaging and Test-Time Augmentation, which improve model robustness against domain shifts and annotation inconsistencies. Our method is validated on the Auto-WCEBleedGen Challenge V2 dataset, where it achieved second place. Experimental results demonstrate significant improvements in classification accuracy and segmentation precision, especially on sequential datasets with consistent visual patterns. This study highlights the practical benefits of a two-stage strategy for medical image analysis and sets a new standard for GI bleeding detection and segmentation. Our code is publicly available at this GitHub repository.
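The abstract describes the pipeline concretely enough to sketch: a first-stage classifier labels each frame as bleeding or non-bleeding, and only the frames flagged as bleeding are passed to the grounding/segmentation stage, with Test-Time Augmentation averaging predictions over augmented views and Stochastic Weight Averaging applied during training. The PyTorch snippet below is a minimal sketch of that flow under stated assumptions: the `TinyClassifier`/`TinySegmenter` stand-in networks, the 0.5 decision threshold, and the flip-only TTA transforms are illustrative placeholders, not the authors' released implementation.

```python
# Minimal sketch of the decoupled two-stage idea described in the abstract.
# Model definitions, threshold, and TTA transforms are illustrative assumptions.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Stage 1: bleeding vs. non-bleeding (stand-in backbone)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )
    def forward(self, x):
        return self.net(x)  # logit for "bleeding"

class TinySegmenter(nn.Module):
    """Stage 2: grounding/segmentation, run only on bleeding frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
    def forward(self, x):
        return self.net(x)  # per-pixel logit

def tta_segment(model, x):
    """Test-Time Augmentation: average predictions over flipped views."""
    views = [x, torch.flip(x, dims=[-1]), torch.flip(x, dims=[-2])]
    inverse = [lambda y: y,
               lambda y: torch.flip(y, dims=[-1]),
               lambda y: torch.flip(y, dims=[-2])]
    preds = [inv(model(v)) for v, inv in zip(views, inverse)]
    return torch.stack(preds).mean(dim=0)

@torch.no_grad()
def two_stage_inference(classifier, segmenter, images, cls_thresh=0.5):
    """Classify first; ground only the frames predicted as bleeding."""
    probs = torch.sigmoid(classifier(images)).squeeze(1)
    masks = torch.zeros(images.shape[0], 1, *images.shape[-2:])
    bleeding = probs > cls_thresh
    if bleeding.any():
        masks[bleeding] = torch.sigmoid(tta_segment(segmenter, images[bleeding]))
    return bleeding, masks

if __name__ == "__main__":
    cls, seg = TinyClassifier().eval(), TinySegmenter().eval()
    # Stochastic Weight Averaging belongs to the training loop, e.g. by wrapping
    # each network in torch.optim.swa_utils.AveragedModel and updating it over
    # the final epochs (training details assumed, not shown here).
    labels, masks = two_stage_inference(cls, seg, torch.rand(4, 3, 224, 224))
    print(labels.shape, masks.shape)
```

Decoupling in this way means the segmentation stage never receives frames predicted as non-bleeding at inference time, which is the isolation from inter-task interference and label heterogeneity that the abstract refers to.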
Related papers
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
In the WASPSYN challenge at ISBI 2023, our method ranks 1st place.
arXiv Detail & Related papers (2023-08-31T05:05:53Z)
- Cross-supervised Dual Classifiers for Semi-supervised Medical Image Segmentation [10.18427897663732]
Semi-supervised medical image segmentation offers a promising solution for large-scale medical image analysis.
This paper proposes a cross-supervised learning framework based on dual classifiers (DC-Net).
Experiments on the LA and Pancreas-CT datasets illustrate that DC-Net outperforms other state-of-the-art methods for semi-supervised segmentation.
arXiv Detail & Related papers (2023-05-25T16:23:39Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network with a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Cross-Site Severity Assessment of COVID-19 from CT Images via Domain Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) from computed tomography (CT) images greatly aids the estimation of intensive care unit (ICU) events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges, including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and the presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z)
- Cascaded Robust Learning at Imperfect Labels for Chest X-ray Segmentation [61.09321488002978]
We present a novel cascaded robust learning framework for chest X-ray segmentation with imperfect annotation.
Our model consists of three independent networks, which can effectively learn useful information from their peer networks.
Our method achieves a significant improvement in segmentation accuracy compared to previous methods.
arXiv Detail & Related papers (2021-04-05T15:50:16Z)
- Weakly-Supervised Segmentation for Disease Localization in Chest X-Ray Images [0.0]
We propose a novel approach to the semantic segmentation of medical chest X-ray images with only image-level class labels as supervision.
We show that this approach is applicable to chest X-rays for detecting an anomalous volume of air between the lung and the chest wall.
arXiv Detail & Related papers (2020-07-01T20:48:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.