SSTFB: Leveraging self-supervised pretext learning and temporal self-attention with feature branching for real-time video polyp segmentation
- URL: http://arxiv.org/abs/2406.10200v1
- Date: Fri, 14 Jun 2024 17:33:11 GMT
- Title: SSTFB: Leveraging self-supervised pretext learning and temporal self-attention with feature branching for real-time video polyp segmentation
- Authors: Ziang Xu, Jens Rittscher, Sharib Ali
- Abstract summary: We propose a video polyp segmentation method that combines self-supervised learning as an auxiliary task with a spatial-temporal self-attention mechanism for improved representation learning.
Our experimental results demonstrate an improvement over several state-of-the-art (SOTA) methods.
Our ablation study confirms that the proposed joint end-to-end training improves network accuracy by over 3% and nearly 10% on both the Dice similarity coefficient and intersection-over-union, compared with PNS+ and Polyp-PVT, respectively.
- Score: 4.027361638728112
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Polyps are early indicators of cancer, so assessing their occurrence and removing them is critical. They are observed during a colonoscopy screening procedure that generates a stream of video frames. Segmenting polyps in this natural video setting poses several challenges, such as the co-existence of imaging artefacts, motion blur, and floating debris. Most existing polyp segmentation algorithms are developed on curated still-image datasets that do not represent real-world colonoscopy, and their performance often degrades on video data. We propose a video polyp segmentation method that combines self-supervised learning as an auxiliary task with a spatial-temporal self-attention mechanism for improved representation learning. Our end-to-end configuration and joint optimisation of losses enable the network to learn more discriminative contextual features in videos. Our experimental results demonstrate an improvement over several state-of-the-art (SOTA) methods. Our ablation study also confirms that the proposed joint end-to-end training improves network accuracy by over 3% and nearly 10% on both the Dice similarity coefficient and intersection-over-union compared with the recently proposed methods PNS+ and Polyp-PVT, respectively. Results on previously unseen video data indicate that the proposed method generalises.
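To make the joint optimisation concrete, the following is a minimal sketch of training a segmentation head together with a self-supervised auxiliary objective. The pretext task (frame reconstruction), the loss weight `alpha`, and the function names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dice_loss(pred_logits, target, eps=1e-6):
    """Soft Dice loss over a batch of (B, 1, H, W) masks."""
    pred = torch.sigmoid(pred_logits)
    inter = (pred * target).sum(dim=(2, 3))
    union = pred.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

def joint_loss(seg_logits, seg_mask, recon, frame, alpha=0.5):
    """Segmentation loss plus a self-supervised auxiliary loss.
    Frame reconstruction stands in for the pretext task; alpha is an assumed weight."""
    l_seg = dice_loss(seg_logits, seg_mask) \
            + F.binary_cross_entropy_with_logits(seg_logits, seg_mask.float())
    l_ssl = F.l1_loss(recon, frame)        # auxiliary pretext objective
    return l_seg + alpha * l_ssl           # jointly optimised end to end
```

Because both objectives back-propagate through a shared encoder, the pretext loss acts as a regulariser on the features used for segmentation.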
Related papers
- ASPS: Augmented Segment Anything Model for Polyp Segmentation [77.25557224490075]
The Segment Anything Model (SAM) has introduced unprecedented potential for polyp segmentation.
SAM's Transformer-based structure prioritizes global and low-frequency information.
CFA integrates a trainable CNN encoder branch with a frozen ViT encoder, enabling the integration of domain-specific knowledge.
arXiv Detail & Related papers (2024-06-30T14:55:32Z)
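A minimal sketch of the cross-branch idea summarised in the ASPS entry above: a frozen ViT encoder supplies global, low-frequency context while a trainable CNN branch adds domain-specific detail, and the two feature streams are fused. The module names, projection layers, and fusion-by-addition are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class CrossBranchEncoder(nn.Module):
    """Frozen ViT features enriched with a trainable CNN branch (illustrative)."""
    def __init__(self, vit: nn.Module, cnn: nn.Module, vit_dim: int, cnn_dim: int, out_dim: int):
        super().__init__()
        self.vit = vit
        for p in self.vit.parameters():     # keep the ViT encoder frozen
            p.requires_grad_(False)
        self.cnn = cnn                      # trainable, domain-specific branch
        self.proj_vit = nn.Conv2d(vit_dim, out_dim, kernel_size=1)
        self.proj_cnn = nn.Conv2d(cnn_dim, out_dim, kernel_size=1)

    def forward(self, x):
        # assumes both branches return (B, C, H, W) feature maps of matching spatial size
        with torch.no_grad():
            f_vit = self.vit(x)             # global, low-frequency context
        f_cnn = self.cnn(x)                 # local, high-frequency detail
        return self.proj_vit(f_vit) + self.proj_cnn(f_cnn)   # simple additive fusion
```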
- BetterNet: An Efficient CNN Architecture with Residual Learning and Attention for Precision Polyp Segmentation [0.6062751776009752]
This research presents BetterNet, a convolutional neural network architecture that combines residual learning and attention methods to enhance the accuracy of polyp segmentation.
BetterNet shows promise in integrating computer-assisted diagnosis techniques to enhance the detection of polyps and the early recognition of cancer.
arXiv Detail & Related papers (2024-05-05T21:08:49Z)
- RetSeg: Retention-based Colorectal Polyps Segmentation Network [0.0]
Vision Transformers (ViTs) have revolutionized medical imaging analysis.
ViTs exhibit contextual awareness when processing visual data, yielding robust and precise predictions.
We introduce RetSeg, an encoder-decoder network featuring multi-head retention blocks.
arXiv Detail & Related papers (2023-10-09T06:43:38Z)
- YONA: You Only Need One Adjacent Reference-frame for Accurate and Fast Video Polyp Detection [80.68520401539979]
YONA (You Only Need One Adjacent Reference-frame) is an efficient end-to-end training framework for video polyp detection.
Our proposed YONA outperforms previous state-of-the-art competitors by a large margin in both accuracy and speed.
arXiv Detail & Related papers (2023-06-06T13:53:15Z)
- Accurate Real-time Polyp Detection in Videos from Concatenation of Latent Features Extracted from Consecutive Frames [5.2009074009536524]
Convolutional neural networks (CNNs) are vulnerable to small changes in the input image.
A CNN-based model may miss the same polyp appearing in a series of consecutive frames.
We propose an efficient feature concatenation method for a CNN-based encoder-decoder model.
arXiv Detail & Related papers (2023-03-10T11:51:22Z)
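A minimal sketch of the feature-concatenation idea summarised in the entry above: latent features from consecutive frames are concatenated along the channel dimension before decoding, so a polyp missed in one frame can still be recovered from its neighbours. The encoder/decoder interfaces and the number of frames are assumptions.

```python
import torch
import torch.nn as nn

class TemporalConcatSegmenter(nn.Module):
    """Concatenate latent features from consecutive frames before decoding (illustrative)."""
    def __init__(self, encoder: nn.Module, decoder: nn.Module, feat_dim: int, num_frames: int = 3):
        super().__init__()
        self.encoder = encoder                 # shared per-frame CNN encoder
        self.decoder = decoder                 # segmentation / detection head
        self.fuse = nn.Conv2d(feat_dim * num_frames, feat_dim, kernel_size=1)

    def forward(self, frames):                 # frames: (B, T, C, H, W) consecutive frames
        b, t, c, h, w = frames.shape
        feats = [self.encoder(frames[:, i]) for i in range(t)]   # T maps of shape (B, F, h', w')
        fused = self.fuse(torch.cat(feats, dim=1))               # channel-wise concatenation
        return self.decoder(fused)
```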
- Lesion-aware Dynamic Kernel for Polyp Segmentation [49.63274623103663]
We propose a lesion-aware dynamic network (LDNet) for polyp segmentation.
It is a traditional U-shaped encoder-decoder structure equipped with a dynamic kernel generation and updating scheme.
This simple but effective scheme endows our model with powerful segmentation performance and generalization capability.
arXiv Detail & Related papers (2023-01-12T09:53:57Z)
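A minimal sketch of a dynamic-kernel head in the spirit of the LDNet entry above: a per-image convolution kernel is generated from the encoded lesion context and applied to the decoder feature map. The 1x1 kernel size, global-average context, and layer names are assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn as nn

class DynamicKernelHead(nn.Module):
    """Generate a per-image 1x1 segmentation kernel from global context (illustrative)."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.kernel_gen = nn.Linear(feat_dim, feat_dim)   # predicts the dynamic kernel weights

    def forward(self, feats):                 # feats: (B, C, H, W) decoder features
        context = feats.mean(dim=(2, 3))      # global, lesion-aware context vector (B, C)
        kernels = self.kernel_gen(context)    # one 1x1 kernel per image (B, C)
        logits = torch.einsum('bchw,bc->bhw', feats, kernels)   # apply each image's own kernel
        return logits.unsqueeze(1)            # (B, 1, H, W) mask logits
```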
- Contrastive Transformer-based Multiple Instance Learning for Weakly Supervised Polyp Frame Detection [30.51410140271929]
Current polyp detection methods for colonoscopy videos use exclusively normal (i.e., healthy) training images.
We formulate polyp detection as a weakly-supervised anomaly detection task that uses video-level labelled training data to detect frame-level polyps.
arXiv Detail & Related papers (2022-03-23T01:30:48Z)
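A minimal sketch of the weakly supervised formulation summarised above: per-frame anomaly scores are pooled into a single video-level score that is trained against the video-level label. The top-k mean pooling and the loss choice are illustrative assumptions, not the paper's exact design.

```python
import torch.nn.functional as F

def video_level_loss(frame_scores, video_label, k=8):
    """frame_scores: (T,) per-frame logits for one video; video_label: scalar 0/1 tensor.
    Top-k mean pooling is an illustrative MIL choice."""
    k = min(k, frame_scores.numel())
    video_logit = frame_scores.topk(k).values.mean()   # pool frame evidence into a video score
    return F.binary_cross_entropy_with_logits(video_logit, video_label.float())
```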
- Self-Supervised U-Net for Segmenting Flat and Sessile Polyps [63.62764375279861]
Development of colorectal polyps is one of the earliest signs of cancer.
Early detection and resection of polyps can increase the survival rate to 90%.
Computer-Aided Diagnosis (CADx) systems have been proposed that detect polyps by processing colonoscopy videos.
arXiv Detail & Related papers (2021-10-17T09:31:20Z)
- Automatic Polyp Segmentation via Multi-scale Subtraction Network [100.94922587360871]
In clinical practice, precise polyp segmentation provides important information in the early detection of colorectal cancer.
Most existing methods are based on a U-shaped structure and use element-wise addition or concatenation to progressively fuse features from different levels in the decoder.
We propose a multi-scale subtraction network (MSNet) to segment polyps from colonoscopy images.
arXiv Detail & Related papers (2021-08-11T07:54:07Z)
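A minimal sketch of subtraction-based fusion in the spirit of the MSNet entry above, replacing element-wise addition or concatenation with a difference between adjacent-level features. The upsampling, absolute difference, and convolution block are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubtractionUnit(nn.Module):
    """Fuse two adjacent-level features by element-wise subtraction (illustrative)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                  nn.BatchNorm2d(channels), nn.ReLU(inplace=True))

    def forward(self, f_high, f_low):
        # bring the deeper (lower-resolution) feature up to the higher resolution
        f_low = F.interpolate(f_low, size=f_high.shape[2:], mode='bilinear', align_corners=False)
        return self.conv(torch.abs(f_high - f_low))   # difference highlights complementary cues
```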
- Colonoscopy Polyp Detection: Domain Adaptation From Medical Report Images to Real-time Videos [76.37907640271806]
We propose an Image-video-joint polyp detection network (Ivy-Net) to address the domain gap between colonoscopy images from historical medical reports and real-time videos.
Experiments on the collected dataset demonstrate that our Ivy-Net achieves state-of-the-art results on colonoscopy videos.
arXiv Detail & Related papers (2020-12-31T10:33:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.