Shifting More Attention to Breast Lesion Segmentation in Ultrasound
Videos
- URL: http://arxiv.org/abs/2310.01861v1
- Date: Tue, 3 Oct 2023 07:50:32 GMT
- Title: Shifting More Attention to Breast Lesion Segmentation in Ultrasound
Videos
- Authors: Junhao Lin, Qian Dai, Lei Zhu, Huazhu Fu, Qiong Wang, Weibin Li,
Wenhao Rao, Xiaoyang Huang, Liansheng Wang
- Abstract summary: We meticulously curated a US video breast lesion segmentation dataset comprising 572 videos and 34,300 annotated frames.
We propose a novel frequency and localization feature aggregation network (FLA-Net) that learns temporal features from the frequency domain.
Our experiments on our annotated dataset and two public video polyp segmentation datasets demonstrate that our proposed FLA-Net achieves state-of-the-art performance.
- Score: 43.454994341021276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Breast lesion segmentation in ultrasound (US) videos is essential for
diagnosing and treating axillary lymph node metastasis. However, the lack of a
well-established and large-scale ultrasound video dataset with high-quality
annotations has posed a persistent challenge for the research community. To
overcome this issue, we meticulously curated a US video breast lesion
segmentation dataset comprising 572 videos and 34,300 annotated frames,
covering a wide range of realistic clinical scenarios. Furthermore, we propose
a novel frequency and localization feature aggregation network (FLA-Net) that
learns temporal features from the frequency domain and predicts additional
lesion locations to assist with breast lesion segmentation. We also
devise a localization-based contrastive loss to reduce the lesion location
distance between neighboring video frames within the same video and enlarge the
location distances between frames from different ultrasound videos. Our
experiments on our annotated dataset and two public video polyp segmentation
datasets demonstrate that our proposed FLA-Net achieves state-of-the-art
performance in breast lesion segmentation in US videos and video polyp
segmentation while significantly reducing time and space complexity. Our model
and dataset are available at https://github.com/jhl-Det/FLA-Net.
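The localization-based contrastive loss described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact formulation: the function name, the margin value, and the assumption that each frame's lesion location is a predicted 2-D center coordinate are all assumptions made here for clarity.

```python
import torch
import torch.nn.functional as F

def localization_contrastive_loss(locations, video_ids, margin=1.0):
    """Sketch of a localization-based contrastive loss.

    Pulls predicted lesion locations of frames from the same video
    together and pushes locations of frames from different videos
    apart (hinged at `margin`).

    locations: (N, 2) predicted lesion-center coordinates per frame
    video_ids: (N,)  integer id of the source video for each frame
    """
    dists = torch.cdist(locations, locations)            # (N, N) pairwise distances
    same = video_ids.unsqueeze(0) == video_ids.unsqueeze(1)
    eye = torch.eye(len(video_ids), dtype=torch.bool)
    pos = same & ~eye                                    # same video, distinct frames
    neg = ~same                                          # frames from different videos
    pos_loss = dists[pos].mean() if pos.any() else dists.new_zeros(())
    neg_loss = F.relu(margin - dists[neg]).mean() if neg.any() else dists.new_zeros(())
    return pos_loss + neg_loss
```

In this sketch the positive term shrinks the location distance between neighboring frames of the same ultrasound video, while the hinge term only penalizes cross-video pairs that fall within the margin, matching the intuition that a lesion moves little between adjacent frames but may sit anywhere across videos.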
Related papers
- LGRNet: Local-Global Reciprocal Network for Uterine Fibroid Segmentation in Ultrasound Videos [19.661094457941417]
Regular screening and early discovery of uterine fibroid are crucial for preventing potential malignant transformations.
We present Local-Global Reciprocal Network (LGRNet) to efficiently and effectively propagate the long-term temporal context.
arXiv Detail & Related papers (2024-07-08T08:06:06Z)
- A Spatial-Temporal Progressive Fusion Network for Breast Lesion Segmentation in Ultrasound Videos [7.0117363464728815]
The main challenge for ultrasound video-based breast lesion segmentation is how to exploit the lesion cues of both intraframe and inter-frame simultaneously.
We propose a novel Spatial-Temporal Progressive Fusion Network (STPFNet) for the video-based breast lesion segmentation problem.
STPFNet achieves better breast lesion detection performance than state-of-the-art methods.
arXiv Detail & Related papers (2024-03-18T11:56:32Z)
- Is Two-shot All You Need? A Label-efficient Approach for Video Segmentation in Breast Ultrasound [4.113689581316844]
We propose a novel two-shot training paradigm for BUS video segmentation.
It not only captures free-range space-time consistency but also utilizes a source-dependent augmentation scheme.
Results showed that it achieved performance comparable to fully annotated training while using only 1.9% of the training labels.
arXiv Detail & Related papers (2024-02-07T14:47:08Z)
- Vivim: a Video Vision Mamba for Medical Video Segmentation [52.11785024350253]
This paper presents a Video Vision Mamba-based framework, dubbed Vivim, for medical video segmentation tasks.
Our Vivim can effectively compress the long-term representation into sequences at varying scales.
Experiments on thyroid segmentation, breast lesion segmentation in ultrasound videos, and polyp segmentation in colonoscopy videos demonstrate the effectiveness and efficiency of our Vivim.
arXiv Detail & Related papers (2024-01-25T13:27:03Z)
- A Spatial-Temporal Deformable Attention based Framework for Breast Lesion Detection in Videos [107.96514633713034]
We propose a spatial-temporal deformable attention based framework, named STNet.
Our STNet introduces a spatial-temporal deformable attention module to perform local spatial-temporal feature fusion.
Experiments on the public breast lesion ultrasound video dataset show that our STNet obtains a state-of-the-art detection performance.
arXiv Detail & Related papers (2023-09-09T07:00:10Z)
- A New Dataset and A Baseline Model for Breast Lesion Detection in Ultrasound Videos [43.42513012531214]
We first collect and annotate an ultrasound video dataset (188 videos) for breast lesion detection.
We propose a clip-level and video-level feature aggregated network (CVA-Net) for addressing breast lesion detection in ultrasound videos.
arXiv Detail & Related papers (2022-07-01T01:37:50Z)
- Global Guidance Network for Breast Lesion Segmentation in Ultrasound Images [84.03487786163781]
We develop a deep convolutional neural network equipped with a global guidance block (GGB) and breast lesion boundary detection modules.
Our network outperforms other medical image segmentation methods and the recent semantic segmentation methods on breast ultrasound lesion segmentation.
arXiv Detail & Related papers (2021-04-05T13:15:22Z)
- Coherent Loss: A Generic Framework for Stable Video Segmentation [103.78087255807482]
We investigate how a jittering artifact degrades the visual quality of video segmentation results.
We propose a Coherent Loss with a generic framework to enhance the performance of a neural network against jittering artifacts.
arXiv Detail & Related papers (2020-10-25T10:48:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.