Lesion Segmentation and RECIST Diameter Prediction via Click-driven
Attention and Dual-path Connection
- URL: http://arxiv.org/abs/2105.01828v1
- Date: Wed, 5 May 2021 02:00:14 GMT
- Title: Lesion Segmentation and RECIST Diameter Prediction via Click-driven
Attention and Dual-path Connection
- Authors: Youbao Tang, Ke Yan, Jinzheng Cai, Lingyun Huang, Guotong Xie, Jing
Xiao, Jingjing Lu, Gigin Lin, and Le Lu
- Abstract summary: Measuring lesion size is an important step to assess tumor growth and monitor disease progression and therapy response.
We present a prior-guided dual-path network (PDNet) to segment common types of lesions throughout the whole body.
PDNet learns comprehensive and representative deep image features for our tasks and produces more accurate results.
- Score: 16.80758525711538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Measuring lesion size is an important step to assess tumor growth and monitor
disease progression and therapy response in oncology image analysis. Although
it is tedious and highly time-consuming, radiologists routinely have to perform
this task manually using the RECIST criteria (Response Evaluation Criteria In
Solid Tumors). Even though lesion segmentation may be a more accurate and
clinically more valuable measure, physicians cannot segment lesions manually in
routine practice, since doing so would require far more labor. In this
paper, we present a prior-guided dual-path network (PDNet) to segment common
types of lesions throughout the whole body and predict their RECIST diameters
accurately and automatically. Similar to [1], a click from radiologists is the
only guidance required. There are two key characteristics in
PDNet: 1) Learning lesion-specific attention matrices in parallel from the
click prior information by the proposed prior encoder, named click-driven
attention; 2) Aggregating the extracted multi-scale features comprehensively by
introducing top-down and bottom-up connections in the proposed decoder, named
dual-path connection. Experiments show the superiority of our proposed PDNet in
lesion segmentation and RECIST diameter prediction using the DeepLesion dataset
and an external test set. PDNet learns comprehensive and representative deep
image features for our tasks and produces more accurate results on both lesion
segmentation and RECIST diameter prediction.
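The two mechanisms above can be illustrated with a deliberately minimal, framework-free sketch. PDNet's actual prior encoder and decoder are learned convolutional modules; here the click prior is approximated as a fixed Gaussian heatmap and the dual-path fusion as an element-wise sum, and all function and variable names are illustrative:

```python
import math

def click_attention(h, w, click_yx, sigma=2.0):
    """Encode a radiologist's click as a Gaussian prior heatmap.

    Produces a soft attention map that peaks at the clicked pixel and
    decays with distance; PDNet learns lesion-specific attention from
    this kind of click prior rather than using a fixed Gaussian.
    """
    cy, cx = click_yx
    return [[math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]

def apply_attention(feat, attn):
    """Modulate a single-channel feature map with the attention map."""
    return [[f * a for f, a in zip(fr, ar)] for fr, ar in zip(feat, attn)]

def dual_path_merge(top_down, bottom_up):
    """Fuse features arriving via top-down and bottom-up decoder paths
    (element-wise sum is the simplest possible aggregation choice)."""
    return [[t + b for t, b in zip(tr, br)]
            for tr, br in zip(top_down, bottom_up)]

# Toy usage: a 5x5 feature map of ones, click at its centre (2, 2).
feat = [[1.0] * 5 for _ in range(5)]
attn = click_attention(5, 5, (2, 2))
att_feat = apply_attention(feat, attn)   # attention-weighted features
fused = dual_path_merge(att_feat, feat)  # merge the two decoder paths
```

The attention map is maximal (1.0) at the clicked pixel and falls off with distance, so the fused response is strongest at the click, which is the intuition behind steering segmentation with a click prior.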
Related papers
- An Attention Based Pipeline for Identifying Pre-Cancer Lesions in Head and Neck Clinical Images [1.0957311485487375]
Head and neck cancer is diagnosed in specialist centres after a surgical biopsy, but there is a potential for these to be missed leading to delayed diagnosis.
We present an attention based pipeline that identifies suspected lesions, segments, and classifies them as non-dysplastic, dysplastic and cancerous lesions.
arXiv Detail & Related papers (2024-05-03T09:02:17Z)
- Real-time guidewire tracking and segmentation in intraoperative x-ray [52.51797358201872]
We propose a two-stage deep learning framework for real-time guidewire segmentation and tracking.
In the first stage, a Yolov5 detector is trained, using the original X-ray images as well as synthetic ones, to output the bounding boxes of possible target guidewires.
In the second stage, a novel and efficient network is proposed to segment the guidewire in each detected bounding box.
arXiv Detail & Related papers (2024-04-12T20:39:19Z)
- Accurate and Robust Lesion RECIST Diameter Prediction and Segmentation with Transformers [22.528235432455524]
This paper proposes a transformer-based network for lesion RECIST diameter prediction and segmentation (LRDPS).
It is formulated as three correlative and complementary tasks: lesion segmentation, heatmap prediction, and keypoint regression.
MeaFormer achieves the state-of-the-art performance of LRDPS on the large-scale DeepLesion dataset.
arXiv Detail & Related papers (2022-08-28T01:43:21Z)
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs rely on patch-based self-attention rather than convolutions and, in contrast to CNNs, encode no prior knowledge of local connectivity.
Our results show that while the performance between ViTs and CNNs is on par with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- Two-Stage Mesh Deep Learning for Automated Tooth Segmentation and Landmark Localization on 3D Intraoral Scans [56.55092443401416]
iMeshSegNet in the first stage of TS-MDL reached an averaged Dice similarity coefficient (DSC) of 0.953±0.076, significantly outperforming the original MeshSegNet.
PointNet-Reg achieved a mean absolute error (MAE) of 0.623±0.718 mm in distances between the prediction and ground truth for 44 landmarks, which is superior compared with other networks for landmark detection.
arXiv Detail & Related papers (2021-09-24T13:00:26Z)
- RECIST-Net: Lesion detection via grouping keypoints on RECIST-based annotation [37.006151248641125]
We propose RECIST-Net, a new approach to lesion detection in which the four extreme points and center point of the RECIST diameters are detected.
Experiments show that RECIST-Net achieves a sensitivity of 92.49% at four false positives per image.
arXiv Detail & Related papers (2021-07-19T09:41:13Z)
- BiteNet: Bidirectional Temporal Encoder Network to Predict Medical Outcomes [53.163089893876645]
We propose a novel self-attention mechanism that captures the contextual dependency and temporal relationships within a patient's healthcare journey.
An end-to-end bidirectional temporal encoder network (BiteNet) then learns representations of the patient's journeys.
We have evaluated the effectiveness of our methods on two supervised prediction and two unsupervised clustering tasks with a real-world EHR dataset.
arXiv Detail & Related papers (2020-09-24T00:42:36Z)
- One Click Lesion RECIST Measurement and Segmentation on CT Scans [16.93574675459732]
In clinical trials, one of the radiologists' routine work is to measure tumor sizes on medical images using the RECIST criteria.
We propose a unified framework named SEENet for semi-automatic lesion SEgmentation and RECIST Estimation.
arXiv Detail & Related papers (2020-07-21T20:53:43Z)
- PraNet: Parallel Reverse Attention Network for Polyp Segmentation [155.93344756264824]
We propose a parallel reverse attention network (PraNet) for accurate polyp segmentation in colonoscopy images.
We first aggregate the features in high-level layers using a parallel partial decoder (PPD)
In addition, we mine the boundary cues using a reverse attention (RA) module, which is able to establish the relationship between areas and boundary cues.
arXiv Detail & Related papers (2020-06-13T08:13:43Z)
- VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
Large Scale Vertebrae Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and at different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)
- Weakly-Supervised Lesion Segmentation on CT Scans using Co-Segmentation [18.58056402884405]
Lesion segmentation on computed tomography (CT) scans is an important step for precisely monitoring changes in lesion/tumor growth.
Current practices rely on an imprecise substitute called response evaluation criteria in solid tumors.
This paper proposes a convolutional neural network based weakly-supervised lesion segmentation method.
arXiv Detail & Related papers (2020-01-23T15:15:53Z)
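Several entries above (RECIST-Net, MeaFormer, SEENet) frame RECIST measurement as keypoint detection: once the endpoint pairs of the long and short axes are predicted, the diameters themselves reduce to Euclidean distances. A minimal sketch of that final step, with all point coordinates and names purely illustrative:

```python
import math

def recist_diameters(long_a, long_b, short_a, short_b):
    """Compute RECIST long- and short-axis diameters from two predicted
    endpoint pairs, given as (row, col) pixel coordinates.

    Distances are in pixels; multiply by the scan's pixel spacing
    (mm per pixel) to obtain physical diameters.
    """
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    return dist(long_a, long_b), dist(short_a, short_b)

# Toy usage: long axis spans columns 10..40, short axis spans rows 5..15.
long_d, short_d = recist_diameters((10, 10), (10, 40), (5, 25), (15, 25))
```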
This list is automatically generated from the titles and abstracts of the papers in this site.