DKMA-ULD: Domain Knowledge augmented Multi-head Attention based Robust
Universal Lesion Detection
- URL: http://arxiv.org/abs/2203.06886v1
- Date: Mon, 14 Mar 2022 06:54:28 GMT
- Title: DKMA-ULD: Domain Knowledge augmented Multi-head Attention based Robust
Universal Lesion Detection
- Authors: Manu Sheoran, Meghal Dani, Monika Sharma, Lovekesh Vig
- Abstract summary: We propose a robust universal lesion detection (ULD) network that can detect lesions across all organs of the body by training on a single dataset, DeepLesion.
We analyze CT-slices of varying intensities, whose features are fused using a novel convolution-augmented multi-head self-attention module.
We evaluate the efficacy of our network on the publicly available DeepLesion dataset, which comprises approximately 32K CT scans with annotated lesions across all organs of the body.
- Score: 19.165942326142538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incorporating data-specific domain knowledge in deep networks explicitly can
provide important cues beneficial for lesion detection and can mitigate the
need for diverse heterogeneous datasets for learning robust detectors. In this
paper, we exploit the domain information present in computed tomography (CT)
scans and propose a robust universal lesion detection (ULD) network that can
detect lesions across all organs of the body by training on a single dataset,
DeepLesion. We analyze CT-slices of varying intensities, generated using
heuristically determined Hounsfield Unit (HU) windows that individually
highlight different organs and are given as inputs to the deep network. The
features obtained from the multiple intensity images are fused using a novel
convolution augmented multi-head self-attention module and subsequently, passed
to a Region Proposal Network (RPN) for lesion detection. In addition, we
observed that traditional anchor boxes used in RPN for natural images are not
suitable for lesion sizes often found in medical images. Therefore, we propose
to use lesion-specific anchor sizes and ratios in the RPN for improving the
detection performance. We use self-supervision to initialize weights of our
network on the DeepLesion dataset to further imbibe domain knowledge. Our
proposed Domain Knowledge augmented Multi-head Attention based Universal Lesion
Detection Network (DKMA-ULD) produces refined and precise bounding boxes around
lesions across different organs. We evaluate the efficacy of our network on the
publicly available DeepLesion dataset, which comprises approximately 32K CT
scans with annotated lesions across all organs of the body. Results demonstrate
that we outperform existing state-of-the-art methods achieving an overall
sensitivity of 87.16%.
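The abstract's two main ingredients, HU windowing of a CT slice into several organ-highlighting intensity images and lesion-specific anchor shapes for the RPN, can be sketched as follows. This is an illustrative sketch, not the authors' code; the window centers/widths and anchor sizes below are hypothetical placeholders, not values from the paper.

```python
# Sketch (assumptions labeled): HU windowing and lesion-specific anchors.

def apply_hu_window(slice_hu, center, width):
    """Clip raw HU values to [center - width/2, center + width/2]
    and rescale the result to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    return [[(min(max(v, lo), hi) - lo) / (hi - lo) for v in row]
            for row in slice_hu]

# Hypothetical organ-highlighting windows, given as (center, width) in HU.
HU_WINDOWS = {
    "lung":        (-600, 1500),
    "soft_tissue": (  40,  400),
    "bone":        ( 400, 1800),
}

def multi_intensity_channels(slice_hu):
    """One windowed image per HU window -> a multi-channel network input."""
    return {name: apply_hu_window(slice_hu, c, w)
            for name, (c, w) in HU_WINDOWS.items()}

def lesion_anchors(base_sizes=(8, 16, 32, 64), ratios=(0.5, 1.0, 2.0)):
    """Enumerate (width, height) anchor shapes. The base sizes here are
    smaller than common natural-image RPN defaults, reflecting the idea
    that lesions in CT are often small; the exact values are assumptions."""
    anchors = []
    for s in base_sizes:
        for r in ratios:
            anchors.append((round(s * r ** 0.5, 2), round(s / r ** 0.5, 2)))
    return anchors
```

In this sketch the windowed channels would be stacked and fed to the backbone, while the anchor list would replace the RPN's default anchor configuration.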
Related papers
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z) - Full-scale Deeply Supervised Attention Network for Segmenting COVID-19
Lesions [0.24366811507669117]
We introduce the Full-scale Deeply Supervised Attention Network (FuDSA-Net) for efficient segmentation of corona-infected lung areas in CT images.
The model considers activation responses from all levels of the encoding path, encompassing multi-scalar features acquired at different levels of the network.
Incorporation of the entire gamut of multi-scalar characteristics into the novel attention mechanism helps prioritize the selection of activation responses and locations containing useful information.
arXiv Detail & Related papers (2022-10-27T16:05:47Z) - OOOE: Only-One-Object-Exists Assumption to Find Very Small Objects in
Chest Radiographs [9.226276232505734]
Many foreign objects like tubes and various anatomical structures are small in comparison to the entire chest X-ray.
We present a simple yet effective 'Only-One-Object-Exists' (OOOE) assumption to improve the deep network's ability to localize small landmarks in chest radiographs.
arXiv Detail & Related papers (2022-10-13T07:37:33Z) - Preservation of High Frequency Content for Deep Learning-Based Medical
Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z) - Gastrointestinal Polyps and Tumors Detection Based on Multi-scale
Feature-fusion with WCE Sequences [0.0]
This paper proposes a Two-stage Multi-scale Feature-fusion learning network (TMFNet) to automatically detect small intestinal polyps and tumors.
We used 22,335 WCE images in the experiment, with a total of 123,092 lesion regions used to train the detection framework of this paper.
arXiv Detail & Related papers (2022-04-03T07:24:50Z) - An Efficient Anchor-free Universal Lesion Detection in CT-scans [19.165942326142538]
We propose a robust one-stage anchor-free lesion detection network that can perform well across varying lesion sizes.
We obtain comparable results to the state-of-the-art methods, achieving an overall sensitivity of 86.05% on the DeepLesion dataset.
arXiv Detail & Related papers (2022-03-30T06:01:04Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Multiscale Detection of Cancerous Tissue in High Resolution Slide Scans [0.0]
We present an algorithm for multi-scale tumor (chimeric cell) detection in high resolution slide scans.
Our approach modifies the effective receptive field at different layers in a CNN so that objects with a broad range of varying scales can be detected in a single forward pass.
arXiv Detail & Related papers (2020-10-01T18:56:46Z) - Domain Generalization for Medical Imaging Classification with
Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z) - Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images [152.34988415258988]
Automated detection of lung infections from computed tomography (CT) images offers a great potential to augment the traditional healthcare strategy for tackling COVID-19.
However, segmenting infected regions from CT slices faces several challenges, including high variation in infection characteristics and low intensity contrast between infections and normal tissues.
To address these challenges, a novel COVID-19 Deep Lung Infection Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices.
arXiv Detail & Related papers (2020-04-22T07:30:56Z) - MS-Net: Multi-Site Network for Improving Prostate Segmentation with
Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.