Structure Regularized Attentive Network for Automatic Femoral Head
Necrosis Diagnosis and Localization
- URL: http://arxiv.org/abs/2208.10695v1
- Date: Tue, 23 Aug 2022 02:31:38 GMT
- Title: Structure Regularized Attentive Network for Automatic Femoral Head
Necrosis Diagnosis and Localization
- Authors: Lingfeng Li, Huaiwei Cong, Gangming Zhao, Junran Peng, Zheng Zhang,
and Jinpeng Li
- Abstract summary: We propose the structure regularized attentive network (SRANet) to highlight necrotic regions during classification based on patch attention.
SRANet extracts features from image patches, obtains weights via an attention mechanism to aggregate the features, and constrains them with a structural regularizer that encodes prior knowledge to improve generalization.
Experimental results show that SRANet is superior to CNNs for AVNFH classification; moreover, it can localize lesions and provide more information to assist doctors in diagnosis.
- Score: 12.95252724282746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, several works have adopted the convolutional neural network
(CNN) to diagnose the avascular necrosis of the femoral head (AVNFH) based on
X-ray images or magnetic resonance imaging (MRI). However, due to tissue
overlap, X-ray images can hardly provide the fine-grained features needed for
early diagnosis. MRI, on the other hand, has a long imaging time and is more
expensive, making it impractical for mass screening. Computed tomography (CT) shows
layer-wise tissues, is faster to image, and is less costly than MRI. However,
to our knowledge, there is no work on CT-based automated diagnosis of AVNFH. In
this work, we collected and labeled a large-scale dataset for AVNFH ranking. In
addition, existing end-to-end CNNs yield only a classification result and
provide little further information to support doctors in diagnosis. To address
this issue, we propose the structure regularized attentive network (SRANet),
which is able to highlight the necrotic regions during classification based on
patch attention. SRANet extracts features from image patches, obtains weights
via an attention mechanism to aggregate the features, and constrains them with
a structural regularizer that encodes prior knowledge to improve generalization.
SRANet was evaluated on our AVNFH-CT dataset. Experimental results show that
SRANet is superior to CNNs for AVNFH classification; moreover, it can localize
lesions and provide additional information to assist doctors in diagnosis. Our
code is publicly available at https://github.com/tomas-lilingfeng/SRANet.
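
To make the patch-attention idea concrete, the following is a minimal PyTorch sketch rather than the authors' implementation (see the repository above for that); the patch size, feature dimension, four-class output, and the structural regularizer built on a hypothetical femoral-head prior mask are illustrative assumptions.

```python
# Minimal sketch of patch-attention aggregation with a structural prior
# (illustrative only; not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchAttentionNet(nn.Module):
    def __init__(self, patch=32, feat_dim=128, num_classes=4):  # class count illustrative
        super().__init__()
        self.patch = patch
        self.encoder = nn.Sequential(                  # shared per-patch encoder
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, feat_dim), nn.ReLU(),
        )
        self.att = nn.Linear(feat_dim, 1)              # one attention score per patch
        self.cls = nn.Linear(feat_dim, num_classes)

    def forward(self, x):                              # x: (B, 1, H, W)
        p = self.patch
        patches = F.unfold(x, kernel_size=p, stride=p)             # (B, p*p, N)
        B, _, N = patches.shape
        patches = patches.transpose(1, 2).reshape(B * N, 1, p, p)
        feats = self.encoder(patches).view(B, N, -1)               # (B, N, D)
        alpha = torch.softmax(self.att(feats).squeeze(-1), dim=1)  # patch weights (B, N)
        pooled = (alpha.unsqueeze(-1) * feats).sum(dim=1)          # attention-weighted feature
        return self.cls(pooled), alpha

def structural_regularizer(alpha, prior_mask, weight=0.1):
    """Penalize attention that falls outside an anatomical prior mask.

    prior_mask: (B, N), 1 for patches inside the femoral-head region
    (a stand-in for the paper's structural prior, not its exact form).
    """
    return weight * (alpha * (1.0 - prior_mask)).sum(dim=1).mean()

# Usage: total loss = cross-entropy + structural regularizer; the attention
# weights `alpha` double as a coarse lesion-localization map.
model = PatchAttentionNet()
x = torch.randn(2, 1, 256, 256)                        # dummy CT slices
y = torch.tensor([0, 2])                               # dummy grade labels
prior = torch.ones(2, (256 // 32) ** 2)                # dummy prior mask
logits, alpha = model(x)
loss = F.cross_entropy(logits, y) + structural_regularizer(alpha, prior)
```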
Related papers
- NEURO HAND: A weakly supervised Hierarchical Attention Network for
interpretable neuroimaging abnormality Detection [0.516706940452805]
We present a hierarchical attention network for abnormality detection using MRI scans obtained in a clinical hospital setting.
The proposed network is suitable for non-volumetric data (i.e. stacks of high-resolution MRI slices) and can be trained from binary examination-level labels.
arXiv Detail & Related papers (2023-11-06T09:55:19Z)
- Data-Efficient Vision Transformers for Multi-Label Disease
Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention, and, in contrast to CNNs, they encode no prior knowledge of local connectivity.
Our results show that while the performance between ViTs and CNNs is on par with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- Radiomics-Guided Global-Local Transformer for Weakly Supervised
Pathology Localization in Chest X-Rays [65.88435151891369]
Radiomics-Guided Transformer (RGT) fuses global image information with local knowledge-guided radiomics information.
RGT consists of an image Transformer branch, a radiomics Transformer branch, and fusion layers that aggregate image and radiomic information.
arXiv Detail & Related papers (2022-07-10T06:32:56Z)
- Automated SSIM Regression for Detection and Quantification of Motion
Artefacts in Brain MR Images [54.739076152240024]
Motion artefacts in magnetic resonance brain images are a crucial issue.
The assessment of MR image quality is fundamental before proceeding with the clinical diagnosis.
An automated image quality assessment based on structural similarity index (SSIM) regression is proposed (a minimal SSIM sketch appears at the end of this list).
arXiv Detail & Related papers (2022-06-14T10:16:54Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical
Image Classification [74.84221280249876]
Efficient analysis of large numbers of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using a conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Fusion of convolution neural network, support vector machine and Sobel
filter for accurate detection of COVID-19 patients using X-ray images [14.311213877254348]
The coronavirus disease (COVID-19) is currently the most common contagious disease and is prevalent all over the world.
It is essential to use an automatic diagnosis system along with clinical procedures for the rapid diagnosis of COVID-19 to prevent its spread.
In this study, a fusion of convolutional neural network (CNN), support vector machine (SVM), and Sobel filter is proposed to detect COVID-19 using X-ray images.
arXiv Detail & Related papers (2021-02-13T08:08:36Z)
- A Deep Learning Study on Osteosarcoma Detection from Histological Images [6.341765152919201]
The most common type of primary malignant bone tumor is osteosarcoma.
CNNs can significantly decrease surgeons' workload and improve the prognosis of patient conditions.
CNNs need to be trained on a large amount of data in order to achieve trustworthy performance.
arXiv Detail & Related papers (2020-11-02T18:16:17Z)
- Experimenting with Convolutional Neural Network Architectures for the
automatic characterization of Solitary Pulmonary Nodules' malignancy rating [0.0]
Early and automatic diagnosis of Solitary Pulmonary Nodules (SPN) in Computed Tomography (CT) chest scans can enable early treatment and free doctors from time-consuming procedures.
In this study, we consider the problem of diagnostic classification between benign and malignant lung nodules in CT images derived from a PET/CT scanner.
More specifically, we develop experimental Convolutional Neural Network (CNN) architectures and, by tuning their parameters, investigate their behavior and identify the optimal setup for accurate classification.
arXiv Detail & Related papers (2020-03-15T11:46:00Z)
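
For the SSIM-regression entry above, the following minimal sketch only shows how an SSIM score between a motion-free reference and a corrupted slice can serve as the regression target; the Gaussian-noise corruption and all names are illustrative assumptions, not the cited paper's pipeline.

```python
# Minimal sketch: an SSIM score as a regression target for image quality
# (illustrative only; the corruption model and names are assumptions).
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
reference = rng.random((128, 128)).astype(np.float32)  # stand-in for a motion-free slice
corrupted = reference + 0.1 * rng.standard_normal((128, 128)).astype(np.float32)

# SSIM lies in [-1, 1]; a network would learn to regress this value from the
# corrupted image alone, without access to the reference.
target = ssim(reference, corrupted,
              data_range=float(corrupted.max() - corrupted.min()))
print(f"SSIM regression target: {target:.3f}")
```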