Towards Interpretable Attention Networks for Cervical Cancer Analysis
- URL: http://arxiv.org/abs/2106.00557v1
- Date: Thu, 27 May 2021 13:28:24 GMT
- Title: Towards Interpretable Attention Networks for Cervical Cancer Analysis
- Authors: Ruiqi Wang, Mohammad Ali Armin, Simon Denman, Lars Petersson, David
Ahmedt-Aristizabal
- Abstract summary: We evaluate various state-of-the-art deep learning models for the classification of images of multiple cervical cells.
We show the effectiveness of the residual channel attention model for extracting important features from a group of cells.
It also provides interpretable models to address the classification of cervical cells.
- Score: 24.916577293892182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in deep learning have enabled the development of automated
frameworks for analysing medical images and signals, including analysis of
cervical cancer. Many previous works focus on the analysis of isolated cervical
cells, or do not offer sufficient methods to explain and understand how the
proposed models reach their classification decisions on multi-cell images.
Here, we evaluate various state-of-the-art deep learning models and
attention-based frameworks for the classification of images of multiple
cervical cells. As we aim to provide interpretable deep learning models to
address this task, we also compare their explainability through the
visualization of their gradients. We demonstrate the importance of using images
that contain multiple cells over using isolated single-cell images. We show the
effectiveness of the residual channel attention model for extracting important
features from a group of cells, and demonstrate this model's efficiency for
this classification task. This work highlights the benefits of channel
attention mechanisms in analyzing multiple-cell images for potential relations
and distributions within a group of cells. It also provides interpretable
models to address the classification of cervical cells.
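The residual channel attention idea the abstract highlights can be illustrated with a minimal squeeze-and-excitation-style sketch in NumPy. This is not the paper's exact architecture: the layer sizes, reduction ratio `r`, and random weights are illustrative assumptions, and a real model would learn `w1` and `w2` by backpropagation.

```python
import numpy as np

def residual_channel_attention(x, w1, w2):
    """Re-weight feature-map channels by a learned importance gate,
    with a residual connection. x: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    squeeze = x.mean(axis=(1, 2))                     # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)            # bottleneck + ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid gate per channel, in (0, 1)
    return x + x * scale[:, None, None]               # residual channel re-weighting

# toy feature map and randomly initialized weights (illustrative only)
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
w1 = 0.1 * rng.standard_normal((C // r, C))
w2 = 0.1 * rng.standard_normal((C, C // r))
y = residual_channel_attention(x, w1, w2)
```

Because the gate lies in (0, 1), each output channel is the input scaled by a factor between 1 and 2, so the residual path keeps gradients flowing; the per-channel `scale` vector is also what one would inspect when asking which channels the model deems important.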
Related papers
- Explanations of Classifiers Enhance Medical Image Segmentation via End-to-end Pre-training [37.11542605885003]
Medical image segmentation aims to identify and locate abnormal structures in medical images, such as chest radiographs, using deep neural networks.
Our work collects explanations from well-trained classifiers to generate pseudo labels of segmentation tasks.
We then use the Integrated Gradients (IG) method to distill and boost the explanations obtained from the classifiers, generating massive diagnosis-oriented localization labels (DoLL).
These DoLL-annotated images are used for pre-training the model before fine-tuning it for downstream segmentation tasks, including COVID-19 infectious areas, lungs, heart, and clavicles.
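The Integrated Gradients step above can be sketched on a toy differentiable model. The quadratic model, its analytic `grad_fn`, and the step count are illustrative assumptions; in the paper IG is applied to a trained image classifier, where the gradient comes from autodiff.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Riemann-sum (midpoint rule) approximation of Integrated Gradients:
    attribution = (x - baseline) * average gradient along the straight path."""
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean(
        [grad_fn(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad

# toy model: f(x) = sum(w * x**2), with analytic gradient 2 * w * x
w = np.array([1.0, 2.0, 3.0])
f = lambda x: np.sum(w * x ** 2)
grad_f = lambda x: 2.0 * w * x

x = np.array([1.0, -1.0, 0.5])
baseline = np.zeros(3)
attr = integrated_gradients(grad_f, x, baseline)
```

The useful property for pseudo-labeling is completeness: the attributions sum to `f(x) - f(baseline)`, so thresholding `attr` gives a localization map that accounts for the classifier's whole decision.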
arXiv Detail & Related papers (2024-01-16T16:18:42Z)
- Single-Cell Deep Clustering Method Assisted by Exogenous Gene Information: A Novel Approach to Identifying Cell Types [50.55583697209676]
We develop an attention-enhanced graph autoencoder, which is designed to efficiently capture the topological features between cells.
During the clustering process, we integrated both sets of information and reconstructed the features of both cells and genes to generate a discriminative representation.
This research offers enhanced insights into the characteristics and distribution of cells, thereby laying the groundwork for early diagnosis and treatment of diseases.
arXiv Detail & Related papers (2023-11-28T09:14:55Z)
- Tertiary Lymphoid Structures Generation through Graph-based Diffusion [54.37503714313661]
In this work, we leverage state-of-the-art graph-based diffusion models to generate biologically meaningful cell-graphs.
We show that the adopted graph diffusion model is able to accurately learn the distribution of cells in terms of their tertiary lymphoid structures (TLS) content.
arXiv Detail & Related papers (2023-10-10T14:37:17Z)
- Causality-Driven One-Shot Learning for Prostate Cancer Grading from MRI [1.049712834719005]
We present a novel method to automatically classify medical images that learns and leverages weak causal signals in the image.
Our framework consists of a convolutional neural network backbone and a causality-extractor module.
Our findings show that causal relationships among features play a crucial role in enhancing the model's ability to discern relevant information.
arXiv Detail & Related papers (2023-09-19T16:08:33Z)
- Exploiting Causality Signals in Medical Images: A Pilot Study with Empirical Results [1.2400966570867322]
We present a novel technique to discover and exploit weak causal signals directly from images via neural networks for classification purposes.
This way, we model how the presence of a feature in one part of the image affects the appearance of another feature in a different part of the image.
Our method consists of a convolutional neural network backbone and a causality-factors extractor module, which computes weights to enhance each feature map according to its causal influence in the scene.
arXiv Detail & Related papers (2023-09-19T08:00:26Z)
- Attention De-sparsification Matters: Inducing Diversity in Digital Pathology Representation Learning [31.192429592497692]
DiRL is a Diversity-inducing Representation Learning technique for histopathology imaging.
We propose a prior-guided dense pretext task for SSL, designed to match the multiple corresponding representations between the views.
arXiv Detail & Related papers (2023-09-12T17:59:10Z)
- Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images [52.527733226555206]
We investigate the use of four attribution methods to explain multiple instance learning models.
We study two datasets of acute myeloid leukemia with over 100,000 single-cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard.
arXiv Detail & Related papers (2023-03-15T14:00:11Z)
- CCRL: Contrastive Cell Representation Learning [0.0]
We propose the Contrastive Cell Representation Learning (CCRL) model for cell identification in H&E slides.
We show that this model can outperform all currently available cell clustering models by a large margin across two datasets from different tissue types.
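Contrastive objectives of the kind behind cell representation learning are typically variants of the InfoNCE loss; a minimal NumPy sketch follows. The temperature, batch size, and embedding dimension are illustrative assumptions, not CCRL's actual settings.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE loss for paired views. z1, z2: L2-normalized (N, D) embeddings;
    row i of z2 is the positive for row i of z1, all other rows are negatives."""
    logits = (z1 @ z2.T) / tau                          # (N, N) scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                  # cross-entropy: match i with i

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)           # unit-normalize rows

loss_aligned = info_nce(z, z)                       # views identical: easy positives
loss_shuffled = info_nce(z, np.roll(z, 1, axis=0))  # positives mismatched
```

A lower loss when the two views agree than when the pairing is scrambled is precisely the pressure that pulls two augmentations of the same cell together in embedding space.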
arXiv Detail & Related papers (2022-08-12T18:12:03Z)
- Learning multi-scale functional representations of proteins from single-cell microscopy data [77.34726150561087]
We show that simple convolutional networks trained on localization classification can learn protein representations that encapsulate diverse functional information.
We also propose a robust evaluation strategy to assess quality of protein representations across different scales of biological function.
arXiv Detail & Related papers (2022-05-24T00:00:07Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Attention Model Enhanced Network for Classification of Breast Cancer Image [54.83246945407568]
AMEN is formulated in a multi-branch fashion with a pixel-wise attention model and a classification submodule.
To focus more on subtle detail information, the sample image is enhanced by the pixel-wise attention map generated from the former branch.
Experiments conducted on three benchmark datasets demonstrate the superiority of the proposed method under various scenarios.
arXiv Detail & Related papers (2020-10-07T08:44:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.