Attention Model Enhanced Network for Classification of Breast Cancer
Image
- URL: http://arxiv.org/abs/2010.03271v1
- Date: Wed, 7 Oct 2020 08:44:21 GMT
- Title: Attention Model Enhanced Network for Classification of Breast Cancer
Image
- Authors: Xiao Kang, Xingbo Liu, Xiushan Nie, Xiaoming Xi, Yilong Yin
- Abstract summary: AMEN is formulated in a multi-branch fashion with a pixel-wise attention model and a classification submodule.
To focus more on subtle detail information, each sample image is enhanced by the pixel-wise attention map generated by the former branch.
Experiments conducted on three benchmark datasets demonstrate the superiority of the proposed method under various scenarios.
- Score: 54.83246945407568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Breast cancer classification remains a challenging task due to inter-class
ambiguity and intra-class variability. Existing deep learning-based methods try
to confront this challenge by utilizing complex nonlinear projections. However,
these methods typically extract global features from entire images, neglecting
the fact that subtle detail information can be crucial for extracting
discriminative features. In this study, we propose a novel method named
Attention Model Enhanced Network (AMEN), which is formulated in a multi-branch
fashion with a pixel-wise attention model and a classification submodule.
Specifically, the feature learning part of AMEN generates a pixel-wise
attention map, while the classification submodule is used to classify the
samples. To focus more on subtle detail information, each sample image is
enhanced by the pixel-wise attention map generated by the former branch.
Furthermore, a boosting strategy is adopted to fuse the classification results
from different branches for better performance. Experiments conducted on three
benchmark datasets demonstrate the superiority of the proposed method under
various scenarios.
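The abstract describes three mechanisms: a pixel-wise attention map produced by a feature-learning part, enhancement of the input image by that map before the next branch, and boosting-style fusion of branch predictions. A minimal NumPy sketch of that flow is shown below; all function names, the toy attention model (a per-pixel linear projection plus sigmoid), and the fusion weights are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_map(image, w):
    # Stand-in for the feature-learning part: a per-pixel linear projection
    # over channels followed by a sigmoid, giving one value in [0, 1] per pixel.
    return sigmoid(image @ w)  # (H, W, C) @ (C,) -> (H, W)

def enhance(image, attn):
    # Pixel-wise re-weighting: high-attention (subtle detail) regions are
    # kept, low-attention regions are suppressed.
    return image * attn[..., None]

rng = np.random.default_rng(0)
image = rng.random((8, 8, 3))           # toy "sample image"
branch_scores, x = [], image
for _ in range(3):                      # multi-branch fashion: three branches
    w = rng.normal(size=3)              # toy parameters of this branch
    attn = attention_map(x, w)
    branch_scores.append(x.mean())      # stand-in for a branch's prediction
    x = enhance(image, attn)            # next branch sees the enhanced image

# Boosting-style fusion: a weighted combination of branch predictions.
alphas = np.array([0.5, 0.3, 0.2])      # assumed fusion weights
fused = float(np.dot(alphas, branch_scores))
```

In a real network the attention map would come from a convolutional feature extractor and the branch predictions from classification heads; the sketch only shows how the map gates the image between branches and how the per-branch outputs are fused.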
Related papers
- Multilevel Saliency-Guided Self-Supervised Learning for Image Anomaly
Detection [15.212031255539022]
Anomaly detection (AD) is a fundamental task in computer vision.
We propose CutSwap, which leverages saliency guidance to incorporate semantic cues for augmentation.
CutSwap achieves state-of-the-art AD performance on two mainstream AD benchmark datasets.
arXiv Detail & Related papers (2023-11-30T08:03:53Z)
- Diffusion Models Beat GANs on Image Classification [37.70821298392606]
Diffusion models have risen to prominence as a state-of-the-art method for image generation, denoising, inpainting, super-resolution, manipulation, etc.
We present our findings that these embeddings are useful beyond the noise prediction task, as they contain discriminative information and can also be leveraged for classification.
We find that with careful feature selection and pooling, diffusion models outperform comparable generative-discriminative methods for classification tasks.
arXiv Detail & Related papers (2023-07-17T17:59:40Z)
- Semantic Embedded Deep Neural Network: A Generic Approach to Boost
Multi-Label Image Classification Performance [10.257208600853199]
We introduce a generic semantic-embedding deep neural network that applies spatially aware semantic features.
We observed an average relative improvement of 15.27% in AUC score across all labels compared to the baseline approach.
arXiv Detail & Related papers (2023-05-09T07:44:52Z)
- Joint localization and classification of breast tumors on ultrasound
images using a novel auxiliary attention-based framework [7.6620616780444974]
We propose a novel joint localization and classification model based on the attention mechanism and disentangled semi-supervised learning strategy.
The proposed modularized framework allows flexible network replacement to be generalized for various applications.
arXiv Detail & Related papers (2022-10-11T20:14:13Z)
- Fine-Grained Visual Classification using Self Assessment Classifier [12.596520707449027]
Extracting discriminative features plays a crucial role in the fine-grained visual classification task.
In this paper, we introduce a Self Assessment Classifier, which simultaneously leverages the representation of the image and the top-k prediction classes.
We show that our method achieves new state-of-the-art results on CUB200-2011, Stanford Dog, and FGVC Aircraft datasets.
arXiv Detail & Related papers (2022-05-21T07:41:27Z)
- LEAD: Self-Supervised Landmark Estimation by Aligning Distributions of
Feature Similarity [49.84167231111667]
Existing works in self-supervised landmark detection are based on learning dense (pixel-level) feature representations from an image.
We introduce an approach to enhance the learning of dense equivariant representations in a self-supervised fashion.
We show that having such a prior in the feature extractor helps in landmark detection, even with a drastically limited number of annotations.
arXiv Detail & Related papers (2022-04-06T17:48:18Z)
- Intelligent Masking: Deep Q-Learning for Context Encoding in Medical
Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z)
- Learning Debiased and Disentangled Representations for Semantic
Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Enhancing Fine-Grained Classification for Low Resolution Images [97.82441158440527]
Low resolution images suffer from the inherent challenge of limited information content and the absence of fine details useful for sub-category classification.
This research proposes a novel attribute-assisted loss, which utilizes ancillary information to learn discriminative features for classification.
The proposed loss function enables a model to learn class-specific discriminative features, while incorporating attribute-level separability.
arXiv Detail & Related papers (2021-05-01T13:19:02Z)
- Saliency-driven Class Impressions for Feature Visualization of Deep
Neural Networks [55.11806035788036]
It is advantageous to visualize the features considered to be essential for classification.
Existing visualization methods develop high confidence images consisting of both background and foreground features.
In this work, we propose a saliency-driven approach to visualize discriminative features that are considered most important for a given task.
arXiv Detail & Related papers (2020-07-31T06:11:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.