Multi-View Hypercomplex Learning for Breast Cancer Screening
- URL: http://arxiv.org/abs/2204.05798v3
- Date: Mon, 4 Mar 2024 16:43:14 GMT
- Title: Multi-View Hypercomplex Learning for Breast Cancer Screening
- Authors: Eleonora Lopez, Eleonora Grassucci, Martina Valleriani, Danilo
Comminiello
- Abstract summary: Traditionally, deep learning methods for breast cancer classification perform a single-view analysis.
However, radiologists simultaneously analyze all four views that compose a mammography exam.
We propose a methodological approach for multi-view breast cancer classification based on parameterized hypercomplex neural networks.
- Score: 7.147856898682969
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditionally, deep learning methods for breast cancer classification perform
a single-view analysis. However, radiologists simultaneously analyze all four
views that compose a mammography exam, owing to the correlations contained in
mammography views, which present crucial information for identifying tumors. In
light of this, some studies have started to propose multi-view methods.
Nevertheless, in such existing architectures, mammogram views are processed as
independent images by separate convolutional branches, thus losing correlations
among them. To overcome such limitations, in this paper, we propose a
methodological approach for multi-view breast cancer classification based on
parameterized hypercomplex neural networks. Thanks to hypercomplex algebra
properties, our networks are able to model, and thus leverage, existing
correlations between the different views that comprise a mammogram, thereby
mimicking the reading process performed by clinicians. This happens because
hypercomplex networks capture both global properties, as standard neural
models do, and local relations, i.e., inter-view correlations, which
real-valued networks fail to model. We define architectures designed to
process two-view exams, namely PHResNets, and four-view exams, i.e., PHYSEnet
and PHYBOnet. Through an extensive experimental evaluation conducted with
publicly available datasets, we demonstrate that our proposed models clearly
outperform real-valued counterparts and state-of-the-art methods, proving that
breast cancer classification benefits from the proposed multi-view
architectures. We also assess the method's generalizability beyond mammogram
analysis by considering different benchmarks, as well as a finer-scale task
such as segmentation. Full code and pretrained models for complete
reproducibility of our experiments are freely available at
https://github.com/ispamm/PHBreast.
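To make the core idea concrete, below is a minimal, hedged sketch of a parameterized hypercomplex convolution (PHC) layer in PyTorch, in the spirit of the formulation the abstract describes: the weight is assembled as a sum of Kronecker products between small learned "algebra" matrices and per-component filter blocks, so the n components (e.g., the four mammogram views) share parameters instead of being processed by independent branches. Class and variable names here are illustrative assumptions, not the exact API of the PHBreast repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PHConv2d(nn.Module):
    """2D convolution whose weight is W = sum_c A_c (Kronecker) F_c, so the n
    hypercomplex components share parameters through a learned n x n algebra
    rather than through separate per-view convolutional branches."""

    def __init__(self, in_channels, out_channels, kernel_size, n=4,
                 stride=1, padding=0):
        super().__init__()
        assert in_channels % n == 0 and out_channels % n == 0
        self.n, self.stride, self.padding = n, stride, padding
        # Learnable algebra matrices A_c, one n x n matrix per component.
        self.A = nn.Parameter(torch.randn(n, n, n) / n)
        # Filter blocks F_c, each covering 1/n of the input/output channels
        # (illustrative initialization, not the repository's scheme).
        self.filters = nn.Parameter(
            0.02 * torch.randn(n, out_channels // n, in_channels // n,
                               kernel_size, kernel_size))

    def forward(self, x):
        # Assemble the full weight as a sum of Kronecker products over the
        # channel dimensions: W[a*O+o, b*I+i] = sum_c A[c, a, b] * F_c[o, i].
        n, O, I, kh, kw = self.filters.shape
        weight = torch.einsum('cab,coihw->aobihw', self.A, self.filters)
        weight = weight.reshape(n * O, n * I, kh, kw)
        return F.conv2d(x, weight, stride=self.stride, padding=self.padding)


# Example: a four-view exam stacked along the channel axis (one channel per view).
x = torch.randn(2, 4, 128, 128)                      # batch of 2, n = 4 views
layer = PHConv2d(4, 64, kernel_size=3, n=4, padding=1)
print(layer(x).shape)                                # torch.Size([2, 64, 128, 128])
```

With n = 4, such a layer uses roughly 1/4 of the parameters of a real-valued convolution of the same size while mixing information across the view-aligned channel blocks, which is the mechanism the abstract credits for capturing inter-view correlations.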
Related papers
- MV-Swin-T: Mammogram Classification with Multi-view Swin Transformer [0.257133335028485]
We propose an innovative multi-view network based on transformers to address challenges in mammographic image classification.
Our approach introduces a novel shifted window-based dynamic attention block, facilitating the effective integration of multi-view information.
arXiv Detail & Related papers (2024-02-26T04:41:04Z) - Enhancing Representation in Radiography-Reports Foundation Model: A Granular Alignment Algorithm Using Masked Contrastive Learning [26.425784890859738]
MaCo is a masked contrastive chest X-ray foundation model.
It simultaneously achieves fine-grained image understanding and zero-shot learning for a variety of medical imaging tasks.
It is shown to be superior over 10 state-of-the-art approaches across tasks such as classification, segmentation, detection, and phrase grounding.
arXiv Detail & Related papers (2023-09-12T01:29:37Z) - Classification of lung cancer subtypes on CT images with synthetic
pathological priors [41.75054301525535]
Cross-scale associations exist in the image patterns between the same case's CT images and its pathological images.
We propose self-generating hybrid feature network (SGHF-Net) for accurately classifying lung cancer subtypes on CT images.
arXiv Detail & Related papers (2023-08-09T02:04:05Z) - Rethinking Semi-Supervised Medical Image Segmentation: A
Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z) - Incremental Cross-view Mutual Distillation for Self-supervised Medical
CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis framework to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z) - Act Like a Radiologist: Towards Reliable Multi-view Correspondence
Reasoning for Mammogram Mass Detection [49.14070210387509]
We propose an Anatomy-aware Graph convolutional Network (AGN) for mammogram mass detection.
AGN is tailored for mammogram mass detection and endows existing detection methods with multi-view reasoning ability.
Experiments on two standard benchmarks reveal that AGN significantly exceeds the state-of-the-art performance.
arXiv Detail & Related papers (2021-05-21T06:48:34Z) - Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Automatic Breast Lesion Classification by Joint Neural Analysis of
Mammography and Ultrasound [1.9814912982226993]
We propose a deep-learning based method for classifying breast cancer lesions from their respective mammography and ultrasound images.
The proposed approach is based on a GoogLeNet architecture, fine-tuned for our data in two training steps.
It achieves an AUC of 0.94, outperforming state-of-the-art models trained over a single modality.
arXiv Detail & Related papers (2020-09-23T09:08:24Z) - Dual Convolutional Neural Networks for Breast Mass Segmentation and
Diagnosis in Mammography [18.979126709943085]
We introduce a novel deep learning framework for mammogram image processing, which computes mass segmentation and simultaneously predicts diagnosis results.
Our method is constructed in a dual-path architecture that solves the mapping in a dual-problem manner.
Experimental results show that DualCoreNet achieves the best mammography segmentation and classification simultaneously, outperforming recent state-of-the-art models.
arXiv Detail & Related papers (2020-08-07T02:23:36Z) - Weakly supervised multiple instance learning histopathological tumor
segmentation [51.085268272912415]
We propose a weakly supervised framework for whole slide imaging segmentation.
We exploit a multiple instance learning scheme for training models.
The proposed framework has been evaluated on multi-locations and multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
arXiv Detail & Related papers (2020-04-10T13:12:47Z)