HiFuse: Hierarchical Multi-Scale Feature Fusion Network for Medical
Image Classification
- URL: http://arxiv.org/abs/2209.10218v1
- Date: Wed, 21 Sep 2022 09:30:20 GMT
- Title: HiFuse: Hierarchical Multi-Scale Feature Fusion Network for Medical
Image Classification
- Authors: Xiangzuo Huo, Gang Sun, Shengwei Tian, Yan Wang, Long Yu, Jun Long,
Wendong Zhang, Aolun Li
- Abstract summary: This paper proposes a three-branch hierarchical multi-scale feature fusion network, termed HiFuse, for medical image classification.
The accuracy of the proposed model is 7.6% higher than the baseline on the ISIC2018 dataset, 21.5% higher on the COVID-19 dataset, and 10.4% higher on the Kvasir dataset.
- Score: 16.455887856811465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image classification has developed rapidly under the impetus of the
convolutional neural network (CNN). Due to the fixed size of the receptive
field of the convolution kernel, it is difficult to capture the global features
of medical images. Although the self-attention-based Transformer can model
long-range dependencies, it has high computational complexity and lacks local
inductive bias. Much research has demonstrated that both global and local features
are crucial for image classification. However, medical images contain many noisy
and scattered features, together with intra-class variation and inter-class similarity.
This paper proposes a new three-branch hierarchical multi-scale feature fusion
network, termed HiFuse, for medical image classification. It fuses the
advantages of the Transformer and the CNN across multi-scale hierarchies
without compromising the modeling of either branch, thereby improving
classification accuracy on a variety of medical images. A parallel hierarchy of
local and global feature blocks is designed to efficiently extract local
features and global representations at various semantic scales, with the
flexibility to model at different scales and linear computational complexity
with respect to image size. Moreover, an adaptive hierarchical feature fusion
block (HFF block) is designed to comprehensively utilize the features obtained
at different hierarchical levels. The HFF block contains spatial attention,
channel attention, a residual inverted MLP, and a shortcut to adaptively fuse
semantic information across the feature scales of each branch. The accuracy of
our proposed model is 7.6% higher than the baseline on the ISIC2018 dataset,
21.5% higher on the COVID-19 dataset, and 10.4% higher on the Kvasir dataset.
Compared with other advanced models, HiFuse performs best. Our code is
open-source and available at https://github.com/huoxiangzuo/HiFuse.
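The HFF block described above lends itself to a compact sketch. The following is a minimal, illustrative PyTorch implementation of an HFF-style fusion block, not the authors' released code: it fuses a local-branch feature map, a global-branch feature map, and the fused output of the previous hierarchy level using spatial attention, channel attention, an inverted MLP, and a residual shortcut. Tensor shapes, the reduction ratio, activation choices, and the exact fusion order are assumptions made for illustration; the actual implementation is available in the repository linked above.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel re-weighting."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class SpatialAttention(nn.Module):
    """Re-weight spatial positions from channel-wise mean and max maps."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn


class InvertedMLP(nn.Module):
    """1x1 expand -> depthwise 3x3 -> 1x1 project (inverted bottleneck MLP)."""
    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden),
            nn.GELU(),
            nn.Conv2d(hidden, channels, kernel_size=1),
        )

    def forward(self, x):
        return self.net(x)


class HFFBlock(nn.Module):
    """Adaptively fuse local, global, and previous-level features at one stage."""
    def __init__(self, channels: int):
        super().__init__()
        self.spatial_attn = SpatialAttention()           # applied to the local branch
        self.channel_attn = ChannelAttention(channels)   # applied to the global branch
        self.proj = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.mlp = InvertedMLP(channels)

    def forward(self, local_feat, global_feat, prev_fused):
        local_feat = self.spatial_attn(local_feat)       # emphasise fine spatial detail
        global_feat = self.channel_attn(global_feat)     # emphasise semantic channels
        fused = self.proj(torch.cat([local_feat, global_feat, prev_fused], dim=1))
        return fused + self.mlp(fused)                   # residual shortcut around the inverted MLP


if __name__ == "__main__":
    # Toy usage: fuse 64-channel feature maps from a 56x56 stage.
    hff = HFFBlock(64)
    x = torch.randn(1, 64, 56, 56)
    print(hff(x, x.clone(), x.clone()).shape)            # torch.Size([1, 64, 56, 56])
```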
Related papers
- BEFUnet: A Hybrid CNN-Transformer Architecture for Precise Medical Image
Segmentation [0.0]
This paper proposes an innovative U-shaped network called BEFUnet, which enhances the fusion of body and edge information for precise medical image segmentation.
BEFUnet comprises three main modules: a novel Local Cross-Attention Feature (LCAF) fusion module, a novel Double-Level Fusion (DLF) module, and a dual-branch encoder.
The LCAF module efficiently fuses edge and body features by selectively performing local cross-attention on features that are spatially close between the two modalities.
arXiv Detail & Related papers (2024-02-13T21:03:36Z)
- PMFSNet: Polarized Multi-scale Feature Self-attention Network For Lightweight Medical Image Segmentation [6.134314911212846]
Current state-of-the-art medical image segmentation methods prioritize accuracy but often at the expense of increased computational demands and larger model sizes.
We propose PMFSNet, a novel medical image segmentation model that balances global and local feature processing while avoiding computational redundancy.
It incorporates a plug-and-play PMFS block, a multi-scale feature enhancement module based on attention mechanisms, to capture long-term dependencies.
arXiv Detail & Related papers (2024-01-15T10:26:47Z)
- MultiFusionNet: Multilayer Multimodal Fusion of Deep Neural Networks for Chest X-Ray Image Classification [16.479941416339265]
Automated systems utilizing convolutional neural networks (CNNs) have shown promise in improving the accuracy and efficiency of chest X-ray image classification.
We propose a novel deep learning-based multilayer multimodal fusion model that emphasizes extracting features from different layers and fusing them.
The proposed model achieves significantly higher accuracies of 97.21% and 99.60% for three-class and two-class classification, respectively.
arXiv Detail & Related papers (2024-01-01T11:50:01Z)
- Mutual-Guided Dynamic Network for Image Fusion [51.615598671899335]
We propose a novel mutual-guided dynamic network (MGDN) for image fusion, which allows for effective information utilization across different locations and inputs.
Experimental results on five benchmark datasets demonstrate that our proposed method outperforms existing methods on four image fusion tasks.
arXiv Detail & Related papers (2023-08-24T03:50:37Z)
- Deep Neural Networks Fused with Textures for Image Classification [20.58839604333332]
Fine-grained image classification (FGIC) is a challenging task in computer vision.
We propose a fusion approach that addresses FGIC by combining global texture with local patch-based information.
Our method attains better classification accuracy than existing methods by notable margins.
arXiv Detail & Related papers (2023-08-03T15:21:08Z)
- M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^{2}$SNet) to perform diverse segmentation tasks on medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z)
- How GNNs Facilitate CNNs in Mining Geometric Information from Large-Scale Medical Images [2.2699159408903484]
We propose a fusion framework for enhancing the global image-level representation captured by convolutional neural networks (CNNs).
We evaluate our fusion strategies on histology datasets curated from large patient cohorts of colorectal and gastric cancers.
arXiv Detail & Related papers (2022-06-15T15:27:48Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Global Filter Networks for Image Classification [90.81352483076323]
We present a conceptually simple yet computationally efficient architecture that learns long-term spatial dependencies in the frequency domain with log-linear complexity.
Our results demonstrate that GFNet can be a very competitive alternative to transformer-style models and CNNs in efficiency, generalization ability and robustness.
arXiv Detail & Related papers (2021-07-01T17:58:16Z)
- CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image Segmentation [95.51455777713092]
Convolutional neural networks (CNNs) have been the de facto standard for 3D medical image segmentation.
We propose a novel framework that efficiently bridges a Convolutional Neural Network and a Transformer (CoTr) for accurate 3D medical image segmentation.
arXiv Detail & Related papers (2021-03-04T13:34:22Z)
- Sequential Hierarchical Learning with Distribution Transformation for Image Super-Resolution [83.70890515772456]
We build a sequential hierarchical learning super-resolution network (SHSR) for effective image SR.
We consider the inter-scale correlations of features, and devise a sequential multi-scale block (SMB) to progressively explore the hierarchical information.
Experimental results show that SHSR achieves superior quantitative performance and visual quality compared with state-of-the-art methods.
arXiv Detail & Related papers (2020-07-19T01:35:53Z)