MpoxMamba: A Grouped Mamba-based Lightweight Hybrid Network for Mpox Detection
- URL: http://arxiv.org/abs/2409.04218v2
- Date: Sun, 15 Sep 2024 17:52:07 GMT
- Title: MpoxMamba: A Grouped Mamba-based Lightweight Hybrid Network for Mpox Detection
- Authors: Yubiao Yue, Jun Xue, Haihuang Liang, Zhenzhang Li, Yufeng Wang,
- Abstract summary: The mpox virus continues to spread worldwide and has been declared a public health emergency of international concern by the World Health Organization.
Lightweight deep learning model-based detection systems are crucial for alleviating mpox outbreaks, since they are suitable for widespread deployment.
We propose a lightweight hybrid architecture called MpoxMamba for efficient mpox detection.
- Score: 1.9861301166025644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the lack of effective mpox detection tools, the mpox virus continues to spread worldwide and has once again been declared a public health emergency of international concern by the World Health Organization. Lightweight deep learning model-based detection systems are crucial for alleviating mpox outbreaks, since they are suitable for widespread deployment, especially in resource-limited scenarios. However, successful application hinges on ensuring that the model can effectively capture both local features and long-range dependencies in mpox lesions while remaining lightweight. Inspired by Mamba's success in modeling long-range dependencies and its linear complexity, we propose a lightweight hybrid architecture called MpoxMamba for efficient mpox detection. MpoxMamba uses depth-wise separable convolutions to extract local feature representations from mpox skin lesions and greatly enhances the model's ability to capture global contextual information through grouped Mamba modules. Notably, MpoxMamba's parameter count and FLOPs are only 0.77M and 0.53G, respectively. Experimental results on two widely recognized benchmark datasets demonstrate that MpoxMamba outperforms state-of-the-art lightweight models and existing mpox detection methods. Importantly, we developed a web-based online application to provide free mpox detection (http://5227i971s5.goho.co:30290). The source code of MpoxMamba is available at https://github.com/YubiaoYue/MpoxMamba.
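The abstract describes the architecture only at a high level: depth-wise separable convolutions for local lesion features and grouped Mamba modules for global context, at 0.77M parameters and 0.53G FLOPs. The authors' actual implementation lives in the linked repository; the PyTorch sketch below only illustrates the local/global hybrid pattern. The class names (DepthwiseSeparableConv, GroupedGlobalMixer, HybridBlock) and the per-group mixer are hypothetical stand-ins kept dependency-free; in the real model, each channel group would be processed by a Mamba (selective state space) block rather than the mean-context mixer used here.

```python
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Local feature extractor: a depthwise 3x3 conv followed by a pointwise 1x1 conv."""

    def __init__(self, channels: int):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels, bias=False)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class GroupedGlobalMixer(nn.Module):
    """Stand-in for a grouped Mamba module: channels are split into groups and each
    group gets its own mixer over the flattened spatial tokens. Here the mixer just
    injects the group's mean token (a crude O(L) global context) before a per-group
    projection; MpoxMamba would use a Mamba selective-scan block per group instead."""

    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0, "channels must be divisible by groups"
        self.groups = groups
        group_dim = channels // groups
        self.mixers = nn.ModuleList(
            nn.Sequential(nn.LayerNorm(group_dim), nn.Linear(group_dim, group_dim))
            for _ in range(groups)
        )

    def forward(self, x):
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)           # (B, L, C) with L = H*W tokens
        chunks = seq.chunk(self.groups, dim=-1)      # split the channel dimension into groups
        mixed = []
        for mixer, chunk in zip(self.mixers, chunks):
            ctx = chunk.mean(dim=1, keepdim=True)    # per-group global context, cost O(L)
            mixed.append(mixer(chunk + ctx))         # broadcast context to every token
        out = torch.cat(mixed, dim=-1).transpose(1, 2).reshape(b, c, h, w)
        return x + out                               # residual connection


class HybridBlock(nn.Module):
    """Local (depthwise separable conv) + global (grouped mixer) block, in the spirit
    of the lightweight hybrid design described in the abstract."""

    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        self.local = DepthwiseSeparableConv(channels)
        self.global_ctx = GroupedGlobalMixer(channels, groups)

    def forward(self, x):
        return self.global_ctx(self.local(x))


if __name__ == "__main__":
    block = HybridBlock(channels=32, groups=4)
    feats = torch.randn(1, 32, 56, 56)               # dummy lesion feature map
    print(block(feats).shape)                        # torch.Size([1, 32, 56, 56])
```

Grouping is what keeps the global branch cheap in this sketch: each mixer only sees channels/groups features, so its cost scales with the group width rather than the full channel count, which is consistent with the paper's emphasis on a small parameter and FLOP budget.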
Related papers
- Mamba Policy: Towards Efficient 3D Diffusion Policy with Hybrid Selective State Models [20.956716048789474]
The Mamba model has emerged as a promising solution for efficient modeling.
We propose the Mamba Policy, which reduces the parameter count by over 80% compared to the original policy network.
Extensive experiments demonstrate that the Mamba Policy excels on the Adroit, Dexart, and MetaWorld datasets.
arXiv Detail & Related papers (2024-09-11T10:21:21Z)
- ReMamba: Equip Mamba with Effective Long-Sequence Modeling [50.530839868893786]
We propose ReMamba, which enhances Mamba's ability to comprehend long contexts.
ReMamba incorporates selective compression and adaptation techniques within a two-stage re-forward process.
arXiv Detail & Related papers (2024-08-28T02:47:27Z)
- LaMamba-Diff: Linear-Time High-Fidelity Diffusion Models Based on Local Attention and Mamba [54.85262314960038]
Local Attentional Mamba blocks capture both global contexts and local details with linear complexity.
Our model exhibits exceptional scalability and surpasses the performance of DiT across various model scales on ImageNet at 256x256 resolution.
Compared to state-of-the-art diffusion models on ImageNet 256x256 and 512x512, our largest model presents notable advantages, such as a reduction of up to 62% in GFLOPs.
arXiv Detail & Related papers (2024-08-05T16:39:39Z)
- MambaLRP: Explaining Selective State Space Sequence Models [18.133138020777295]
Recent sequence modeling approaches using selective state space sequence models, referred to as Mamba models, have seen a surge of interest.
These models allow efficient processing of long sequences in linear time and are rapidly being adopted in a wide range of applications such as language modeling (a minimal sketch of this linear-time recurrence appears after this list).
To foster their reliable use in real-world scenarios, it is crucial to augment their transparency.
arXiv Detail & Related papers (2024-06-11T12:15:47Z)
- MambaUIE&SR: Unraveling the Ocean's Secrets with Only 2.8 GFLOPs [1.7648680700685022]
Underwater Image Enhancement (UIE) techniques aim to address the problem of underwater image degradation due to light absorption and scattering.
In recent years, both Convolutional Neural Network (CNN)-based and Transformer-based methods have been widely explored.
MambaUIE is able to efficiently synthesize global and local information and maintains a very small number of parameters with high accuracy.
arXiv Detail & Related papers (2024-04-22T05:12:11Z)
- MedMamba: Vision Mamba for Medical Image Classification [0.0]
Vision transformers (ViTs) and convolutional neural networks (CNNs) have been extensively studied and widely used in medical image classification tasks.
Recent studies have shown that state space models (SSMs) represented by Mamba can effectively model long-range dependencies.
We propose MedMamba, the first Vision Mamba for generalized medical image classification.
arXiv Detail & Related papers (2024-03-06T16:49:33Z)
- MiM-ISTD: Mamba-in-Mamba for Efficient Infrared Small Target Detection [72.46396769642787]
We develop a nested structure, Mamba-in-Mamba (MiM-ISTD), for efficient infrared small target detection.
MiM-ISTD is $8\times$ faster than the SOTA method and reduces GPU memory usage by 62.2% when testing on $2048 \times 2048$ images.
arXiv Detail & Related papers (2024-03-04T15:57:29Z)
- PointMamba: A Simple State Space Model for Point Cloud Analysis [65.59944745840866]
We propose PointMamba, transferring the success of Mamba, a recent representative state space model (SSM), from NLP to point cloud analysis tasks.
Unlike traditional Transformers, PointMamba employs a linear complexity algorithm, presenting global modeling capacity while significantly reducing computational costs.
arXiv Detail & Related papers (2024-02-16T14:56:13Z)
- Swin-UMamba: Mamba-based UNet with ImageNet-based pretraining [85.08169822181685]
This paper introduces a novel Mamba-based model, Swin-UMamba, designed specifically for medical image segmentation tasks.
Swin-UMamba outperforms CNNs, ViTs, and the latest Mamba-based models by a large margin.
arXiv Detail & Related papers (2024-02-05T18:58:11Z)
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
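A recurring claim across these entries (and in the MpoxMamba abstract itself) is Mamba's linear complexity in sequence length. The reason is that a state space model can be evaluated as a recurrence that touches each token exactly once. The sketch below is a generic illustration rather than any listed paper's code: it shows the recurrence for a diagonal, time-invariant SSM, while Mamba additionally makes the A, B, and C parameters input-dependent (the "selective" part), which is omitted here for brevity.

```python
import torch


def ssm_scan(x: torch.Tensor, A: torch.Tensor, B: torch.Tensor, C: torch.Tensor) -> torch.Tensor:
    """Linear-time scan of a diagonal state space model.

    x: (L, D) input sequence; A, B, C: (D, N) parameters with N hidden states per
    channel. Each step costs O(D*N), so the full sequence costs O(L*D*N) -- linear
    in the sequence length L, unlike the O(L^2) pairwise cost of self-attention.
    """
    L, D = x.shape
    N = A.shape[-1]
    h = torch.zeros(D, N)                       # hidden state carried across time steps
    ys = []
    for t in range(L):                          # a single pass over the sequence
        h = A * h + B * x[t].unsqueeze(-1)      # h_t = A * h_{t-1} + B * x_t (diagonal A)
        ys.append((h * C).sum(dim=-1))          # y_t = <C, h_t> per channel
    return torch.stack(ys)                      # (L, D) outputs


if __name__ == "__main__":
    L, D, N = 1024, 8, 16
    x = torch.randn(L, D)
    A = torch.rand(D, N) * 0.9                  # decay factors in (0, 0.9) keep the state stable
    B, C = torch.randn(D, N), torch.randn(D, N)
    print(ssm_scan(x, A, B, C).shape)           # torch.Size([1024, 8])
```

In practice this scan is parallelized on the GPU (for example with an associative scan or Mamba's hardware-aware kernel), but the per-token cost, and hence the linear scaling that the papers above rely on, stays the same.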
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.