Faster Segment Anything: Towards Lightweight SAM for Mobile Applications
- URL: http://arxiv.org/abs/2306.14289v2
- Date: Sat, 1 Jul 2023 07:26:22 GMT
- Title: Faster Segment Anything: Towards Lightweight SAM for Mobile Applications
- Authors: Chaoning Zhang, Dongshen Han, Yu Qiao, Jung Uk Kim, Sung-Ho Bae,
Seungkyu Lee, Choong Seon Hong
- Abstract summary: This work aims to make Segment Anything Model (SAM) mobile-friendly by replacing the heavyweight image encoder with a lightweight one.
We distill the knowledge from the heavy image encoder into a lightweight one, which is automatically compatible with the mask decoder in the original SAM.
The resulting lightweight SAM, termed MobileSAM, is more than 60 times smaller yet performs on par with the original SAM.
- Score: 47.177751899636164
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Segment Anything Model (SAM) has attracted significant attention due to its
impressive zero-shot transfer performance and high versatility for numerous
vision applications (like image editing with fine-grained control). Many such
applications need to run on resource-constrained edge devices, such as mobile
phones. In this work, we aim to make SAM mobile-friendly by replacing the
heavyweight image encoder with a lightweight one. Naively training such a new
SAM as in the original SAM paper leads to unsatisfactory performance,
especially when limited training resources are available. We find that this is
mainly caused by the coupled optimization of the image encoder and mask
decoder, and this motivates our decoupled distillation. Concretely, we
distill the knowledge from the heavy image encoder (ViT-H in the original SAM)
into a lightweight image encoder, which is automatically compatible with the
mask decoder in the original SAM. The training can be completed on a single
GPU in less than a day, and the resulting lightweight SAM, termed MobileSAM,
is more than 60 times smaller yet performs on par with the original SAM.
For inference speed on a single GPU, MobileSAM runs in around 10ms per image:
8ms on the image encoder and 4ms on the mask decoder. With superior
performance, our MobileSAM is around 5 times faster than the concurrent FastSAM
and 7 times smaller, making it more suitable for mobile applications. Moreover,
we show that MobileSAM can run relatively smoothly on CPU. The code for our
project, together with a demo of MobileSAM running on CPU, is provided at
https://github.com/ChaoningZhang/MobileSAM.
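The decoupled distillation described above reduces to a simple feature-regression objective: the original ViT-H image encoder is frozen as a teacher, and the lightweight student encoder is trained to reproduce its image embeddings, so SAM's prompt encoder and mask decoder can be reused unchanged. A minimal PyTorch sketch, assuming the (B, 256, 64, 64) embedding shape of the original SAM encoder; the function and module names are illustrative, not the paper's actual code:

```python
import torch
import torch.nn.functional as F

def distill_step(teacher_encoder, student_encoder, images, optimizer):
    """One decoupled-distillation step: regress the student's image
    embeddings onto the frozen teacher's; no mask decoder is involved."""
    with torch.no_grad():
        target = teacher_encoder(images)   # ViT-H embeddings, e.g. (B, 256, 64, 64)
    pred = student_encoder(images)         # lightweight encoder, same output shape
    loss = F.mse_loss(pred, target)        # simple feature regression
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the encoder is optimized, against fixed targets, the student drops into the original SAM pipeline without retraining the mask decoder, which is what makes single-GPU, sub-one-day training plausible.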
Related papers
- MAS-SAM: Segment Any Marine Animal with Aggregated Features [55.91291540810978]
We propose a novel feature learning framework named MAS-SAM for marine animal segmentation.
Our method enables extracting richer marine information, from global contextual cues to fine-grained local details.
arXiv Detail & Related papers (2024-04-24T07:38:14Z)
- SAM-Lightening: A Lightweight Segment Anything Model with Dilated Flash Attention to Achieve 30 times Acceleration [6.515075311704396]
Segment Anything Model (SAM) has garnered significant attention in segmentation tasks due to its zero-shot generalization ability.
We introduce SAM-Lightening, a variant of SAM that features a re-engineered attention mechanism termed Dilated Flash Attention.
Experiments on COCO and LVIS reveal that SAM-Lightening significantly outperforms the state-of-the-art methods in both run-time efficiency and segmentation accuracy.
arXiv Detail & Related papers (2024-03-14T09:07:34Z)
- TinySAM: Pushing the Envelope for Efficient Segment Anything Model [76.21007576954035]
We propose a framework to obtain a tiny segment anything model (TinySAM) while maintaining the strong zero-shot performance.
We first propose a full-stage knowledge distillation method with hard prompt sampling and a hard mask weighting strategy to distill a lightweight student model.
We also adapt post-training quantization to the promptable segmentation task, further reducing the computational cost (a minimal sketch follows this list).
arXiv Detail & Related papers (2023-12-21T12:26:11Z)
- EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM [71.868623296582]
EdgeSAM is an accelerated variant of the Segment Anything Model (SAM).
Our approach involves distilling the original ViT-based SAM image encoder into a purely CNN-based architecture.
It is the first SAM variant that can run at over 30 FPS on an iPhone 14.
arXiv Detail & Related papers (2023-12-11T18:59:52Z)
- RepViT-SAM: Towards Real-Time Segmenting Anything [71.94042743317937]
Segment Anything Model (SAM) has shown impressive zero-shot transfer performance for various computer vision tasks.
MobileSAM proposes to replace the heavyweight image encoder in SAM with TinyViT by employing distillation.
RepViT-SAM can enjoy significantly better zero-shot transfer capability than MobileSAM, along with nearly 10× faster inference speed.
arXiv Detail & Related papers (2023-12-10T04:42:56Z)
- EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything [36.553867358541154]
Segment Anything Model (SAM) has emerged as a powerful tool for numerous vision applications.
We propose EfficientSAMs, lightweight SAM models that exhibit decent performance with largely reduced complexity.
Our idea is based on leveraging masked image pretraining, SAMI, which learns to reconstruct features from the SAM image encoder for effective visual representation learning (see the sketch after this list).
arXiv Detail & Related papers (2023-12-01T18:31:00Z)
- AutoSAM: Adapting SAM to Medical Images by Overloading the Prompt Encoder [101.28268762305916]
In this work, we replace the Segment Anything Model's prompt conditioning with an encoder that operates on the same input image.
We obtain state-of-the-art results on multiple medical image and video benchmarks.
To inspect the knowledge within it, and to provide a lightweight segmentation solution, we also learn to decode it into a mask with a shallow deconvolution network (see the sketch after this list).
arXiv Detail & Related papers (2023-06-10T07:27:00Z)
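TinySAM's post-training quantization step, referenced in its summary above, can be approximated with stock PyTorch dynamic quantization; the toy decoder below is an illustrative stand-in, not TinySAM's actual architecture or recipe:

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a SAM-style mask decoder (not TinySAM's real one).
mask_decoder = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 4))

# Dynamic post-training quantization: Linear weights become int8 with no
# retraining, shrinking the model and speeding up CPU inference.
quantized_decoder = torch.ao.quantization.quantize_dynamic(
    mask_decoder, {nn.Linear}, dtype=torch.qint8
)
print(quantized_decoder)
```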
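EfficientSAM's SAMI objective, as summarized above, trains a light encoder to reconstruct the frozen SAM encoder's features at masked positions. A hedged sketch of such a masked feature-reconstruction loss; the tensor shapes and 75% mask ratio are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def sami_style_loss(teacher_feats, student_feats, mask):
    """Masked feature reconstruction in the spirit of SAMI.
    teacher_feats, student_feats: (B, N, D) token features
    mask: (B, N) bool, True where the input patch was masked out."""
    per_token = F.mse_loss(student_feats, teacher_feats, reduction="none").mean(-1)
    return (per_token * mask).sum() / mask.sum().clamp(min=1)

# Toy usage with random tensors standing in for real encoder outputs.
B, N, D = 2, 196, 256
teacher = torch.randn(B, N, D)                      # frozen SAM encoder features
student = torch.randn(B, N, D, requires_grad=True)  # lightweight encoder features
mask = torch.rand(B, N) < 0.75                      # assumed 75% mask ratio
loss = sami_style_loss(teacher, student, mask)
loss.backward()
```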
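The shallow deconvolution network in the AutoSAM summary can be as small as two transposed convolutions that upsample an image embedding into a mask logit map; the layer sizes below are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

# A shallow deconvolution head: upsamples a SAM-style (B, 256, 64, 64)
# image embedding to a (B, 1, 256, 256) mask logit map.
decoder = nn.Sequential(
    nn.ConvTranspose2d(256, 64, kernel_size=2, stride=2),  # 64x64 -> 128x128
    nn.GELU(),
    nn.ConvTranspose2d(64, 16, kernel_size=2, stride=2),   # 128x128 -> 256x256
    nn.GELU(),
    nn.Conv2d(16, 1, kernel_size=1),                       # per-pixel mask logit
)

embedding = torch.randn(1, 256, 64, 64)
print(decoder(embedding).shape)  # torch.Size([1, 1, 256, 256])
```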
This list is automatically generated from the titles and abstracts of the papers on this site.