SAMBA: A Trainable Segmentation Web-App with Smart Labelling
- URL: http://arxiv.org/abs/2312.04197v1
- Date: Thu, 7 Dec 2023 10:31:05 GMT
- Title: SAMBA: A Trainable Segmentation Web-App with Smart Labelling
- Authors: Ronan Docherty, Isaac Squires, Antonis Vamvakeros, Samuel J. Cooper
- Abstract summary: SAMBA is a trainable segmentation tool that uses Meta's Segment Anything Model (SAM) for fast, high-quality label suggestions.
The segmentation backend runs in the cloud, so the user does not need powerful hardware.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Segmentation is the assigning of a semantic class to every pixel in an image
and is a prerequisite for various statistical analysis tasks in materials
science, like phase quantification, physics simulations or morphological
characterization. The wide range of length scales, imaging techniques and
materials studied in materials science means any segmentation algorithm must
generalise to unseen data and support abstract, user-defined semantic classes.
Trainable segmentation is a popular interactive segmentation paradigm where a
classifier is trained to map from image features to user drawn labels. SAMBA is
a trainable segmentation tool that uses Meta's Segment Anything Model (SAM) for
fast, high-quality label suggestions and a random forest classifier for robust,
generalizable segmentations. It is accessible in the browser
(https://www.sambasegment.com/) without the need to download any external
dependencies. The segmentation backend runs in the cloud, so it does not require
the user to have powerful hardware.
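As a rough illustration of the trainable-segmentation paradigm described above (a classifier trained to map per-pixel image features to sparse, user-drawn labels), the sketch below fits a random forest to the labelled pixels and then applies it to the whole image. This is a minimal example using scikit-image and scikit-learn, not the actual SAMBA backend; the feature set, class encoding and hyperparameters are illustrative assumptions.

```python
import numpy as np
from skimage.feature import multiscale_basic_features
from sklearn.ensemble import RandomForestClassifier

def train_and_segment(image: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Trainable-segmentation sketch (not the SAMBA implementation).

    image:  2D greyscale array.
    labels: same shape; 0 = unlabelled, 1..K = user-drawn class labels.
    Returns a dense segmentation with values 1..K.
    """
    # Classic trainable-segmentation features: intensity, edge and texture
    # responses at several Gaussian scales, one feature vector per pixel.
    features = multiscale_basic_features(
        image, intensity=True, edges=True, texture=True,
        sigma_min=1, sigma_max=16,
    )
    flat_features = features.reshape(-1, features.shape[-1])
    flat_labels = labels.ravel()

    # Fit only on the pixels the user has actually labelled.
    labelled = flat_labels > 0
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(flat_features[labelled], flat_labels[labelled])

    # Apply the trained classifier to every pixel for the full segmentation.
    return clf.predict(flat_features).reshape(image.shape)
```

In SAMBA's workflow, SAM supplies the fast label suggestions that populate the sparse label map, while a random forest classifier of this kind produces the final, generalisable segmentation.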
Related papers
- USE: Universal Segment Embeddings for Open-Vocabulary Image Segmentation [33.11010205890195]
The main challenge in open-vocabulary image segmentation now lies in accurately classifying image segments into text-defined categories.
We introduce the Universal Segment Embedding (USE) framework to address this challenge.
This framework is comprised of two key components: 1) a data pipeline designed to efficiently curate a large amount of segment-text pairs at various granularities, and 2) a universal segment embedding model that enables precise segment classification into a vast range of text-defined categories.
arXiv Detail & Related papers (2024-06-07T21:41:18Z)
- Learning Semantic Segmentation with Query Points Supervision on Aerial Images [57.09251327650334]
We present a weakly supervised learning algorithm that trains semantic segmentation models from query-point annotations.
Our proposed approach performs accurate semantic segmentation and improves efficiency by significantly reducing the cost and time required for manual annotation.
arXiv Detail & Related papers (2023-09-11T14:32:04Z)
- Synthetic Instance Segmentation from Semantic Image Segmentation Masks [15.477053085267404]
We propose a novel paradigm called Synthetic Instance Segmentation (SISeg).
SISeg obtains instance segmentation results by leveraging image masks generated by existing semantic segmentation models.
In other words, the proposed model requires no extra manual annotation effort or additional computational expense.
arXiv Detail & Related papers (2023-08-02T05:13:02Z)
- SegGPT: Segmenting Everything In Context [98.98487097934067]
We present SegGPT, a model for segmenting everything in context.
We unify various segmentation tasks into a generalist in-context learning framework.
SegGPT can perform arbitrary segmentation tasks in images or videos via in-context inference.
arXiv Detail & Related papers (2023-04-06T17:59:57Z)
- Open-world Semantic Segmentation via Contrasting and Clustering Vision-Language Embedding [95.78002228538841]
We propose a new open-world semantic segmentation pipeline that makes the first attempt to learn to segment semantic objects of various open-world categories without any dense annotation effort.
Our method can directly segment objects of arbitrary categories, outperforming zero-shot segmentation methods that require data labeling on three benchmark datasets.
arXiv Detail & Related papers (2022-07-18T09:20:04Z)
- Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models [54.49581189337848]
We propose a method that enables end-to-end pre-training of image segmentation models on classification datasets.
The proposed method leverages a weighted segmentation learning procedure to pre-train the segmentation network en masse.
Experiment results show that, with ImageNet accompanied by PSSL as the source dataset, the proposed end-to-end pre-training strategy successfully boosts the performance of various segmentation models.
arXiv Detail & Related papers (2022-07-04T13:02:32Z)
- Semantic Segmentation In-the-Wild Without Seeing Any Segmentation Examples [34.97652735163338]
We propose a novel approach for creating semantic segmentation masks for every object.
Our method takes as input the image-level labels of the class categories present in the image.
The output is a set of pixel-level pseudo-labels, replacing the manual pixel-level labels required by supervised methods.
arXiv Detail & Related papers (2021-12-06T17:32:38Z)
- Segmenter: Transformer for Semantic Segmentation [79.9887988699159]
We introduce Segmenter, a transformer model for semantic segmentation.
We build on the recent Vision Transformer (ViT) and extend it to semantic segmentation.
It outperforms the state of the art on the challenging ADE20K dataset and performs on-par on Pascal Context and Cityscapes.
arXiv Detail & Related papers (2021-05-12T13:01:44Z)
- Semantically Meaningful Class Prototype Learning for One-Shot Image Semantic Segmentation [58.96902899546075]
One-shot semantic image segmentation aims to segment the object regions for the novel class with only one annotated image.
Recent works adopt the episodic training strategy to mimic the expected situation at testing time.
We propose to leverage multi-class label information during episodic training, encouraging the network to generate more semantically meaningful features for each category.
arXiv Detail & Related papers (2021-02-22T12:07:35Z)
- An Auto-Encoder Strategy for Adaptive Image Segmentation [18.333542893112007]
We propose a novel perspective of segmentation as a discrete representation learning problem.
We present a variational autoencoder segmentation strategy that is flexible and adaptive.
We demonstrate that a Markov Random Field prior can yield significantly better results than a spatially independent prior.
arXiv Detail & Related papers (2020-04-29T00:53:24Z)