Rethinking Interactive Image Segmentation: Feature Space Annotation
- URL: http://arxiv.org/abs/2101.04378v1
- Date: Tue, 12 Jan 2021 10:13:35 GMT
- Title: Rethinking Interactive Image Segmentation: Feature Space Annotation
- Authors: Jordão Bragantini (UNICAMP), Alexandre Falcão (UNICAMP), Laurent Najman (LIGM)
- Abstract summary: We propose interactive and simultaneous segment annotation from multiple images guided by feature space projection.
We show that our approach can surpass the accuracy of state-of-the-art methods in foreground segmentation datasets.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the progress of interactive image segmentation methods, high-quality
pixel-level annotation is still time-consuming and laborious -- a bottleneck
for several deep learning applications. We take a step back to propose
interactive and simultaneous segment annotation from multiple images guided by
feature space projection and optimized by metric learning as the labeling
progresses. This strategy is in stark contrast to existing interactive
segmentation methodologies, which perform annotation in the image domain. We
show that our approach can surpass the accuracy of state-of-the-art methods in
foreground segmentation datasets: iCoSeg, DAVIS, and Rooftop. Moreover, it
achieves 91.5% accuracy in a known semantic segmentation dataset, Cityscapes,
being 74.75 times faster than the original annotation procedure. The appendix
presents additional qualitative results. Code and video demonstration will be
released upon publication.
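The workflow described in the abstract can be sketched minimally: project sample features to 2-D for display, let the annotator label a few projected points, then propagate those labels through feature space. The PCA projection and 1-nearest-neighbor propagation below are simplified stand-ins for the paper's actual projection and metric-learning components, not its implementation:

```python
import numpy as np

def project_2d(features):
    """Project high-dimensional features to 2-D for display.

    PCA via SVD is a simple stand-in for the nonlinear projection the
    paper uses; the point is only to illustrate annotating samples in
    a projected feature space rather than in the image domain.
    """
    X = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T

def propagate_labels(features, seed_idx, seed_labels):
    """Spread a handful of user-given labels to all samples.

    Each sample receives the label of its nearest labeled seed in
    feature space (1-nearest-neighbor propagation).
    """
    seeds = features[seed_idx]  # (k, d) labeled seed features
    dists = np.linalg.norm(features[:, None, :] - seeds[None, :, :], axis=2)
    return np.asarray(seed_labels)[dists.argmin(axis=1)]
```

In the paper's setting the projection and the metric would also be refined as labeling progresses; here both are fixed for brevity.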
Related papers
- IFSENet : Harnessing Sparse Iterations for Interactive Few-shot Segmentation Excellence [2.822194296769473]
Few-shot segmentation techniques reduce the number of images required to learn to segment a new class.
Interactive segmentation techniques, by contrast, focus on incrementally improving the segmentation of one object at a time.
We combine the two concepts to drastically reduce the effort required to train segmentation models for novel classes.
arXiv Detail & Related papers (2024-03-22T10:15:53Z)
- Correlation-aware active learning for surgery video segmentation [13.327429312047396]
This work proposes a novel AL strategy for surgery video segmentation, COWAL, COrrelation-aWare Active Learning.
Our approach involves projecting images into a latent space that has been fine-tuned using contrastive learning and then selecting a fixed number of representative images from local clusters of video frames.
We demonstrate the effectiveness of this approach on two video datasets of surgical instruments and three real-world video datasets.
arXiv Detail & Related papers (2023-11-15T09:30:52Z)
- Learning Semantic Segmentation with Query Points Supervision on Aerial Images [57.09251327650334]
We present a weakly supervised learning algorithm to train semantic segmentation models.
Our proposed approach performs accurate semantic segmentation and improves efficiency by significantly reducing the cost and time required for manual annotation.
arXiv Detail & Related papers (2023-09-11T14:32:04Z)
- From colouring-in to pointillism: revisiting semantic segmentation supervision [48.637031591058175]
We propose a pointillist approach for semantic segmentation annotation, where only point-wise yes/no questions are answered.
We collected and released 22.6M point labels over 4,171 classes on the Open Images dataset.
arXiv Detail & Related papers (2022-10-25T16:42:03Z)
- Open-world Semantic Segmentation via Contrasting and Clustering Vision-Language Embedding [95.78002228538841]
We propose a new open-world semantic segmentation pipeline that makes the first attempt to learn to segment semantic objects of various open-world categories without any dense annotation effort.
On three benchmark datasets, our method can directly segment objects of arbitrary categories, outperforming zero-shot segmentation methods that require data labeling.
arXiv Detail & Related papers (2022-07-18T09:20:04Z)
- Scaling up Multi-domain Semantic Segmentation with Sentence Embeddings [81.09026586111811]
We propose an approach to semantic segmentation that achieves state-of-the-art supervised performance when applied in a zero-shot setting.
This is achieved by replacing each class label with a vector-valued embedding of a short paragraph that describes the class.
The resulting merged semantic segmentation dataset of over 2 million images enables training a model that achieves performance equal to that of state-of-the-art supervised methods on 7 benchmark datasets.
arXiv Detail & Related papers (2022-02-04T07:19:09Z)
- Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation [128.03739769844736]
Two neural co-attentions are incorporated into the classifier to capture cross-image semantic similarities and differences.
In addition to boosting object pattern learning, the co-attention can leverage context from other related images to improve localization map inference.
Our algorithm sets a new state of the art in all these settings, demonstrating its efficacy and generalizability.
arXiv Detail & Related papers (2020-07-03T21:53:46Z)
- Few-Shot Semantic Segmentation Augmented with Image-Level Weak Annotations [23.02986307143718]
Recent progress in few-shot semantic segmentation tackles this issue using only a few pixel-level annotated examples.
Our key idea is to learn a better prototype representation of the class by fusing the knowledge from the image-level labeled data.
We propose a new framework, called PAIA, to learn the class prototype representation in a metric space by integrating image-level annotations.
arXiv Detail & Related papers (2020-07-03T04:58:20Z)
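The sentence-embedding idea above (replacing each class label with a text embedding of its description) amounts to pixel-wise nearest-class matching in a shared embedding space. The sketch below illustrates that matching step only; `embed_text` is a hypothetical stand-in for a real sentence encoder:

```python
import zlib
import numpy as np

def embed_text(description, dim=16):
    # Hypothetical stand-in for a sentence encoder: a deterministic
    # pseudo-random unit vector per description. A real system would
    # use a language-model embedding of the class description.
    rng = np.random.default_rng(zlib.crc32(description.encode()))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def classify_pixels(pixel_feats, class_vecs):
    # Cosine similarity between each pixel feature and each class
    # embedding; every pixel is assigned its most similar class.
    P = pixel_feats / np.linalg.norm(pixel_feats, axis=1, keepdims=True)
    C = class_vecs / np.linalg.norm(class_vecs, axis=1, keepdims=True)
    return (P @ C.T).argmax(axis=1)
```

Because classes are represented by text vectors rather than fixed indices, unseen classes can be added at inference time by embedding a new description, which is what enables the zero-shot behavior described above.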
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.