Improving Few-Shot Part Segmentation using Coarse Supervision
- URL: http://arxiv.org/abs/2204.05393v1
- Date: Mon, 11 Apr 2022 20:25:14 GMT
- Title: Improving Few-Shot Part Segmentation using Coarse Supervision
- Authors: Oindrila Saha, Zezhou Cheng and Subhransu Maji
- Abstract summary: A key challenge is that these annotations were collected for different tasks and with different labeling styles and cannot be readily mapped to the part labels.
We propose to jointly learn the dependencies between labeling styles and the part segmentation model, allowing us to utilize supervision from diverse labels.
Our approach outperforms baselines based on multi-task learning, semi-supervised learning, and competitive methods relying on loss functions manually designed to exploit sparse supervision.
- Score: 32.34210260693939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A significant bottleneck in training deep networks for part segmentation is
the cost of obtaining detailed annotations. We propose a framework to exploit
coarse labels such as figure-ground masks and keypoint locations that are
readily available for some categories to improve part segmentation models. A
key challenge is that these annotations were collected for different tasks and
with different labeling styles and cannot be readily mapped to the part labels.
To this end, we propose to jointly learn the dependencies between labeling
styles and the part segmentation model, allowing us to utilize supervision from
diverse labels. To evaluate our approach, we develop a benchmark on the
Caltech-UCSD Birds and OID Aircraft datasets. Our approach outperforms baselines
based on multi-task learning, semi-supervised learning, and competitive methods
relying on loss functions manually designed to exploit sparse supervision.
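For illustration only, the sketch below shows one way the idea in the abstract could be wired up in PyTorch: lightweight heads learn to map predicted part probabilities to a figure-ground mask and to keypoint heatmaps, so images carrying only coarse annotations still provide a training signal for the part model. This is a minimal sketch under assumed interfaces, not the authors' implementation; the class, head, and function names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartSegWithCoarseHeads(nn.Module):
    """Part segmentation network plus learned mappings to coarse labeling styles (hypothetical)."""

    def __init__(self, backbone, feat_channels, num_parts, num_keypoints):
        super().__init__()
        self.backbone = backbone  # any dense feature extractor: images -> (B, C, H, W)
        self.part_head = nn.Conv2d(feat_channels, num_parts, kernel_size=1)
        # Hypothetical 1x1 heads that translate part probabilities into the
        # coarse labeling styles (figure-ground masks, keypoint heatmaps).
        self.fg_head = nn.Conv2d(num_parts, 2, kernel_size=1)
        self.kp_head = nn.Conv2d(num_parts, num_keypoints, kernel_size=1)

    def forward(self, images):
        feats = self.backbone(images)
        part_logits = self.part_head(feats)
        part_probs = part_logits.softmax(dim=1)
        return {
            "parts": part_logits,                   # supervised on the few part-labeled images
            "fg": self.fg_head(part_probs),         # supervised on figure-ground masks
            "keypoints": self.kp_head(part_probs),  # supervised on keypoint heatmaps
        }


def joint_loss(outputs, targets):
    """Sum the losses for whichever labeling styles are available for this batch."""
    loss = outputs["parts"].new_zeros(())
    if targets.get("parts") is not None:
        loss = loss + F.cross_entropy(outputs["parts"], targets["parts"])
    if targets.get("fg") is not None:
        loss = loss + F.cross_entropy(outputs["fg"], targets["fg"])
    if targets.get("keypoints") is not None:
        loss = loss + F.mse_loss(outputs["keypoints"].sigmoid(), targets["keypoints"])
    return loss
```

In this sketch, batches mixing labeling styles simply skip the loss terms whose targets are absent, and the coarse-label mapping heads are trained jointly with the segmentation backbone, mirroring the joint learning of labeling-style dependencies described in the abstract.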
Related papers
- Exploring Open-Vocabulary Semantic Segmentation without Human Labels [76.15862573035565]
We present ZeroSeg, a novel method that leverages existing pretrained vision-language (VL) models to train semantic segmentation models without human labels.
ZeroSeg distills the visual concepts learned by VL models into a set of segment tokens, each summarizing a localized region of the target image.
Our approach achieves state-of-the-art performance when compared to other zero-shot segmentation methods under the same training data.
arXiv Detail & Related papers (2023-06-01T08:47:06Z)
- LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds [62.49198183539889]
We propose a label-efficient semantic segmentation pipeline for outdoor scenes with LiDAR point clouds.
Our method co-designs an efficient labeling process with semi/weakly supervised learning.
Our proposed method is highly competitive even with its fully supervised counterpart trained on 100% of the labels.
arXiv Detail & Related papers (2022-10-14T19:13:36Z)
- A Survey on Label-efficient Deep Segmentation: Bridging the Gap between Weak Supervision and Dense Prediction [115.9169213834476]
This paper offers a comprehensive review on label-efficient segmentation methods.
We first develop a taxonomy to organize these methods according to the supervision provided by different types of weak labels.
Next, we summarize the existing label-efficient segmentation methods from a unified perspective.
arXiv Detail & Related papers (2022-07-04T06:21:01Z)
- Incremental Learning in Semantic Segmentation from Image Labels [18.404068463921426]
Existing semantic segmentation approaches achieve impressive results, but struggle to update their models incrementally as new categories are uncovered.
This paper proposes a novel framework for Weakly Incremental Learning for Semantics, which aims to learn to segment new classes from cheap and widely available image-level labels.
Unlike existing approaches, which generate pseudo-labels offline, we use an auxiliary classifier, trained with image-level labels and regularized by the segmentation model, to obtain pseudo-supervision online and update the model incrementally.
arXiv Detail & Related papers (2021-12-03T12:47:12Z)
- Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised Semantic Segmentation [88.49669148290306]
We propose a novel weakly supervised multi-task framework called AuxSegNet to leverage saliency detection and multi-label image classification as auxiliary tasks.
Inspired by their similar structured semantics, we also propose to learn a cross-task global pixel-level affinity map from the saliency and segmentation representations.
The learned cross-task affinity can be used to refine saliency predictions and propagate CAM maps to provide improved pseudo labels for both tasks.
arXiv Detail & Related papers (2021-07-25T11:39:58Z)
- Few-shot 3D Point Cloud Semantic Segmentation [138.80825169240302]
We propose a novel attention-aware multi-prototype transductive few-shot point cloud semantic segmentation method.
Our proposed method shows significant and consistent improvements compared to baselines in different few-shot point cloud semantic segmentation settings.
arXiv Detail & Related papers (2020-06-22T08:05:25Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large-scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.