A Benchmark for LiDAR-based Panoptic Segmentation based on KITTI
- URL: http://arxiv.org/abs/2003.02371v1
- Date: Wed, 4 Mar 2020 23:44:40 GMT
- Title: A Benchmark for LiDAR-based Panoptic Segmentation based on KITTI
- Authors: Jens Behley and Andres Milioto and Cyrill Stachniss
- Abstract summary: We present an extension of SemanticKITTI for training and evaluation of laser-based panoptic segmentation.
We provide the data and discuss the processing steps needed to enrich a given semantic annotation with temporally consistent instance information.
We present two strong baselines that combine state-of-the-art LiDAR-based semantic segmentation approaches with a state-of-the-art detector.
- Score: 44.79849028988664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Panoptic segmentation is the recently introduced task that tackles semantic
segmentation and instance segmentation jointly. In this paper, we present an
extension of SemanticKITTI, which is a large-scale dataset providing dense
point-wise semantic labels for all sequences of the KITTI Odometry Benchmark,
for training and evaluation of laser-based panoptic segmentation. We provide
the data and discuss the processing steps needed to enrich a given semantic
annotation with temporally consistent instance information, i.e., instance
information that supplements the semantic labels and identifies the same
instance over sequences of LiDAR point clouds. Additionally, we present two
strong baselines that combine state-of-the-art LiDAR-based semantic
segmentation approaches with a state-of-the-art detector that enriches the
segmentation with instance information, and against which other researchers
can compare their approaches. We hope that our extension of SemanticKITTI
with strong baselines enables the creation of novel algorithms for LiDAR-based
panoptic segmentation as much as it has for the original semantic segmentation
and semantic scene completion tasks. Data, code, and an online evaluation using
a hidden test set will be published on http://semantic-kitti.org.
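For readers who want to work with the released labels directly, below is a minimal sketch of decoding a per-point panoptic annotation, assuming the label layout documented by the SemanticKITTI tooling: one 32-bit value per point with the semantic class in the lower 16 bits and the instance ID in the upper 16 bits. The file path in the usage comment is a placeholder, not a reference to a specific released file.

```python
import numpy as np

def load_panoptic_labels(label_path):
    """Decode per-point panoptic labels in the SemanticKITTI layout.

    Each point stores one uint32: the lower 16 bits hold the semantic
    class ID and the upper 16 bits hold a temporally consistent instance
    ID (0 for points that do not belong to any object instance).
    """
    raw = np.fromfile(label_path, dtype=np.uint32)
    semantic = raw & 0xFFFF   # per-point semantic class
    instance = raw >> 16      # per-point instance ID, stable across scans
    return semantic, instance

# Usage (placeholder path within a sequence from semantic-kitti.org):
# sem, inst = load_panoptic_labels("sequences/08/labels/000000.label")
```

Because the instance ID occupies the upper bits, the full 32-bit value can also serve directly as a unique panoptic ID per object across a sequence.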
Related papers
- Auxiliary Tasks Enhanced Dual-affinity Learning for Weakly Supervised
Semantic Segmentation [79.05949524349005]
We propose AuxSegNet+, a weakly supervised auxiliary learning framework to explore the rich information from saliency maps.
We also propose a cross-task affinity learning mechanism to learn pixel-level affinities from the saliency and segmentation feature maps.
arXiv Detail & Related papers (2024-03-02T10:03:21Z) - P2Seg: Pointly-supervised Segmentation via Mutual Distillation [23.979786026101024]
We develop a Mutual Distillation Module (MDM) to leverage the complementary strengths of both instance position and semantic information.
Our method achieves 55.7 mAP$_{50}$ on PASCAL VOC and 17.6 mAP on MS COCO.
arXiv Detail & Related papers (2024-01-18T03:41:38Z) - Lidar Panoptic Segmentation and Tracking without Bells and Whistles [48.078270195629415]
We propose a detection-centric network for lidar segmentation and tracking.
One of the core components of our network is the object instance detection branch.
We evaluate our method on several 3D/4D LPS benchmarks and observe that our model establishes a new state-of-the-art among open-sourced models.
arXiv Detail & Related papers (2023-10-19T04:44:43Z) - Advancing Incremental Few-shot Semantic Segmentation via Semantic-guided
Relation Alignment and Adaptation [98.51938442785179]
Incremental few-shot semantic segmentation aims to incrementally extend a semantic segmentation model to novel classes.
This task faces a severe semantic-aliasing issue between base and novel classes due to data imbalance.
We propose the Semantic-guided Relation Alignment and Adaptation (SRAA) method that fully considers the guidance of prior semantic information.
arXiv Detail & Related papers (2023-05-18T10:40:52Z) - Scaling up Multi-domain Semantic Segmentation with Sentence Embeddings [81.09026586111811]
We propose an approach to semantic segmentation that achieves state-of-the-art supervised performance when applied in a zero-shot setting.
This is achieved by replacing each class label with a vector-valued embedding of a short paragraph that describes the class.
The resulting merged semantic segmentation dataset of over 2 million images enables training a model that matches the performance of state-of-the-art supervised methods on 7 benchmark datasets (see the sketch after this list).
arXiv Detail & Related papers (2022-02-04T07:19:09Z) - CPSeg: Cluster-free Panoptic Segmentation of 3D LiDAR Point Clouds [2.891413712995641]
We propose a novel real-time end-to-end panoptic segmentation network for LiDAR point clouds, called CPSeg.
CPSeg comprises a shared encoder, a dual decoder, a task-aware attention module (TAM) and a cluster-free instance segmentation head.
arXiv Detail & Related papers (2021-11-02T16:44:06Z) - Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with
Self-Supervised Depth Estimation [94.16816278191477]
We present a framework for semi-supervised and domain-adaptive semantic segmentation.
It is enhanced by self-supervised monocular depth estimation trained only on unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset.
arXiv Detail & Related papers (2021-08-28T01:33:38Z) - Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised
Semantic Segmentation [88.49669148290306]
We propose a novel weakly supervised multi-task framework called AuxSegNet to leverage saliency detection and multi-label image classification as auxiliary tasks.
Inspired by their similar structured semantics, we also propose to learn a cross-task global pixel-level affinity map from the saliency and segmentation representations.
The learned cross-task affinity can be used to refine saliency predictions and propagate CAM maps to provide improved pseudo labels for both tasks.
arXiv Detail & Related papers (2021-07-25T11:39:58Z)
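As a worked illustration of the sentence-embedding idea summarized above (Scaling up Multi-domain Semantic Segmentation with Sentence Embeddings), here is a hypothetical sketch: class labels are replaced by text embeddings of short class descriptions, and per-pixel features from a segmentation backbone (assumed to be trained into the same embedding space, not shown) are classified by cosine similarity against those embeddings. The encoder, feature dimension, and class descriptions below are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

def embed_descriptions(descriptions, encoder):
    """Encode short class descriptions into L2-normalized vectors.

    `encoder` is any callable mapping a list of strings to a (C, D) array,
    e.g. a sentence-embedding model; it stands in for the paper's text encoder.
    """
    emb = np.asarray(encoder(descriptions), dtype=np.float32)
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

def classify_pixels(pixel_features, class_embeddings):
    """Assign each pixel the class whose description embedding is most similar.

    pixel_features: (H, W, D) array from a segmentation backbone assumed to
    produce features comparable with the text embeddings.
    """
    feats = pixel_features / np.linalg.norm(pixel_features, axis=-1, keepdims=True)
    scores = feats @ class_embeddings.T   # (H, W, C) cosine similarities
    return scores.argmax(axis=-1)         # (H, W) predicted class indices

# Toy usage with random features and a dummy encoder (illustration only):
rng = np.random.default_rng(0)
dummy_encoder = lambda texts: rng.normal(size=(len(texts), 64))
classes = embed_descriptions(["a paved road surface", "a walking person"], dummy_encoder)
labels = classify_pixels(rng.normal(size=(4, 4, 64)).astype(np.float32), classes)
```

The design point the summary highlights is that new class vocabularies or merged datasets only require new description embeddings, not a new classification head.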
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.