Safety Metrics for Semantic Segmentation in Autonomous Driving
- URL: http://arxiv.org/abs/2105.10142v1
- Date: Fri, 21 May 2021 05:59:49 GMT
- Title: Safety Metrics for Semantic Segmentation in Autonomous Driving
- Authors: Chih-Hong Cheng, Alois Knoll, Hsuan-Cheng Liao
- Abstract summary: In this paper, we consider safety-aware correctness and robustness metrics specialized for semantic segmentation.
The novelty of our proposal is to move beyond pixel-level metrics: given two images, each with N class-flipped pixels, the designed metrics should reflect different levels of safety criticality.
The result evaluated on an autonomous driving dataset demonstrates the validity and practicality of our proposed methodology.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Within the context of autonomous driving, safety-related metrics for deep
neural networks have been widely studied for image classification and object
detection. In this paper, we further consider safety-aware correctness and
robustness metrics specialized for semantic segmentation. The novelty of our
proposal is to move beyond pixel-level metrics: given two images, each with N
class-flipped pixels, the designed metrics should reflect different levels of
safety criticality depending on how the flipped pixels cluster and where they
occur. The results, evaluated on an autonomous driving dataset, demonstrate
the validity and practicality of the proposed methodology.
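The paper's exact metric definitions are not reproduced here, but the core idea (that N clustered class-flips should be scored as more safety-critical than N scattered ones) can be sketched as follows. The function name, the 4-connectivity choice, and the superlinear exponent `alpha` are illustrative assumptions, not the authors' formulation.

```python
from collections import deque

def cluster_weighted_error(gt, pred, alpha=1.5):
    """Illustrative clustering-aware error (not the paper's metric):
    misclassified pixels are grouped into 4-connected clusters, and each
    cluster contributes size**alpha. With alpha > 1, N clustered flips
    are penalized more heavily than N scattered flips."""
    h, w = len(gt), len(gt[0])
    wrong = [[gt[r][c] != pred[r][c] for c in range(w)] for r in range(h)]
    seen = [[False] * w for _ in range(h)]
    score = 0.0
    for r in range(h):
        for c in range(w):
            if wrong[r][c] and not seen[r][c]:
                # BFS flood fill to measure the size of this error cluster
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and wrong[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                score += size ** alpha
    return score
```

On a 4x4 grid, four isolated flips score 4 * 1**1.5 = 4.0, while a 2x2 block of flips scores 4**1.5 = 8.0, capturing the intended asymmetry. A location weight (e.g. emphasizing image regions near the ego vehicle) could be folded into each cluster's contribution in the same way.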
Related papers
- Segmentation Re-thinking Uncertainty Estimation Metrics for Semantic Segmentation [12.532289778772185]
Semantic segmentation is a fundamental task in machine learning.
The metric known as PAvPU (Patch Accuracy versus Patch Uncertainty) has been developed as a specialized tool for evaluating entropy-based uncertainty in image segmentation tasks.
Our investigation identifies three core deficiencies within the PAvPU framework and proposes robust solutions.
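For context on the metric this paper critiques: PAvPU is commonly defined as the fraction of patches that are either accurate-and-certain or inaccurate-and-uncertain. A minimal sketch, assuming per-patch accuracy and uncertainty have already been reduced to boolean flags (the thresholding details are omitted here):

```python
def pavpu(accurate, uncertain):
    """Patch Accuracy vs Patch Uncertainty: (n_ac + n_iu) / n_total,
    where n_ac counts accurate-and-certain patches and n_iu counts
    inaccurate-and-uncertain patches. Inputs are parallel lists of
    per-patch boolean flags."""
    n_ac = sum(a and not u for a, u in zip(accurate, uncertain))
    n_iu = sum((not a) and u for a, u in zip(accurate, uncertain))
    return (n_ac + n_iu) / len(accurate)
```

A well-calibrated model should be certain where it is accurate and uncertain where it is not, pushing this ratio toward 1.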
arXiv Detail & Related papers (2024-03-28T20:34:02Z) - Pixel-Level Clustering Network for Unsupervised Image Segmentation [3.69853388955692]
We present a pixel-level clustering framework for segmenting images into regions without using ground truth annotations.
We also propose a training strategy that utilizes intra-consistency within each superpixel, inter-similarity/dissimilarity between neighboring superpixels, and structural similarity between images.
arXiv Detail & Related papers (2023-10-24T23:06:29Z) - Introspective Deep Metric Learning [91.47907685364036]
We propose an introspective deep metric learning framework for uncertainty-aware comparisons of images.
The proposed IDML framework improves the performance of deep metric learning through uncertainty modeling.
arXiv Detail & Related papers (2023-09-11T16:21:13Z) - NP-SemiSeg: When Neural Processes meet Semi-Supervised Semantic Segmentation [87.50830107535533]
Semi-supervised semantic segmentation involves assigning pixel-wise labels to unlabeled images at training time.
Current approaches to semi-supervised semantic segmentation work by predicting pseudo-labels for each pixel from a class-wise probability distribution output by a model.
In this work, we move one step forward by adapting NPs to semi-supervised semantic segmentation, resulting in a new model called NP-SemiSeg.
arXiv Detail & Related papers (2023-08-05T12:42:15Z) - Instance Segmentation with Cross-Modal Consistency [13.524441194366544]
We introduce a novel approach to instance segmentation that jointly leverages measurements from multiple sensor modalities.
Our technique applies contrastive learning to points in the scene both across sensor modalities and the temporal domain.
We demonstrate that this formulation encourages the models to learn embeddings that are invariant to viewpoint variations.
arXiv Detail & Related papers (2022-10-14T21:17:19Z) - Introspective Deep Metric Learning for Image Retrieval [80.29866561553483]
We argue that a good similarity model should consider the semantic discrepancies with caution to better deal with ambiguous images for more robust training.
We propose to represent an image using not only a semantic embedding but also an accompanying uncertainty embedding, which describes the semantic characteristics and ambiguity of an image, respectively.
The proposed IDML framework improves the performance of deep metric learning through uncertainty modeling and attains state-of-the-art results on the widely used CUB-200-2011, Cars196, and Stanford Online Products datasets.
arXiv Detail & Related papers (2022-05-09T17:51:44Z) - Uncertainty Aware Proposal Segmentation for Unknown Object Detection [13.249453757295083]
This paper proposes to exploit additional predictions of semantic segmentation models and to quantify their confidence.
We use object proposals generated by a Region Proposal Network (RPN) and adapt distance-aware uncertainty estimation from semantic segmentation.
The augmented object proposals are then used to train a classifier for known vs. unknown object categories.
arXiv Detail & Related papers (2021-11-25T01:53:05Z) - Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
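The disagreement idea above can be sketched concretely: given class predictions from several models (or forward passes) for the same pixels, score each pixel by the fraction of predictor pairs that disagree. The simple 0/1 label mismatch used here is a stand-in for the paper's dissimilarity function, and the function name is hypothetical.

```python
from itertools import combinations

def disagreement_uncertainty(predictions):
    """Per-pixel uncertainty from prediction disagreement: the fraction
    of predictor pairs whose class labels differ at each pixel.
    0.0 means full agreement, 1.0 means every pair disagrees.
    `predictions` is a list of equal-length per-pixel label sequences."""
    n_pairs = len(predictions) * (len(predictions) - 1) // 2
    n_pixels = len(predictions[0])
    scores = []
    for i in range(n_pixels):
        disagreements = sum(a[i] != b[i] for a, b in combinations(predictions, 2))
        scores.append(disagreements / n_pairs)
    return scores
```

Because this only compares already-computed label maps, it needs no extra sampling at inference beyond the predictors themselves, which is consistent with the paper's emphasis on low inference-time cost.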
arXiv Detail & Related papers (2021-05-28T09:23:05Z) - SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation [111.61261419566908]
Deep neural networks (DNNs) are usually trained on a closed set of semantic classes.
They are ill-equipped to handle previously-unseen objects.
Detecting and localizing such objects is crucial for safety-critical applications such as perception for automated driving.
arXiv Detail & Related papers (2021-04-30T07:58:19Z) - Self-supervised Segmentation via Background Inpainting [96.10971980098196]
We introduce a self-supervised detection and segmentation approach that can work with single images captured by a potentially moving camera.
We introduce a self-supervised loss function used to train a proposal-based segmentation network.
We apply our method to human detection and segmentation in images that visually depart from those of standard benchmarks and outperform existing self-supervised methods.
arXiv Detail & Related papers (2020-11-11T08:34:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.