Reliability in Semantic Segmentation: Can We Use Synthetic Data?
- URL: http://arxiv.org/abs/2312.09231v1
- Date: Thu, 14 Dec 2023 18:56:07 GMT
- Title: Reliability in Semantic Segmentation: Can We Use Synthetic Data?
- Authors: Thibaut Loiseau, Tuan-Hung Vu, Mickael Chen, Patrick Pérez and
Matthieu Cord
- Abstract summary: This paper challenges cutting-edge generative models to automatically synthesize data for assessing reliability in semantic segmentation.
By fine-tuning Stable Diffusion, we perform zero-shot generation of synthetic data in OOD domains or inpainted with OOD objects.
We demonstrate a high correlation between the performance on synthetic data and the performance on real OOD data, showing the validity of the approach.
- Score: 52.5766244206855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Assessing the reliability of perception models to covariate shifts and
out-of-distribution (OOD) detection is crucial for safety-critical applications
such as autonomous vehicles. By nature of the task, however, the relevant data
is difficult to collect and annotate. In this paper, we challenge cutting-edge
generative models to automatically synthesize data for assessing reliability in
semantic segmentation. By fine-tuning Stable Diffusion, we perform zero-shot
generation of synthetic data in OOD domains or inpainted with OOD objects.
Synthetic data is employed to provide an initial assessment of pretrained
segmenters, thereby offering insights into their performance when confronted
with real edge cases. Through extensive experiments, we demonstrate a high
correlation between the performance on synthetic data and the performance on
real OOD data, showing the validity of the approach. Furthermore, we illustrate how
synthetic data can be utilized to enhance the calibration and OOD detection
capabilities of segmenters.
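The validation step described above, checking that a segmenter's score on synthetic OOD data tracks its score on real OOD data, amounts to computing a correlation across models. A minimal sketch with hypothetical mIoU values (not the paper's actual numbers):

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson correlation between two equal-length lists of scores.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

# Hypothetical per-segmenter mIoU on a synthetic OOD benchmark
# vs. a real OOD benchmark (illustrative numbers only).
miou_synthetic = [55.2, 48.1, 61.7, 44.9, 58.3]
miou_real_ood  = [52.8, 45.0, 60.2, 43.1, 55.9]

r = pearson(miou_synthetic, miou_real_ood)
print(f"correlation between synthetic and real OOD performance: {r:.3f}")
```

A high correlation like this is what justifies using the synthetic benchmark as a cheap proxy for real OOD evaluation.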
Related papers
- Forte : Finding Outliers with Representation Typicality Estimation [0.14061979259370275]
Generative models can now produce synthetic data which is virtually indistinguishable from the real data used to train it.
Recent work on OOD detection has raised doubts that generative model likelihoods are optimal OOD detectors.
We introduce a novel approach that leverages representation learning, and informative summary statistics based on manifold estimation.
arXiv Detail & Related papers (2024-10-02T08:26:37Z)
- Can OOD Object Detectors Learn from Foundation Models? [56.03404530594071]
Out-of-distribution (OOD) object detection is a challenging task due to the absence of open-set OOD data.
Inspired by recent advancements in text-to-image generative models, we study the potential of generative models trained on large-scale open-set data to synthesize OOD samples.
We introduce SyncOOD, a simple data curation method that capitalizes on the capabilities of large foundation models.
arXiv Detail & Related papers (2024-09-08T17:28:22Z)
- Unveiling the Flaws: Exploring Imperfections in Synthetic Data and Mitigation Strategies for Large Language Models [89.88010750772413]
Synthetic data has been proposed as a solution to address the issue of high-quality data scarcity in the training of large language models (LLMs).
Our work delves into these specific flaws associated with question-answer (Q-A) pairs, a prevalent type of synthetic data, and presents a method based on unlearning techniques to mitigate these flaws.
Our work has yielded key insights into the effective use of synthetic data, aiming to promote more robust and efficient LLM training.
arXiv Detail & Related papers (2024-06-18T08:38:59Z)
- Out-of-distribution Detection with Implicit Outlier Transformation [72.73711947366377]
Outlier exposure (OE) is powerful in out-of-distribution (OOD) detection.
We propose a novel OE-based approach that makes the model perform well for unseen OOD situations.
arXiv Detail & Related papers (2023-03-09T04:36:38Z)
- Non-Parametric Outlier Synthesis [35.20765580915213]
Out-of-distribution (OOD) detection is indispensable for safely deploying machine learning models in the wild.
We propose a novel framework, Non-Parametric Outlier Synthesis (NPOS), which generates artificial OOD training data.
We show that our synthesis approach can be mathematically interpreted as a rejection sampling framework.
arXiv Detail & Related papers (2023-03-06T08:51:00Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- OODformer: Out-Of-Distribution Detection Transformer [15.17006322500865]
In real-world safety-critical applications, it is important to know whether a new data point is OOD.
This paper proposes a first-of-its-kind OOD detection architecture named OODformer.
arXiv Detail & Related papers (2021-07-19T15:46:38Z)
- Validation of Simulation-Based Testing: Bypassing Domain Shift with Label-to-Image Synthesis [9.531148049378672]
We propose a novel framework consisting of a generative label-to-image synthesis model together with different transferability measures.
We validate our approach empirically on a semantic segmentation task on driving scenes.
Although the latter can distinguish between real-life and synthetic tests, in the former we observe surprisingly strong correlations of 0.7 for both cars and pedestrians.
arXiv Detail & Related papers (2021-06-10T07:23:58Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper, we propose a novel method that, from first principles, combines a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we achieve the best of two worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
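Several of the related papers above, notably the energy-based GNNSafe entry, build on the standard energy score for OOD detection, E(x) = -T * logsumexp(logits / T), where lower energy indicates a more in-distribution input. A minimal sketch of that score (not GNNSafe's graph-specific energy propagation):

```python
import math

def energy_score(logits, temperature=1.0):
    # Energy score E(x) = -T * logsumexp(logits / T), computed with a
    # numerically stable logsumexp. Lower energy means the classifier is
    # more confident the input is in-distribution; thresholding on this
    # value yields an OOD detector.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    lse = m + math.log(sum(math.exp(z - m) for z in scaled))
    return -temperature * lse

# A confident (peaked) prediction has lower energy than an uncertain one,
# so a simple threshold separates in-distribution from OOD inputs.
print(energy_score([10.0, 0.0, 0.0]))
print(energy_score([1.0, 1.0, 1.0]))
```

GNNSafe's contribution is adapting this kind of score to graphs; the per-node logit computation and the propagation of energies over edges are specific to that paper.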
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.