Efficient Multi-task Uncertainties for Joint Semantic Segmentation and
Monocular Depth Estimation
- URL: http://arxiv.org/abs/2402.10580v1
- Date: Fri, 16 Feb 2024 11:09:16 GMT
- Authors: Steven Landgraf, Markus Hillemann, Theodor Kapler, Markus Ulrich
- Abstract summary: Many real-world applications are multi-modal in nature and hence benefit from multi-task learning.
In autonomous driving, for example, the joint solution of semantic segmentation and monocular depth estimation has proven to be valuable.
We introduce EMUFormer, a novel student-teacher distillation approach for joint semantic segmentation and monocular depth estimation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantifying predictive uncertainty has emerged as a possible solution to
common challenges of deep neural networks, such as overconfidence and a lack of
explainability and robustness, albeit one that is often computationally expensive.
Many real-world applications are multi-modal in nature and hence benefit from
multi-task learning. In autonomous driving, for example, the joint solution of
semantic segmentation and monocular depth estimation has proven to be valuable.
In this work, we first combine different uncertainty quantification methods
with joint semantic segmentation and monocular depth estimation and evaluate
how they perform in comparison to each other. Additionally, we reveal the
benefits of multi-task learning with regard to the uncertainty quality compared
to solving both tasks separately. Based on these insights, we introduce
EMUFormer, a novel student-teacher distillation approach for joint semantic
segmentation and monocular depth estimation as well as efficient multi-task
uncertainty quantification. By implicitly leveraging the predictive
uncertainties of the teacher, EMUFormer achieves new state-of-the-art results
on Cityscapes and NYUv2 and additionally estimates high-quality predictive
uncertainties for both tasks that are comparable to or better than those of a Deep
Ensemble, despite being an order of magnitude more efficient.
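The abstract leaves the distillation objective unspecified; as a rough, hypothetical sketch, a student can implicitly inherit an ensemble teacher's predictive uncertainty by being trained against the teacher members' averaged softmax, whose softness reflects the teacher's uncertainty. All shapes and function names below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_soft_targets(teacher_logits):
    """Average the member softmaxes; disagreement between members flattens
    the target distribution, encoding the teacher's predictive uncertainty."""
    # teacher_logits: (K, N, C) -- K ensemble members, N pixels, C classes
    return softmax(teacher_logits, axis=-1).mean(axis=0)

def distillation_loss(student_logits, soft_targets):
    """Cross-entropy of the student's prediction against the soft targets."""
    log_p = np.log(softmax(student_logits) + 1e-12)
    return -(soft_targets * log_p).sum(axis=-1).mean()
```

A student minimizing this loss is pushed toward the ensemble mean prediction, which is one plausible reading of "implicitly leveraging the predictive uncertainties of the teacher".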
Related papers
- A Comparative Study on Multi-task Uncertainty Quantification in Semantic Segmentation and Monocular Depth Estimation [9.52671061354338]
We evaluate Monte Carlo Dropout, Deep Sub-Ensembles, and Deep Ensembles for joint semantic segmentation and monocular depth estimation.
Deep Ensembles stand out as the preferred choice, particularly in out-of-domain scenarios.
We highlight the impact of employing different uncertainty thresholds to classify pixels as certain or uncertain.
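The pixel-level thresholding the paper studies can be illustrated with a minimal sketch: compute the predictive entropy of the ensemble's mean softmax per pixel, normalize it to [0, 1], and compare against a chosen cutoff. The threshold value and function names here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def predictive_entropy(member_probs):
    # member_probs: (K, N, C) softmax outputs of K ensemble members
    mean_p = member_probs.mean(axis=0)
    return -(mean_p * np.log(mean_p + 1e-12)).sum(axis=-1)

def certainty_mask(member_probs, threshold=0.5):
    """Pixels whose normalized entropy falls below the threshold count as 'certain'."""
    h = predictive_entropy(member_probs)
    h_norm = h / np.log(member_probs.shape[-1])  # divide by max entropy log(C)
    return h_norm < threshold
```

Sweeping `threshold` trades off how many pixels are kept against how reliable the kept predictions are, which is the effect the paper highlights.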
arXiv Detail & Related papers (2024-05-27T12:12:26Z) - Uncertainty Quantification for DeepONets with Ensemble Kalman Inversion [0.8158530638728501]
In this work, we propose a novel inference approach for efficient uncertainty quantification (UQ) for operator learning by harnessing the power of the Ensemble Kalman Inversion (EKI) approach.
EKI is known for its derivative-free, noise-robust, and highly parallelizable feature, and has demonstrated its advantages for UQ for physics-informed neural networks.
We deploy a mini-batch variant of EKI to accommodate larger datasets, mitigating the computational demand due to large datasets during the training stage.
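For reference, a single EKI step uses only ensemble statistics of the forward map, which is what makes it derivative-free; a minimal sketch with a hypothetical mini-batch wrapper follows (the paper's exact variant, regularization, and batching scheme may differ):

```python
import numpy as np

def eki_update(U, G, y, gamma):
    """One Ensemble Kalman Inversion step.
    U: (J, d) parameter ensemble; G: forward map R^d -> R^m;
    y: (m,) observations; gamma: observation noise variance."""
    J = U.shape[0]
    GU = np.stack([G(u) for u in U])             # (J, m) forward evaluations
    u_bar, g_bar = U.mean(axis=0), GU.mean(axis=0)
    du, dg = U - u_bar, GU - g_bar
    C_ug = du.T @ dg / (J - 1)                   # cross-covariance (d, m)
    C_gg = dg.T @ dg / (J - 1)                   # output covariance (m, m)
    K = C_ug @ np.linalg.inv(C_gg + gamma * np.eye(len(y)))
    return U + (y - GU) @ K.T                    # Kalman-style correction

def minibatch_eki_step(U, G_full, y_full, gamma, batch, rng):
    """Mini-batch variant: subsample observation indices each step so the
    per-step cost scales with the batch size, not the full dataset."""
    idx = rng.choice(len(y_full), size=batch, replace=False)
    return eki_update(U, lambda u: G_full(u)[idx], y_full[idx], gamma)
```

Because the gradient of `G` never appears, the same loop applies to black-box forward models, and the spread of the final ensemble serves as the UQ estimate.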
arXiv Detail & Related papers (2024-03-06T04:02:30Z) - Sharing Knowledge in Multi-Task Deep Reinforcement Learning [57.38874587065694]
We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning.
We prove this by providing theoretical guarantees that highlight the conditions under which it is convenient to share representations among tasks.
arXiv Detail & Related papers (2024-01-17T19:31:21Z) - Diversified Ensemble of Independent Sub-Networks for Robust
Self-Supervised Representation Learning [10.784911682565879]
Ensembling a neural network is a widely recognized approach to enhance model performance, estimate uncertainty, and improve robustness in deep supervised learning.
We present a novel self-supervised training regime that leverages an ensemble of independent sub-networks.
Our method efficiently builds a sub-model ensemble with high diversity, leading to well-calibrated estimates of model uncertainty.
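The link between member diversity and well-calibrated uncertainty is commonly made explicit by decomposing the ensemble's total predictive entropy into an aleatoric part (average member entropy) and an epistemic part (member disagreement, i.e. mutual information). This is a generic sketch, not the paper's exact formulation:

```python
import numpy as np

def uncertainty_decomposition(member_probs):
    """Split total predictive entropy into aleatoric and epistemic parts.
    member_probs: (K, N, C) softmax outputs of K sub-models."""
    eps = 1e-12
    mean_p = member_probs.mean(axis=0)
    total = -(mean_p * np.log(mean_p + eps)).sum(-1)                 # H[mean p]
    aleatoric = -(member_probs * np.log(member_probs + eps)).sum(-1).mean(0)  # mean H[p]
    epistemic = total - aleatoric                                    # mutual information
    return total, aleatoric, epistemic
```

High-diversity sub-ensembles produce a nonzero epistemic term on inputs the members disagree on, which is the signal a calibrated uncertainty estimate relies on.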
arXiv Detail & Related papers (2023-08-28T16:58:44Z) - Contrastive Multi-Task Dense Prediction [11.227696986100447]
A core design objective is how to effectively model cross-task interactions to achieve comprehensive improvements across the different tasks.
We introduce feature-wise contrastive consistency into modeling the cross-task interactions for multi-task dense prediction.
We propose a novel multi-task contrastive regularization method based on the consistency to effectively boost the representation learning of the different sub-tasks.
arXiv Detail & Related papers (2023-07-16T03:54:01Z) - Uncertainty Estimation by Fisher Information-based Evidential Deep
Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z) - Semantics-Depth-Symbiosis: Deeply Coupled Semi-Supervised Learning of
Semantics and Depth [83.94528876742096]
We tackle the MTL problem of two dense tasks, i.e., semantic segmentation and depth estimation, and present a novel attention module called the Cross-Channel Attention Module (CCAM).
In a true symbiotic spirit, we then formulate a novel data augmentation for the semantic segmentation task using predicted depth called AffineMix, and a simple depth augmentation using predicted semantics called ColorAug.
Finally, we validate the performance gain of the proposed method on the Cityscapes dataset, which helps us achieve state-of-the-art results for a semi-supervised joint model based on depth and semantics.
arXiv Detail & Related papers (2022-06-21T17:40:55Z) - Adversarial Robustness with Semi-Infinite Constrained Learning [177.42714838799924]
The vulnerability of deep learning to input perturbations has raised serious questions about its use in safety-critical domains.
We propose a hybrid Langevin Monte Carlo training approach to mitigate this issue.
We show that our approach can mitigate the trade-off between state-of-the-art performance and adversarial robustness.
arXiv Detail & Related papers (2021-10-29T13:30:42Z) - Multi-Loss Sub-Ensembles for Accurate Classification with Uncertainty
Estimation [1.2891210250935146]
We propose an efficient method for uncertainty estimation in deep neural networks (DNNs) achieving high accuracy.
We keep our inference time relatively low by building on the Deep Sub-Ensembles method.
Our results show improved accuracy on the classification task and competitive results on several uncertainty measures.
arXiv Detail & Related papers (2020-10-05T10:59:11Z) - On the uncertainty of self-supervised monocular depth estimation [52.13311094743952]
Self-supervised paradigms for monocular depth estimation are very appealing since they do not require ground truth annotations at all.
We explore for the first time how to estimate the uncertainty for this task and how this affects depth accuracy.
We propose a novel technique specifically designed for self-supervised approaches.
arXiv Detail & Related papers (2020-05-13T09:00:55Z) - Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep
Learning [70.72363097550483]
In this study, we focus on in-domain uncertainty for image classification.
To provide more insight, we introduce the deep ensemble equivalent score (DEE).
arXiv Detail & Related papers (2020-02-15T23:28:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.