See Yourself in Others: Attending Multiple Tasks for Own Failure Detection
- URL: http://arxiv.org/abs/2110.02549v1
- Date: Wed, 6 Oct 2021 07:42:57 GMT
- Title: See Yourself in Others: Attending Multiple Tasks for Own Failure Detection
- Authors: Boyang Sun, Jiaxu Xing, Hermann Blum, Roland Siegwart, Cesar Cadena
- Abstract summary: We propose an attention-based failure detection approach that exploits the correlations among multiple tasks.
The proposed framework infers task failures by evaluating the individual predictions across multiple visual perception tasks for different regions of an image.
Our proposed framework generates more accurate estimations of the prediction error for the different tasks' predictions.
- Score: 28.787334666116518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous robots deal with unexpected scenarios in real environments. Given
input images, various visual perception tasks can be performed, e.g., semantic
segmentation, depth estimation, and normal estimation. These different tasks
provide rich information for the whole robotic perception system. All tasks
have their own characteristics while sharing some latent correlations. However,
some of the task predictions may suffer from unreliability when dealing with
complex scenes and anomalies. We propose an attention-based failure detection
approach that exploits the correlations among multiple tasks. The proposed
framework infers task failures by evaluating the individual predictions across
multiple visual perception tasks for different regions of an image. The
evaluations are formulated as an attention network supervised by multi-task
uncertainty estimation and the corresponding prediction errors. Our proposed
framework generates more accurate estimations of the prediction error for the
different tasks' predictions.
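As a rough, hypothetical sketch of this idea (not the authors' implementation), the per-region failure score can be read as an attention-weighted fusion of per-task uncertainty maps. The array shapes, the uniform-attention toy example, and the `failure_scores` helper below are all illustrative assumptions; in the paper the attention weights come from a network supervised by the tasks' prediction errors.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def failure_scores(task_uncertainties, attention_logits):
    """Fuse per-task, per-region uncertainty maps into one failure score per region.

    task_uncertainties: (T, R) array - uncertainty of each of T tasks
                        in each of R image regions (illustrative stand-in
                        for the paper's multi-task uncertainty estimates).
    attention_logits:   (T, R) array - cross-task attention logits (in the
                        paper, produced by a supervised attention network;
                        here simply passed in).
    Returns: (R,) attention-weighted failure estimate per region.
    """
    weights = softmax(attention_logits, axis=0)  # normalize over tasks
    return (weights * task_uncertainties).sum(axis=0)

# Toy example: 3 tasks (e.g., segmentation, depth, normals), 4 regions.
unc = np.array([[0.1, 0.9, 0.2, 0.4],
                [0.2, 0.8, 0.1, 0.5],
                [0.1, 0.7, 0.3, 0.6]])
att = np.zeros_like(unc)        # zero logits -> uniform attention
scores = failure_scores(unc, att)
```

With uniform attention this reduces to the per-region mean uncertainty; a trained attention network would instead upweight the tasks whose uncertainty best predicts the actual error in each region.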
Related papers
- Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for
Specialized Tasks [19.945932368701722]
This paper conducts a comprehensive evaluation of numerous uncertainty estimators across diverse tasks on ImageNet.
We find that, despite promising theoretical endeavors, disentanglement is not yet achieved in practice.
We reveal which uncertainty estimators excel at which specific tasks, providing insights for practitioners.
arXiv Detail & Related papers (2024-02-29T18:52:56Z)
- A Dynamic Feature Interaction Framework for Multi-task Visual Perception [100.98434079696268]
We devise an efficient unified framework to solve multiple common perception tasks.
These tasks include instance segmentation, semantic segmentation, monocular 3D detection, and depth estimation.
Our proposed framework, termed D2BNet, demonstrates a unique approach to parameter-efficient predictions for multi-task perception.
arXiv Detail & Related papers (2023-06-08T09:24:46Z)
- Leveraging sparse and shared feature activations for disentangled representation learning [112.22699167017471]
We propose to leverage knowledge extracted from a diversified set of supervised tasks to learn a common disentangled representation.
We validate our approach on six real world distribution shift benchmarks, and different data modalities.
arXiv Detail & Related papers (2023-04-17T01:33:24Z)
- Task Formulation Matters When Learning Continually: A Case Study in Visual Question Answering [58.82325933356066]
Continual learning aims to train a model incrementally on a sequence of tasks without forgetting previous knowledge.
We present a detailed study of how different settings affect performance for Visual Question Answering.
arXiv Detail & Related papers (2022-09-30T19:12:58Z)
- On Steering Multi-Annotations per Sample for Multi-Task Learning [79.98259057711044]
The study of multi-task learning has drawn great attention from the community.
Despite the remarkable progress, the challenge of optimally learning different tasks simultaneously remains to be explored.
Previous works attempt to modify the gradients from different tasks. Yet these methods rely on subjective assumptions about the relationships between tasks, and the modified gradients may be less accurate.
In this paper, we introduce Stochastic Task Allocation (STA), a mechanism that addresses this issue through a task allocation approach in which each sample is randomly allocated a subset of tasks.
For further progress, we propose Interleaved Stochastic Task Allocation (ISTA) to iteratively allocate all
arXiv Detail & Related papers (2022-03-06T11:57:18Z)
- Detecting Adversarial Perturbations in Multi-Task Perception [32.9951531295576]
We propose a novel adversarial perturbation detection scheme based on multi-task perception of complex vision tasks.
Adversarial perturbations are detected by inconsistencies between the extracted edges of the input image, the depth output, and the segmentation output.
We show that, assuming a 5% false positive rate, up to 100% of images are correctly detected as adversarially perturbed, depending on the strength of the perturbation.
arXiv Detail & Related papers (2022-03-02T15:25:17Z)
- Self-Supervision by Prediction for Object Discovery in Videos [62.87145010885044]
In this paper, we use the prediction task as self-supervision and build a novel object-centric model for image sequence representation.
Our framework can be trained without the help of any manual annotation or pretrained network.
Initial experiments confirm that the proposed pipeline is a promising step towards object-centric video prediction.
arXiv Detail & Related papers (2021-03-09T19:14:33Z)
- A Bayesian Evaluation Framework for Subjectively Annotated Visual Recognition Tasks [0.0]
We propose a framework for evaluating the uncertainty that comes from the predictor's internal structure.
The framework is successfully applied to four image classification tasks that use subjective human judgements.
arXiv Detail & Related papers (2020-06-20T18:35:33Z)
- Robust Learning Through Cross-Task Consistency [92.42534246652062]
We propose a broadly applicable and fully computational method for augmenting learning with Cross-Task Consistency.
We observe that learning with cross-task consistency leads to more accurate predictions and better generalization to out-of-distribution inputs.
arXiv Detail & Related papers (2020-06-07T09:24:33Z)
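The cross-task consistency idea in the last entry can be illustrated with a minimal, hypothetical penalty: map one task's prediction into another task's output domain and penalize disagreement with that task's direct prediction. The depth-to-disparity mapping and the L1 form below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def consistency_penalty(pred_a, pred_b, a_to_b):
    """Minimal cross-task consistency penalty (illustrative):
    map task A's prediction into task B's output domain via `a_to_b`
    and take the mean L1 disagreement with B's direct prediction."""
    return np.abs(a_to_b(pred_a) - pred_b).mean()

# Toy example: "task A" predicts depth, a_to_b converts depth to
# disparity (1/depth), "task B" predicts disparity directly.
depth = np.array([1.0, 2.0, 4.0])
disparity = np.array([1.0, 0.5, 0.25])
loss = consistency_penalty(depth, disparity, lambda d: 1.0 / d)  # 0 when consistent
```

A nonzero penalty flags regions where the two tasks contradict each other, which is the same consistency signal the paper exploits for robustness and out-of-distribution generalization.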
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.