Robust Learning Through Cross-Task Consistency
- URL: http://arxiv.org/abs/2006.04096v1
- Date: Sun, 7 Jun 2020 09:24:33 GMT
- Title: Robust Learning Through Cross-Task Consistency
- Authors: Amir Zamir, Alexander Sax, Teresa Yeo, Oğuzhan Kar, Nikhil Cheerla, Rohan Suri, Zhangjie Cao, Jitendra Malik, Leonidas Guibas
- Abstract summary: We propose a broadly applicable and fully computational method for augmenting learning with Cross-Task Consistency.
We observe that learning with cross-task consistency leads to more accurate predictions and better generalization to out-of-distribution inputs.
- Score: 92.42534246652062
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual perception entails solving a wide set of tasks, e.g., object
detection, depth estimation, etc. The predictions made for multiple tasks from
the same image are not independent, and therefore, are expected to be
consistent. We propose a broadly applicable and fully computational method for
augmenting learning with Cross-Task Consistency. The proposed formulation is
based on inference-path invariance over a graph of arbitrary tasks. We observe
that learning with cross-task consistency leads to more accurate predictions
and better generalization to out-of-distribution inputs. This framework also
leads to an informative unsupervised quantity, called Consistency Energy, based
on measuring the intrinsic consistency of the system. Consistency Energy
correlates well with the supervised error (r=0.67), thus it can be employed as
an unsupervised confidence metric as well as for detection of
out-of-distribution inputs (ROC-AUC=0.95). The evaluations are performed on
multiple datasets, including Taskonomy, Replica, CocoDoom, and ApolloScape, and
they benchmark cross-task consistency versus various baselines including
conventional multi-task learning, cycle consistency, and analytical
consistency.
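The inference-path invariance idea and the Consistency Energy admit a compact sketch. The PyTorch code below is a minimal illustration only, not the authors' released implementation; the network handles (f_x_to_y1, g_y1_to_y2), the use of an L1 distance, and the weight lambda_c are assumptions made for this sketch.

```python
# Minimal sketch of the "triangle" form of cross-task consistency and a
# rough proxy for Consistency Energy. Illustrative only: module names,
# the L1 distance, and lambda_c are assumptions, not the paper's code.
import torch
import torch.nn.functional as F


def cross_task_consistency_loss(x, y1_gt, y2_gt, f_x_to_y1, g_y1_to_y2, lambda_c=1.0):
    """Standard loss on the target domain y1 plus a consistency term:
    mapping the y1 prediction onward to an auxiliary domain y2 should agree
    with the y2 label (path x -> y1 -> y2 vs. the direct label for y2)."""
    y1_pred = f_x_to_y1(x)                        # network being trained
    direct_loss = F.l1_loss(y1_pred, y1_gt)
    y2_via_y1 = g_y1_to_y2(y1_pred)               # cross-task mapping (typically frozen)
    consistency_loss = F.l1_loss(y2_via_y1, y2_gt)
    return direct_loss + lambda_c * consistency_loss


def consistency_energy(path_predictions):
    """Rough proxy for Consistency Energy: mean pairwise discrepancy among
    predictions of the same target reached via different inference paths.
    Higher values suggest lower confidence or an out-of-distribution input."""
    preds = list(path_predictions)
    pairs = [(i, j) for i in range(len(preds)) for j in range(i + 1, len(preds))]
    if not pairs:
        return torch.tensor(0.0)
    return torch.stack([F.l1_loss(preds[i], preds[j]) for i, j in pairs]).mean()
```

In such a setup, g_y1_to_y2 would typically be a pretrained network with frozen parameters; gradients still flow through its input, so the consistency term shapes f_x_to_y1. The energy would be standardized over a reference set before being used as a confidence score or thresholded for out-of-distribution detection.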
Related papers
- Optimizing Multi-Task Learning for Accurate Spacecraft Pose Estimation [0.0]
This paper explores the impact of different tasks within a multi-task learning framework for satellite pose estimation using monocular images.
By integrating tasks such as direct pose estimation, keypoint prediction, object localization, and segmentation into a single network, the study aims to evaluate the reciprocal influence between tasks.
arXiv Detail & Related papers (2024-10-16T15:44:15Z)
- Exploring Correlations of Self-Supervised Tasks for Graphs [6.977921096191354]
This paper aims to provide a fresh understanding of graph self-supervised learning based on task correlations.
We evaluate the performance of the representations trained by one specific task on other tasks and define correlation values to quantify task correlations.
We propose Graph Task Correlation Modeling (GraphTCM) to illustrate the task correlations and utilize it to enhance graph self-supervised training.
arXiv Detail & Related papers (2024-05-07T12:02:23Z)
- Distributed Continual Learning with CoCoA in High-dimensional Linear Regression [0.0]
We consider estimation under scenarios where the signals of interest exhibit change of characteristics over time.
In particular, we consider the continual learning problem where different tasks, e.g., data with different distributions, arrive sequentially.
We consider the well-established distributed learning algorithm COCOA, which distributes the model parameters and the corresponding features over the network.
arXiv Detail & Related papers (2023-12-04T10:35:46Z)
- Towards Distribution-Agnostic Generalized Category Discovery [51.52673017664908]
Data imbalance and open-ended distribution are intrinsic characteristics of the real visual world.
We propose a Self-Balanced Co-Advice contrastive framework (BaCon).
BaCon consists of a contrastive-learning branch and a pseudo-labeling branch, working collaboratively to provide interactive supervision to resolve the DA-GCD task.
arXiv Detail & Related papers (2023-10-02T17:39:58Z)
- Exposing and Addressing Cross-Task Inconsistency in Unified Vision-Language Models [80.23791222509644]
Inconsistent AI models are considered brittle and untrustworthy by human users.
We find that state-of-the-art vision-language models suffer from a surprisingly high degree of inconsistent behavior across tasks.
We propose a rank correlation-based auxiliary training objective, computed over large automatically created cross-task contrast sets.
arXiv Detail & Related papers (2023-03-28T16:57:12Z)
- Composite Learning for Robust and Effective Dense Predictions [81.2055761433725]
Multi-task learning promises better model generalization on a target task by jointly optimizing it with an auxiliary task.
We find that jointly training a dense prediction (target) task with a self-supervised (auxiliary) task can consistently improve the performance of the target task, while eliminating the need for labeling auxiliary tasks.
arXiv Detail & Related papers (2022-10-13T17:59:16Z)
- Continual Object Detection via Prototypical Task Correlation Guided Gating Mechanism [120.1998866178014]
We present a flexible framework for continual object detection via pRotOtypical taSk corrElaTion guided gaTing mechAnism (ROSETTA).
Concretely, a unified framework is shared by all tasks while task-aware gates are introduced to automatically select sub-models for specific tasks.
Experiments on COCO-VOC, KITTI-Kitchen, class-incremental detection on VOC and sequential learning of four tasks show that ROSETTA yields state-of-the-art performance.
arXiv Detail & Related papers (2022-05-06T07:31:28Z)
- See Yourself in Others: Attending Multiple Tasks for Own Failure Detection [28.787334666116518]
We propose an attention-based failure detection approach by exploiting the correlations among multiple tasks.
The proposed framework infers task failures by evaluating the individual prediction, across multiple visual perception tasks for different regions in an image.
Our proposed framework generates more accurate estimations of the prediction error for the different tasks' predictions.
arXiv Detail & Related papers (2021-10-06T07:42:57Z)
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences arising from its use.