Multi-Task Hypergraphs for Semi-supervised Learning using Earth
Observations
- URL: http://arxiv.org/abs/2308.11021v1
- Date: Mon, 21 Aug 2023 20:22:51 GMT
- Title: Multi-Task Hypergraphs for Semi-supervised Learning using Earth
Observations
- Authors: Mihai Pirvu, Alina Marcu, Alexandra Dobrescu, Nabil Belbachir, Marius
Leordeanu
- Abstract summary: We introduce a powerful multi-task hypergraph, in which every node is a task and different paths through the hypergraph reaching a given task become unsupervised teachers.
We apply our model to one of the most important problems of our times, that of Earth Observation, which is highly multi-task and often suffers from missing ground-truth data.
We show that the hypergraph can adapt unsupervised to gradual data distribution shifts and reliably recover missing data through its multi-task self-supervision process.
- Score: 51.344339837501835
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There are many ways of interpreting the world and they are highly
interdependent. We exploit such complex dependencies and introduce a powerful
multi-task hypergraph, in which every node is a task and different paths
through the hypergraph reaching a given task become unsupervised teachers, by
forming ensembles that learn to generate reliable pseudolabels for that task.
Each hyperedge is part of an ensemble teacher for a given task and it is also a
student of the self-supervised hypergraph system. We apply our model to one of
the most important problems of our times, that of Earth Observation, which is
highly multi-task and often suffers from missing ground-truth data. By
performing extensive experiments on the NASA NEO Dataset, spanning a period of
22 years, we demonstrate the value of our multi-task semi-supervised approach
through consistent improvements over strong baselines and recent work. We also show
that the hypergraph can adapt unsupervised to gradual data distribution shifts
and reliably recover, through its multi-task self-supervision process, the
missing data for several observational layers for up to seven years.
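The teacher/student mechanism in the abstract can be illustrated in a toy form: treat each task as a node, treat each edge as a map predicting one task's output from another's, average the edge predictions into an ensemble pseudolabel for a target task, and then train each edge (as a student) toward that pseudolabel. The sketch below is an illustrative assumption only, not the paper's architecture: it uses linear edges, a plain least-squares update, and made-up layer dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 3 "tasks" (observation layers), each a 4-dim vector
# per sample. An edge (i -> j) is a linear map predicting layer j from layer i.
n_tasks, dim, n_samples = 3, 4, 64
edges = {(i, j): rng.normal(scale=0.1, size=(dim, dim))
         for i in range(n_tasks) for j in range(n_tasks) if i != j}

# Unlabeled inputs for every source task; the target task has no ground truth.
x = {i: rng.normal(size=(n_samples, dim)) for i in range(n_tasks)}

def pseudolabel(target):
    """Ensemble teacher: average the predictions of all edges into `target`."""
    preds = [x[i] @ edges[(i, target)] for i in range(n_tasks) if i != target]
    return np.mean(preds, axis=0)

def disagreement(target):
    """Mean squared disagreement between the two edges predicting `target`."""
    preds = [x[i] @ edges[(i, target)] for i in range(n_tasks) if i != target]
    return float(np.mean((preds[0] - preds[1]) ** 2))

def self_supervise(target, lr=0.1, steps=50):
    """Each edge into `target` is also a student: fit it to the pseudolabel."""
    for _ in range(steps):
        y = pseudolabel(target)  # teacher signal, recomputed each step
        for i in range(n_tasks):
            if i == target:
                continue
            w = edges[(i, target)]
            grad = x[i].T @ (x[i] @ w - y) / n_samples  # least-squares gradient
            edges[(i, target)] = w - lr * grad

before = disagreement(2)
self_supervise(target=2)   # task 2 plays the role of a layer with missing data
after = disagreement(2)
```

Because every student is pulled toward the same ensemble average, the edges reach consensus on the unlabeled target task, which is the intuition behind using path ensembles as unsupervised teachers.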
Related papers
- Joint-Task Regularization for Partially Labeled Multi-Task Learning [30.823282043129552]
Multi-task learning has become increasingly popular in the machine learning field, but its practicality is hindered by the need for large, labeled datasets.
We propose Joint-Task Regularization (JTR), an intuitive technique which leverages cross-task relations to simultaneously regularize all tasks in a single joint-task latent space.
arXiv Detail & Related papers (2024-04-02T14:16:59Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks that have little, or even non-overlapping, annotation.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Self-supervised Hypergraphs for Learning Multiple World Interpretations [16.83248115598725]
We present a method for learning multiple scene representations given a small labeled set, by exploiting the relationships between such representations in the form of a multi-task hypergraph.
We show how we can use the hypergraph to improve a powerful pretrained VisTransformer model without any additional labeled data.
We also introduce Dronescapes, a large video dataset captured with UAVs in different complex real-world scenes, with multiple representations, suitable for multi-task learning.
arXiv Detail & Related papers (2023-08-15T07:51:53Z)
- Unsupervised Task Graph Generation from Instructional Video Transcripts [53.54435048879365]
We consider a setting where text transcripts of instructional videos performing a real-world activity are provided.
The goal is to identify the key steps relevant to the task as well as the dependency relationship between these key steps.
We propose a novel task graph generation approach that combines the reasoning capabilities of instruction-tuned language models along with clustering and ranking components.
arXiv Detail & Related papers (2023-02-17T22:50:08Z)
- Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships.
arXiv Detail & Related papers (2022-10-12T15:02:04Z)
- Learning Multiple Dense Prediction Tasks from Partially Annotated Data [41.821234589075445]
We look at joint learning of multiple dense prediction tasks on partially annotated data, which we call multi-task partially-supervised learning.
We propose a multi-task training procedure that successfully leverages task relations to supervise its multi-task learning when data is partially annotated.
We rigorously demonstrate that our proposed method effectively exploits the images with unlabelled tasks and outperforms existing semi-supervised learning approaches and related methods on three standard benchmarks.
arXiv Detail & Related papers (2021-11-29T19:03:12Z)
- Understanding the World Through Action [91.3755431537592]
I will argue that a general, principled, and powerful framework for utilizing unlabeled data can be derived from reinforcement learning.
I will discuss how such a procedure is more closely aligned with potential downstream tasks.
arXiv Detail & Related papers (2021-10-24T22:33:52Z)
- Unsupervised Domain Adaptation through Iterative Consensus Shift in a Multi-Task Graph [22.308239339243272]
Babies learn with very little supervision by observing the surrounding world.
Our proposed multi-task graph, with consensus shift learning, relies only on pseudo-labels provided by expert models.
We validate our key contributions experimentally and demonstrate strong performance on the Replica dataset, superior to the very few published methods on multi-task learning with minimal supervision.
arXiv Detail & Related papers (2021-03-26T11:57:42Z)
- HyperGrid: Efficient Multi-Task Transformers with Grid-wise Decomposable Hyper Projections [96.64246471034195]
We propose HyperGrid, a new approach for highly effective multi-task learning.
Our method helps bridge the gap between fine-tuning and multi-task learning approaches.
arXiv Detail & Related papers (2020-07-12T02:49:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.