Unsupervised Domain Adaptation through Iterative Consensus Shift in a
Multi-Task Graph
- URL: http://arxiv.org/abs/2103.14417v1
- Date: Fri, 26 Mar 2021 11:57:42 GMT
- Title: Unsupervised Domain Adaptation through Iterative Consensus Shift in a
Multi-Task Graph
- Authors: Emanuela Haller, Elena Burceanu, Marius Leordeanu
- Abstract summary: Babies learn with very little supervision by observing the surrounding world.
Our proposed multi-task graph, with consensus shift learning, relies only on pseudo-labels provided by expert models.
We validate our key contributions experimentally and demonstrate strong performance on the Replica dataset, superior to the very few published methods on multi-task learning with minimal supervision.
- Score: 22.308239339243272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Babies learn with very little supervision by observing the surrounding world.
They synchronize the feedback from all their senses and learn to maintain
consistency and stability among their internal states. Such observations
inspired recent works in multi-task and multi-modal learning, but existing
methods rely on expensive manual supervision. In contrast, our proposed
multi-task graph, with consensus shift learning, relies only on pseudo-labels
provided by expert models. In our graph, every node represents a task, and
every edge learns to transform one input node into another. Once initialized,
the graph learns by itself on virtually any novel target domain. An adaptive
selection mechanism finds consensus among multiple paths reaching a given node
and establishes the pseudo-ground truth at that node. Such pseudo-labels, given
by ensemble pathways in the graph, are used during the next learning iteration
when single edges distill this distributed knowledge. We validate our key
contributions experimentally and demonstrate strong performance on the Replica
dataset, superior to the very few published methods on multi-task learning with
minimal supervision.
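To make the mechanism concrete, below is a minimal sketch of one consensus-shift iteration on a toy three-task graph. It is a hypothetical rendering, not the authors' code: EdgeModel stands in for the paper's task-translation networks, and an element-wise median over one- and two-hop paths stands in for the adaptive selection mechanism.
```python
# Hedged sketch of iterative consensus shift (illustrative, not the authors' code).
import torch
import torch.nn as nn

tasks = ["rgb", "depth", "normals"]

class EdgeModel(nn.Module):
    """Stand-in edge network: maps one task's representation to another's."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.net(x)

edges = {(s, d): EdgeModel() for s in tasks for d in tasks if s != d}

def consensus(node, inputs):
    """Pseudo-ground truth at `node`: element-wise median over every one- and
    two-hop path reaching it (a simple proxy for the adaptive selection)."""
    preds = []
    for s in tasks:
        if s == node:
            continue
        preds.append(edges[(s, node)](inputs[s]))  # direct edge
        for m in tasks:
            if m not in (s, node):                 # two-hop path s -> m -> node
                preds.append(edges[(m, node)](edges[(s, m)](inputs[s])))
    return torch.median(torch.stack(preds), dim=0).values

def iteration(batch, lr=1e-3):
    """One learning iteration: freeze the ensemble's consensus, then let every
    single edge distill that distributed knowledge."""
    with torch.no_grad():
        pseudo = {t: consensus(t, batch) for t in tasks}
    params = [p for e in edges.values() for p in e.parameters()]
    opt = torch.optim.Adam(params, lr=lr)
    loss = sum(nn.functional.mse_loss(edges[(s, d)](batch[s]), pseudo[d])
               for (s, d) in edges)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

batch = {t: torch.randn(8, 64) for t in tasks}  # stand-ins for expert pseudo-labels
print(iteration(batch))
```
Repeating `iteration` over batches from the target domain corresponds, loosely, to the graph learning by itself after initialization.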
Related papers
- Meta-GPS++: Enhancing Graph Meta-Learning with Contrastive Learning and Self-Training [22.473322546354414]
We propose a novel framework for few-shot node classification called Meta-GPS++.
We first adopt an efficient method to learn discriminative node representations on homophilic and heterophilic graphs.
We also apply self-training to extract valuable information from unlabeled nodes.
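The self-training step mentioned above can be pictured as a generic confidence-thresholded pseudo-labeling loop. The sketch below shows only that generic loop (classifier, threshold, and round count are placeholder assumptions); Meta-GPS++ itself builds this into a meta-learning framework.
```python
# Generic self-training loop (illustrative; not Meta-GPS++'s implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(model, X_lab, y_lab, X_unlab, threshold=0.9, rounds=3):
    """Iteratively move confidently pseudo-labeled nodes into the training set."""
    X, y = X_lab, y_lab
    for _ in range(rounds):
        model.fit(X, y)
        if len(X_unlab) == 0:
            break
        proba = model.predict_proba(X_unlab)
        keep = proba.max(axis=1) >= threshold      # keep confident predictions only
        if not keep.any():
            break
        X = np.vstack([X, X_unlab[keep]])
        y = np.concatenate([y, proba[keep].argmax(axis=1)])
        X_unlab = X_unlab[~keep]
    return model

rng = np.random.default_rng(0)
clf = self_train(LogisticRegression(max_iter=200),
                 rng.normal(size=(20, 8)), rng.integers(0, 2, size=20),
                 rng.normal(size=(200, 8)))
```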
arXiv Detail & Related papers (2024-07-20T03:05:12Z)
- Multi-Task Hypergraphs for Semi-supervised Learning using Earth Observations [51.344339837501835]
We introduce a powerful multi-task hypergraph, in which every node is a task and different paths through the hypergraph reaching a given task become unsupervised teachers.
We apply our model to one of the most important problems of our time, Earth Observation, which is highly multi-task and often suffers from missing ground-truth data.
We show that the hypergraph can adapt unsupervised to gradual data distribution shifts and reliably recover, through its multi-task self-supervision process.
arXiv Detail & Related papers (2023-08-21T20:22:51Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
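Under stated assumptions (a placeholder backbone, LoRA as the PEFT method, mean pooling over tokens), the two-stage recipe looks roughly like the sketch below; it illustrates the idea rather than SimTeG's exact configuration.
```python
# Hedged sketch of a SimTeG-style pipeline: PEFT fine-tune an LM, then reuse
# its last hidden states as node embeddings for a downstream GNN.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

name = "distilbert-base-uncased"  # placeholder backbone
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=7)

# Stage 1: parameter-efficient fine-tuning (LoRA) on the downstream task;
# training on (node text, node label) pairs is omitted here.
model = get_peft_model(model, LoraConfig(task_type=TaskType.SEQ_CLS, r=8))

# Stage 2: node embeddings from the last hidden states of the tuned LM.
@torch.no_grad()
def embed(texts):
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = model(**enc, output_hidden_states=True)
    h = out.hidden_states[-1]                      # (batch, seq_len, dim)
    mask = enc["attention_mask"].unsqueeze(-1)
    return (h * mask).sum(1) / mask.sum(1)         # mean-pool over real tokens

node_vecs = embed(["text of node A", "text of node B"])  # then feed to a GNN
```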
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- STERLING: Synergistic Representation Learning on Bipartite Graphs [78.86064828220613]
A fundamental challenge of bipartite graph representation learning is how to extract node embeddings.
Most recent bipartite graph SSL methods are based on contrastive learning, which learns embeddings by discriminating between positive and negative node pairs.
We introduce a novel synergistic representation learning model (STERLING) to learn node embeddings without negative node pairs.
arXiv Detail & Related papers (2023-01-25T03:21:42Z)
- Transductive Linear Probing: A Novel Framework for Few-Shot Node Classification [56.17097897754628]
We show that transductive linear probing with self-supervised graph contrastive pretraining can outperform state-of-the-art fully supervised meta-learning-based methods under the same protocol.
We hope this work can shed new light on few-shot node classification problems and foster future research on learning from scarcely labeled instances on graphs.
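The probing stage itself reduces to fitting a linear classifier on frozen embeddings. In this minimal sketch, `Z` is a random placeholder standing in for the output of a contrastively pretrained graph encoder.
```python
# Linear probing on frozen node embeddings (Z is a placeholder for real
# self-supervised embeddings; episode construction is simplified).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 128))     # frozen embeddings for all nodes
y = rng.integers(0, 5, size=1000)    # labels for a 5-class node task

support = rng.choice(1000, size=25, replace=False)  # few labeled nodes
probe = LogisticRegression(max_iter=1000).fit(Z[support], y[support])
query_Z = np.delete(Z, support, axis=0)
query_y = np.delete(y, support, axis=0)
print(f"query accuracy: {probe.score(query_Z, query_y):.3f}")
```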
arXiv Detail & Related papers (2022-12-11T21:10:34Z)
- Deep Contrastive Learning for Multi-View Network Embedding [20.035449838566503]
Multi-view network embedding aims at projecting nodes in the network to low-dimensional vectors.
Most contrastive-learning-based methods rely on high-quality graph embeddings.
We design a novel node-to-node Contrastive learning framework for Multi-view network Embedding (CREME).
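For background, node-to-node contrastive objectives typically instantiate the standard InfoNCE/NT-Xent loss, in which the same node's embeddings from two graph views attract and all other nodes repel. The sketch below shows that generic loss, not necessarily CREME's exact objective.
```python
# Generic node-to-node InfoNCE loss between two views of the same graph.
import torch
import torch.nn.functional as F

def node_infonce(z1, z2, tau=0.5):
    """z1, z2: (N, d) embeddings of the same N nodes from two views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau               # (N, N) cross-view similarities
    labels = torch.arange(z1.size(0))        # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

loss = node_infonce(torch.randn(32, 64), torch.randn(32, 64))
```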
arXiv Detail & Related papers (2021-08-16T06:29:18Z)
- ROD: Reception-aware Online Distillation for Sparse Graphs [23.55530524584572]
We propose ROD, a novel reception-aware online knowledge distillation approach for sparse graph learning.
We design three supervision signals for ROD: multi-scale reception-aware graph knowledge, task-based supervision, and rich distilled knowledge.
Our approach has been extensively evaluated on 9 datasets and a variety of graph-based tasks.
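A rough sketch, under stated assumptions, of how such signals could be combined: per-scale student heads read k-step-smoothed features (a crude proxy for reception-aware multi-scale knowledge) and are trained with task labels plus distillation from their own ensemble. Architecture, scales, and weights are illustrative, not ROD's actual design.
```python
# Illustrative combination of task supervision and online ensemble distillation
# across receptive scales (not ROD's implementation).
import torch
import torch.nn.functional as F

def propagate(x, adj, k):
    """k-step feature smoothing: a stand-in for multi-scale reception."""
    for _ in range(k):
        x = adj @ x
    return x

def rod_style_loss(students, x, adj, y, scales=(0, 1, 2), alpha=0.5, tau=2.0):
    logits = [s(propagate(x, adj, k)) for s, k in zip(students, scales)]
    teacher = torch.stack(logits).mean(0).detach()        # online ensemble teacher
    task = sum(F.cross_entropy(l, y) for l in logits)     # task-based supervision
    distill = sum(F.kl_div(F.log_softmax(l / tau, dim=1), # distilled knowledge
                           F.softmax(teacher / tau, dim=1),
                           reduction="batchmean") for l in logits)
    return task + alpha * distill

students = [torch.nn.Linear(16, 4) for _ in range(3)]     # one head per scale
x, y = torch.randn(10, 16), torch.randint(0, 4, (10,))
adj = torch.eye(10)  # placeholder for a normalized adjacency matrix
print(rod_style_loss(students, x, adj, y).item())
```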
arXiv Detail & Related papers (2021-07-25T11:55:47Z)
- Can Semantic Labels Assist Self-Supervised Visual Representation Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z)
- A Survey on Contrastive Self-supervised Learning [0.0]
Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets.
Contrastive learning has recently become a dominant component in self-supervised learning methods for computer vision, natural language processing (NLP), and other domains.
This paper provides an extensive review of self-supervised methods that follow the contrastive approach.
arXiv Detail & Related papers (2020-10-31T21:05:04Z)
- Semi-Supervised Learning for Multi-Task Scene Understanding by Neural Graph Consensus [23.528834793031894]
We address the problem of semi-supervised learning in the context of multiple visual interpretations of the world.
We show how prediction of different representations could be effectively learned through self-supervised consensus in our graph.
We also compare to state-of-the-art methods for multi-task and semi-supervised learning and show superior performance.
arXiv Detail & Related papers (2020-10-02T16:30:49Z)
- Unsupervised Differentiable Multi-aspect Network Embedding [52.981277420394846]
We propose a novel end-to-end framework for multi-aspect network embedding, called asp2vec.
Our proposed framework can be readily extended to heterogeneous networks.
arXiv Detail & Related papers (2020-06-07T19:26:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.