Few-shot Relational Reasoning via Connection Subgraph Pretraining
- URL: http://arxiv.org/abs/2210.06722v1
- Date: Thu, 13 Oct 2022 04:35:14 GMT
- Title: Few-shot Relational Reasoning via Connection Subgraph Pretraining
- Authors: Qian Huang, Hongyu Ren, Jure Leskovec
- Abstract summary: Connection Subgraph Reasoner (CSR) can make predictions for the target few-shot task directly, without training on a human-curated set of training tasks.
Even a learning-free implementation of our framework performs competitively with existing methods on target few-shot tasks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot knowledge graph (KG) completion task aims to perform inductive
reasoning over the KG: given only a few support triplets of a new relation
$\bowtie$ (e.g., (chop,$\bowtie$,kitchen), (read,$\bowtie$,library)), the goal
is to predict the query triplets of the same unseen relation $\bowtie$, e.g.,
(sleep,$\bowtie$,?). Current approaches cast the problem in a meta-learning
framework, where the model needs to be first jointly trained over many training
few-shot tasks, each being defined by its own relation, so that
learning/prediction on the target few-shot task can be effective. However, in
real-world KGs, curating many training tasks is a challenging ad hoc process.
Here we propose Connection Subgraph Reasoner (CSR), which can make predictions
for the target few-shot task directly without the need for pre-training on the
human curated set of training tasks. The key to CSR is that we explicitly model
a shared connection subgraph between support and query triplets, as inspired by
the principle of eliminative induction. To adapt to a specific KG, we design a
corresponding self-supervised pretraining scheme with the objective of
reconstructing automatically sampled connection subgraphs. Our pretrained model
can then be directly applied to target few-shot tasks without the need for
training few-shot tasks. Extensive experiments on real KGs, including NELL,
FB15K-237, and ConceptNet, demonstrate the effectiveness of our framework: we
show that even a learning-free implementation of CSR can already perform
competitively with existing methods on target few-shot tasks; with pretraining,
CSR can achieve significant gains of up to 52% on the more challenging
inductive few-shot tasks where the entities are also unseen during
(pre)training.
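The eliminative-induction idea above can be illustrated with a toy sketch: given support pairs of an unseen relation, keep only the connection patterns (here simplified to relation paths, rather than full connection subgraphs) that all supports share, then score query candidates by that shared pattern. The KG, relation names, and path-based matching below are illustrative assumptions, not the authors' actual CSR implementation.

```python
# Toy illustration of few-shot KG completion via shared connection patterns.
# The KG and relations are made up; CSR itself learns over full connection
# subgraphs with a GNN, which this sketch does not attempt to reproduce.

# Background KG: set of (head, relation, tail) triplets.
kg = {
    ("chop", "done_with", "knife"), ("knife", "found_in", "kitchen"),
    ("read", "done_with", "book"), ("book", "found_in", "library"),
    ("sleep", "done_with", "bed"), ("bed", "found_in", "bedroom"),
    ("run", "done_with", "shoes"),
}

def relation_paths(head, tail, max_len=2):
    """Enumerate relation sequences connecting head to tail (length <= max_len)."""
    paths = set()
    frontier = [(head, ())]
    for _ in range(max_len):
        next_frontier = []
        for node, rels in frontier:
            for h, r, t in kg:
                if h == node:
                    if t == tail:
                        paths.add(rels + (r,))
                    next_frontier.append((t, rels + (r,)))
        frontier = next_frontier
    return paths

# Support pairs of the unseen relation (e.g. "activity done in place").
support = [("chop", "kitchen"), ("read", "library")]

# Eliminative induction: keep only relation paths shared by ALL supports.
shared = set.intersection(*(relation_paths(h, t) for h, t in support))

# Score a query (sleep, ?, ?) by how well each candidate matches the pattern.
candidates = ["bedroom", "library", "kitchen"]
scores = {c: len(shared & relation_paths("sleep", c)) for c in candidates}
best = max(scores, key=scores.get)  # "bedroom"
```

Note that nothing here is trained on other few-shot tasks: the shared pattern is extracted directly from the support triplets, which is the property the abstract emphasizes.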
Related papers
- $α$VIL: Learning to Leverage Auxiliary Tasks for Multitask Learning [3.809702129519642]
Multitask Learning aims to train a range of (usually related) tasks with the help of a shared model.
It becomes important to estimate the positive or negative influence auxiliary tasks will have on the target.
We propose a novel method called $\alpha$-Variable Learning ($\alpha$VIL) that adjusts task weights dynamically during model training.
arXiv Detail & Related papers (2024-05-13T14:12:33Z) - Episodic-free Task Selection for Few-shot Learning [2.508902852545462]
We propose a novel meta-training framework beyond episodic training.
episodic tasks are not used directly for training, but for evaluating the effectiveness of some selected episodic-free tasks.
In experiments, the training task set contains some promising types, e.g., contrastive learning and classification.
arXiv Detail & Related papers (2024-01-31T10:52:15Z) - ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt [67.8934749027315]
We propose a unified framework for graph hybrid pre-training which injects the task identification and position identification into GNNs.
We also propose a novel pre-training paradigm based on a group of $k$-nearest neighbors.
arXiv Detail & Related papers (2023-10-23T12:11:13Z) - Self-regulating Prompts: Foundational Model Adaptation without Forgetting [112.66832145320434]
We introduce a self-regularization framework for prompting called PromptSRC.
PromptSRC guides the prompts to optimize for both task-specific and task-agnostic general representations.
arXiv Detail & Related papers (2023-07-13T17:59:35Z) - Improving Few-Shot Inductive Learning on Temporal Knowledge Graphs using Confidence-Augmented Reinforcement Learning [24.338098716004485]
TKGC aims to predict the missing links among the entities in a temporal knowledge graph (TKG).
Recently, a new task, i.e., TKG few-shot out-of-graph (OOG) link prediction, is proposed.
We propose a TKGC method FITCARL that combines few-shot learning with reinforcement learning to solve this task.
arXiv Detail & Related papers (2023-04-02T20:05:20Z) - Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships.
arXiv Detail & Related papers (2022-10-12T15:02:04Z) - Effective Adaptation in Multi-Task Co-Training for Unified Autonomous Driving [103.745551954983]
In this paper, we investigate the transfer performance of various types of self-supervised methods, including MoCo and SimCLR, on three downstream tasks.
We find that their performances are sub-optimal or even lag far behind the single-task baseline.
We propose a simple yet effective pretrain-adapt-finetune paradigm for general multi-task training.
arXiv Detail & Related papers (2022-09-19T12:15:31Z) - Pre-training to Match for Unified Low-shot Relation Extraction [37.625078897220305]
Low-shot relation extraction aims to recognize novel relations with very few or even no samples.
Few-shot and zero-shot RE are two representative low-shot RE tasks.
We propose Multi-Choice Matching Networks to unify low-shot relation extraction.
arXiv Detail & Related papers (2022-03-23T08:43:52Z) - MetaICL: Learning to Learn In Context [87.23056864536613]
We introduce MetaICL, a new meta-training framework for few-shot learning where a pretrained language model is tuned to do in-context learning on a large set of training tasks.
We show that MetaICL approaches (and sometimes beats) the performance of models fully finetuned on the target task training data, and outperforms models with nearly 8x more parameters.
arXiv Detail & Related papers (2021-10-29T17:42:08Z) - Video Moment Retrieval via Natural Language Queries [7.611718124254329]
We propose a novel method for video moment retrieval (VMR) that achieves state-of-the-art (SOTA) performance on R@1 metrics.
Our model has a simple architecture, which enables faster training and inference.
arXiv Detail & Related papers (2020-09-04T22:06:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.