Prompted Meta-Learning for Few-shot Knowledge Graph Completion
- URL: http://arxiv.org/abs/2505.05684v1
- Date: Thu, 08 May 2025 22:59:42 GMT
- Title: Prompted Meta-Learning for Few-shot Knowledge Graph Completion
- Authors: Han Wu, Jie Yin
- Abstract summary: Few-shot knowledge graph completion (KGC) has attracted significant attention due to its practical applications in real-world scenarios. We propose a novel prompted meta-learning framework that seamlessly integrates meta-semantics with relational information for few-shot KGC. PromptMeta has two key innovations: (1) a meta-semantic prompt pool that captures and consolidates high-level meta-semantics, enabling effective knowledge transfer and adaptation to rare and newly emerging relations.
- Score: 11.880512693272367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot knowledge graph completion (KGC) has attracted significant attention due to its practical applications in real-world scenarios, where new knowledge often emerges with limited available data. While most existing methods for few-shot KGC have predominantly focused on leveraging relational information, the rich semantics inherent in KGs have been largely overlooked. To address this gap, we propose a novel prompted meta-learning (PromptMeta) framework that seamlessly integrates meta-semantics with relational information for few-shot KGC. PromptMeta has two key innovations: (1) a meta-semantic prompt pool that captures and consolidates high-level meta-semantics, enabling effective knowledge transfer and adaptation to rare and newly emerging relations; and (2) a learnable fusion prompt that dynamically combines meta-semantic information with task-specific relational information tailored to different few-shot tasks. Both components are optimized together with model parameters within a meta-learning framework. Extensive experiments on two benchmark datasets demonstrate the effectiveness of our approach.
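For a concrete feel of the two components named in the abstract, below is a minimal PyTorch-style sketch of a meta-semantic prompt pool queried by a task's relation embedding, plus a learnable fusion prompt that gates the two signals. Every module name, dimension, and the attention-based pool lookup are assumptions made for illustration; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptMetaSketch(nn.Module):
    """Hypothetical sketch of the two components described in the abstract:
    (1) a meta-semantic prompt pool queried per task, and
    (2) a learnable fusion prompt that blends meta-semantics with
        task-specific relational information."""

    def __init__(self, dim=100, pool_size=16):
        super().__init__()
        # (1) Meta-semantic prompt pool: a small bank of learnable vectors.
        self.prompt_pool = nn.Parameter(torch.randn(pool_size, dim))
        # (2) Learnable fusion prompt, used to produce a blending gate.
        self.fusion_prompt = nn.Parameter(torch.randn(dim))
        self.gate = nn.Linear(3 * dim, 1)

    def forward(self, task_relation_emb):
        # Attend over the pool with the task's relational embedding as query
        # (assumed lookup rule; the paper's exact retrieval may differ).
        attn = F.softmax(task_relation_emb @ self.prompt_pool.t(), dim=-1)
        meta_semantic = attn @ self.prompt_pool  # consolidated meta-semantics
        # Fuse meta-semantics with the task-specific relational information.
        g = torch.sigmoid(self.gate(torch.cat(
            [meta_semantic, task_relation_emb,
             self.fusion_prompt.expand_as(task_relation_emb)], dim=-1)))
        return g * meta_semantic + (1.0 - g) * task_relation_emb

# Both the pool and the fusion prompt are ordinary parameters, so they can be
# optimized jointly with the rest of the model inside a meta-learning loop.
fused = PromptMetaSketch()(torch.randn(4, 100))  # 4 few-shot tasks, 100-dim relations
```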
Related papers
- NativE: Multi-modal Knowledge Graph Completion in the Wild [51.80447197290866]
We propose NativE, a comprehensive framework for multi-modal knowledge graph completion (MMKGC) in the wild.
NativE introduces a relation-guided dual adaptive fusion module that enables adaptive fusion of arbitrary modalities.
We construct a new benchmark called WildKGC with five datasets to evaluate our method.
arXiv Detail & Related papers (2024-03-28T03:04:00Z)
- Noise-powered Multi-modal Knowledge Graph Representation Framework [52.95468915728721]
The rise of Multi-modal Pre-training highlights the necessity for a unified Multi-Modal Knowledge Graph representation learning framework.
We propose a novel SNAG method that utilizes a Transformer-based architecture equipped with modality-level noise masking.
Our approach achieves SOTA performance across a total of ten datasets, demonstrating its versatility.
arXiv Detail & Related papers (2024-03-11T15:48:43Z)
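The summary only names modality-level noise masking, so the sketch below shows one generic reading of that idea: randomly swapping whole-modality embeddings for Gaussian noise before a Transformer fuses them. The masking rate, noise scale, and tensor layout are assumptions, not details from the paper.

```python
import torch

def modality_noise_mask(modal_embs, mask_prob=0.2, noise_std=1.0):
    """Generic modality-level noise masking (assumed reading of the summary):
    modal_embs has shape (batch, num_modalities, dim); with probability
    `mask_prob` an entire modality's embedding is replaced by Gaussian noise,
    which a downstream Transformer must then learn to tolerate."""
    mask = torch.rand(modal_embs.shape[:2], device=modal_embs.device) < mask_prob
    noise = torch.randn_like(modal_embs) * noise_std
    return torch.where(mask.unsqueeze(-1), noise, modal_embs)

# Example: 8 entities, 3 modalities (structure / image / text), 128-dim each.
noisy = modality_noise_mask(torch.randn(8, 3, 128))
```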
- Meta-Path Learning for Multi-relational Graph Neural Networks [14.422104525197838]
We propose a novel approach that learns meta-paths and meta-path GNNs which are highly accurate while relying on only a small number of informative meta-paths.
Our experimental evaluation shows that the approach manages to correctly identify relevant meta-paths even with a large number of relations.
arXiv Detail & Related papers (2023-09-29T10:12:30Z)
- DAC-MR: Data Augmentation Consistency Based Meta-Regularization for Meta-Learning [55.733193075728096]
We propose a meta-knowledge informed meta-learning (MKIML) framework to improve meta-learning.
As a first step, we integrate meta-knowledge into the meta-objective via an appropriate meta-regularization (MR) objective.
The proposed DAC-MR is expected to learn well-performing meta-models from training tasks with noisy, sparse, or unavailable meta-data.
arXiv Detail & Related papers (2023-05-13T11:01:47Z)
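A minimal, generic version of a data-augmentation-consistency meta-regularizer in the spirit of the DAC-MR summary above: predictions on two augmented views of the same meta-data should agree, so the penalty needs no clean labels. The KL divergence, the augmentation, and the weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def dac_consistency_loss(model, x, augment, weight=1.0):
    """Generic data-augmentation-consistency regularizer: predictions on two
    augmented views of the same (possibly unlabeled) meta-data should agree.
    It can be added to the usual meta-objective and requires no clean labels,
    which is the point when meta-data are noisy, sparse, or unavailable."""
    p1 = F.log_softmax(model(augment(x)), dim=-1)
    p2 = F.softmax(model(augment(x)), dim=-1)
    return weight * F.kl_div(p1, p2, reduction="batchmean")

# Toy usage with a linear model and a noise-jitter augmentation.
toy = torch.nn.Linear(16, 5)
jitter = lambda t: t + 0.1 * torch.randn_like(t)
reg = dac_consistency_loss(toy, torch.randn(32, 16), jitter)
```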
- A Unified Framework with Meta-dropout for Few-shot Learning [25.55782263169028]
In this paper, we utilize the idea of meta-learning to explain two very different streams of few-shot learning.
We propose a simple yet effective strategy named meta-dropout, which is applied to the transferable knowledge generalized from base categories to novel categories.
arXiv Detail & Related papers (2022-10-12T17:05:06Z)
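Taking the summary at face value, the snippet below applies dropout to the features transferred from a base-class-trained encoder before novel-category queries are scored; where exactly the paper places its meta-dropout is an assumption here.

```python
import torch
import torch.nn as nn

class MetaDropoutHead(nn.Module):
    """Assumed reading of 'meta-dropout': during episodic training, dropout is
    applied to the representations carried over from base categories before
    they are used to classify novel-category examples."""

    def __init__(self, encoder, p=0.5):
        super().__init__()
        self.encoder = encoder      # feature extractor trained on base classes
        self.drop = nn.Dropout(p)   # regularizes the transferred knowledge

    def forward(self, x):
        return self.drop(self.encoder(x))

feats = MetaDropoutHead(nn.Linear(32, 64))(torch.randn(5, 32))  # toy episode
```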
- MetaKG: Meta-learning on Knowledge Graph for Cold-start Recommendation [20.650193619161104]
A knowledge graph (KG) consists of a set of interconnected typed entities and their attributes.
Inspired by the success of meta-learning on scarce training samples, we propose a novel framework called MetaKG.
arXiv Detail & Related papers (2022-02-08T13:31:14Z)
- Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
By meta-learning with task interpolation (MLTI), our approach effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels.
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
arXiv Detail & Related papers (2021-06-04T20:15:34Z)
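The interpolation step described above is concrete enough to sketch: mixup-style blending of the features and (one-hot) labels of two sampled tasks to synthesize an additional meta-training task. The Beta-distributed mixing coefficient and the label handling are assumed details the summary does not give.

```python
import torch

def interpolate_tasks(feat_a, labels_a, feat_b, labels_b, alpha=0.5):
    """Mixup-style task interpolation in the spirit of MLTI: sample a mixing
    coefficient, then blend the features and (one-hot) labels of two tasks to
    create an additional synthetic meta-training task."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    feats = lam * feat_a + (1.0 - lam) * feat_b
    labels = lam * labels_a + (1.0 - lam) * labels_b
    return feats, labels

# Toy example: two 5-way tasks with 10 examples of 64-dim features each.
fa, fb = torch.randn(10, 64), torch.randn(10, 64)
ya = torch.nn.functional.one_hot(torch.randint(5, (10,)), 5).float()
yb = torch.nn.functional.one_hot(torch.randint(5, (10,)), 5).float()
new_feats, new_labels = interpolate_tasks(fa, ya, fb, yb)
```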
- Towards Effective Context for Meta-Reinforcement Learning: an Approach based on Contrastive Learning [33.19862944149082]
We propose a novel Meta-RL framework called CCM (Contrastive learning augmented Context-based Meta-RL).
We first focus on the contrastive nature behind different tasks and leverage it to train a compact and sufficient context encoder.
We derive a new information-gain-based objective which aims to collect informative trajectories in a few steps.
arXiv Detail & Related papers (2020-09-29T09:29:18Z)
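One plausible form of the contrastive context-encoder objective mentioned above is an InfoNCE loss that treats context embeddings computed from the same task as positives and those from other tasks as negatives; this pairing rule and the temperature are assumptions based on the summary.

```python
import torch
import torch.nn.functional as F

def task_contrastive_loss(ctx_a, ctx_b, temperature=0.1):
    """Generic InfoNCE loss for a context encoder: ctx_a[i] and ctx_b[i] are
    context embeddings computed from two disjoint batches of transitions of
    the SAME task i, so row i should match column i and nothing else."""
    a = F.normalize(ctx_a, dim=-1)
    b = F.normalize(ctx_b, dim=-1)
    logits = a @ b.t() / temperature                     # (tasks, tasks) similarities
    targets = torch.arange(a.size(0), device=a.device)   # diagonal = positives
    return F.cross_entropy(logits, targets)

loss = task_contrastive_loss(torch.randn(16, 32), torch.randn(16, 32))  # 16 tasks
```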
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe that this simple method achieves performance competitive with state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
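The "simple process" above is typically realized as cosine-similarity nearest-centroid classification on top of an encoder pre-trained with whole-classification; the sketch below shows that evaluation metric, with the temperature and feature dimensions chosen arbitrarily for illustration.

```python
import torch
import torch.nn.functional as F

def cosine_nearest_centroid(support_feats, support_labels, query_feats, n_way, tau=10.0):
    """Few-shot evaluation metric sketch: average each class's support features
    into a centroid, then score queries by temperature-scaled cosine similarity
    to the centroids. Features come from a whole-classification pre-trained
    encoder, which can then be meta-learned on this very metric."""
    centroids = torch.stack([
        support_feats[support_labels == c].mean(dim=0) for c in range(n_way)
    ])
    sims = F.normalize(query_feats, dim=-1) @ F.normalize(centroids, dim=-1).t()
    return tau * sims  # logits over the n_way classes

# Toy 5-way 5-shot episode with 15 queries and 64-dim features.
logits = cosine_nearest_centroid(
    torch.randn(25, 64), torch.arange(5).repeat_interleave(5),
    torch.randn(15, 64), n_way=5)
```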
- Automated Relational Meta-learning [95.02216511235191]
We propose an automated relational meta-learning framework that automatically extracts the cross-task relations and constructs the meta-knowledge graph.
We conduct extensive experiments on 2D toy regression and few-shot image classification, and the results demonstrate the superiority of ARML over state-of-the-art baselines.
arXiv Detail & Related papers (2020-01-03T07:02:25Z)