Low-resource Learning with Knowledge Graphs: A Comprehensive Survey
- URL: http://arxiv.org/abs/2112.10006v1
- Date: Sat, 18 Dec 2021 21:40:50 GMT
- Title: Low-resource Learning with Knowledge Graphs: A Comprehensive Survey
- Authors: Jiaoyan Chen and Yuxia Geng and Zhuo Chen and Jeff Z. Pan and Yuan He
and Wen Zhang and Ian Horrocks and Huajun Chen
- Abstract summary: Machine learning methods often rely on a large number of labeled samples for training.
Low-resource learning aims to learn robust prediction models with insufficient resources.
Knowledge Graph (KG) is becoming more and more popular for knowledge representation.
- Score: 34.18863318808325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning methods, especially deep neural networks, have achieved
great success, but many of them rely on a large number of labeled samples for
training. In real-world applications, we often need to address sample shortage
due to, e.g., dynamic contexts with emerging prediction targets and costly
sample annotation. Therefore, low-resource learning, which aims to learn robust
prediction models with insufficient resources (especially training samples), is
now being widely investigated. Among low-resource learning studies, many
utilize auxiliary information in the form of a Knowledge Graph (KG), which is
becoming increasingly popular for knowledge representation, to reduce the
reliance on labeled samples. In this survey, we comprehensively review over 90
papers on KG-aware research for two major low-resource learning settings --
zero-shot learning (ZSL), where the classes to be predicted have never appeared
in training, and few-shot learning (FSL), where only a small number of labeled
samples are available for the new classes. We first introduce the KGs used in
ZSL and FSL studies as well as existing and potential KG construction
solutions, and then systematically categorize and summarize KG-aware ZSL and
FSL methods, dividing them into paradigms such as the mapping-based, the
data-augmentation, the propagation-based, and the optimization-based. We next
present different applications, including both KG-augmented prediction tasks in
Computer Vision and Natural Language Processing and tasks for KG completion,
together with typical evaluation resources for each task. We finally discuss
challenges and future directions on aspects such as new learning and reasoning
paradigms and the construction of high-quality KGs.
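The mapping-based paradigm named in the abstract can be illustrated with a minimal sketch: image features are projected into a class-embedding space derived from a KG, and an unseen class is predicted by nearest-embedding search. Everything below is synthetic and hypothetical (random stand-in embeddings, a fabricated linear feature model); a real method would take class embeddings from a KG embedding model and features from a trained extractor.

```python
# Hypothetical sketch of mapping-based zero-shot learning (ZSL):
# learn a linear map from image features to KG-derived class embeddings
# on seen classes, then classify an unseen class by nearest embedding.
# All data here is synthetic; this is not any specific surveyed method.
import numpy as np

rng = np.random.default_rng(0)
n_seen, n_classes, emb_dim, feat_dim = 20, 21, 16, 32

# Stand-in KG class embeddings (a real pipeline would obtain these from
# a KG embedding model). Classes 0..19 are seen; class 20 is unseen.
class_emb = rng.normal(size=(n_classes, emb_dim))
class_emb /= np.linalg.norm(class_emb, axis=1, keepdims=True)

# Synthetic "image features": a fixed linear transform of each sample's
# class embedding plus noise. Only seen classes appear in training.
M = rng.normal(size=(emb_dim, feat_dim))
y = rng.integers(0, n_seen, size=300)
X = class_emb[y] @ M + 0.05 * rng.normal(size=(300, feat_dim))

# Fit the mapping W by least squares: X @ W should approximate the
# training samples' class embeddings.
W, *_ = np.linalg.lstsq(X, class_emb[y], rcond=None)

# A test sample from the unseen class is mapped into embedding space
# and matched to the most similar class embedding -- zero-shot.
x_test = class_emb[n_seen] @ M + 0.05 * rng.normal(size=feat_dim)
pred = int(np.argmax((x_test @ W) @ class_emb.T))
print(pred)
```

Because the learned map generalizes across the embedding space, the unseen class can be recovered purely from its KG embedding, with no labeled samples of that class.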
Related papers
- Constructing Sample-to-Class Graph for Few-Shot Class-Incremental
Learning [10.111587226277647]
Few-shot class-incremental learning (FSCIL) aims to build a machine learning model that can continually learn new concepts from a few data samples.
In this paper, we propose a Sample-to-Class (S2C) graph learning method for FSCIL.
arXiv Detail & Related papers (2023-10-31T08:38:14Z) - Language models are weak learners [71.33837923104808]
We show that prompt-based large language models can operate effectively as weak learners.
We incorporate these models into a boosting approach, which can leverage the knowledge within the model to outperform traditional tree-based boosting.
Results illustrate the potential for prompt-based LLMs to function not just as few-shot learners themselves, but as components of larger machine learning pipelines.
arXiv Detail & Related papers (2023-06-25T02:39:19Z) - KGA: A General Machine Unlearning Framework Based on Knowledge Gap
Alignment [51.15802100354848]
We propose a general unlearning framework called KGA to induce forgetfulness.
Experiments on large-scale datasets show that KGA yields comprehensive improvements over baselines.
arXiv Detail & Related papers (2023-05-11T02:44:29Z) - RPLKG: Robust Prompt Learning with Knowledge Graph [11.893917358053004]
We propose a new method, robust prompt learning with knowledge graph (RPLKG).
Based on the knowledge graph, we automatically design diverse interpretable and meaningful prompt sets.
RPLKG shows a significant performance improvement compared to zero-shot learning.
arXiv Detail & Related papers (2023-04-21T08:22:58Z) - Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z) - Semi-Supervised and Unsupervised Deep Visual Learning: A Survey [76.2650734930974]
Semi-supervised learning and unsupervised learning offer promising paradigms to learn from an abundance of unlabeled visual data.
We review the recent advanced deep learning algorithms on semi-supervised learning (SSL) and unsupervised learning (UL) for visual recognition from a unified perspective.
arXiv Detail & Related papers (2022-08-24T04:26:21Z) - BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from
Pretrained Language Models [65.51390418485207]
We propose a new approach of harvesting massive KGs of arbitrary relations from pretrained LMs.
With minimal input of a relation definition, the approach efficiently searches in the vast entity pair space to extract diverse accurate knowledge.
We deploy the approach to harvest KGs of over 400 new relations from different LMs.
arXiv Detail & Related papers (2022-06-28T19:46:29Z) - K-ZSL: Resources for Knowledge-driven Zero-shot Learning [19.142028501513362]
External knowledge (a.k.a. side information) plays a critical role in zero-shot learning (ZSL).
In this paper, we propose 5 resources for KG-based research in zero-shot image classification (ZS-IMGC) and zero-shot KG completion (ZS-KGC).
For each resource, we contribute a benchmark and its KG with semantics ranging from text to attributes, and from relational knowledge to logical expressions.
arXiv Detail & Related papers (2021-06-29T01:22:49Z) - Self-supervised on Graphs: Contrastive, Generative, or Predictive [25.679620842010422]
Self-supervised learning (SSL) is emerging as a new paradigm for extracting informative knowledge through well-designed pretext tasks.
We divide existing graph SSL methods into three categories: contrastive, generative, and predictive.
We also summarize the commonly used datasets, evaluation metrics, downstream tasks, and open-source implementations of various algorithms.
arXiv Detail & Related papers (2021-05-16T03:30:03Z) - Graph-based Semi-supervised Learning: A Comprehensive Review [51.26862262550445]
Semi-supervised learning (SSL) has tremendous value in practice due to its ability to utilize both labeled data and unlabeled data.
An important class of SSL methods is to naturally represent data as graphs, which corresponds to graph-based semi-supervised learning (GSSL) methods.
GSSL methods have demonstrated their advantages in various domains due to their uniqueness of structure, the universality of applications, and their scalability to large scale data.
arXiv Detail & Related papers (2021-02-26T05:11:09Z) - Looking back to lower-level information in few-shot learning [4.873362301533825]
We propose the utilization of lower-level, supporting information, namely the feature embeddings of the hidden neural network layers, to improve classification accuracy.
Our experiments on two popular few-shot learning datasets, miniImageNet and tieredImageNet, show that our method can utilize the lower-level information in the network to improve state-of-the-art classification performance.
arXiv Detail & Related papers (2020-05-27T20:32:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.