Coarse-to-fine Knowledge Graph Domain Adaptation based on
Distantly-supervised Iterative Training
- URL: http://arxiv.org/abs/2211.02849v1
- Date: Sat, 5 Nov 2022 08:16:38 GMT
- Title: Coarse-to-fine Knowledge Graph Domain Adaptation based on
Distantly-supervised Iterative Training
- Authors: Hongmin Cai, Wenxiong Liao, Zhengliang Liu, Xiaoke Huang, Yiyang
Zhang, Siqi Ding, Sheng Li, Quanzheng Li, Tianming Liu, Xiang Li
- Abstract summary: We propose an integrated framework for adapting and re-learning knowledge graphs.
No manual data annotation is required to train the model.
We introduce a novel iterative training strategy to facilitate the discovery of domain-specific named entities and triples.
- Score: 12.62127290494378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern supervised learning neural network models require a large amount of
manually labeled data, which makes the construction of domain-specific
knowledge graphs time-consuming and labor-intensive. In parallel, although
there has been much research on named entity recognition and relation
extraction based on distantly supervised learning, constructing a
domain-specific knowledge graph from large collections of textual data without
manual annotations is still an urgent problem to be solved. In response, we
propose an integrated framework for adapting and re-learning knowledge graphs
from one coarse domain (biomedical) to a finer-grained domain (oncology). In
this framework, we apply distant supervision to cross-domain knowledge graph
adaptation. Consequently, no manual data annotation is required to train the
model. We introduce a novel iterative training strategy to facilitate the
discovery of domain-specific named entities and triples. Experimental results
indicate that the proposed framework can perform domain adaptation and
knowledge graph construction efficiently.
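The core ideas of the abstract — dictionary-based distant supervision plus an iterative loop that folds newly discovered entities back into the supervision source — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, the `discover` callback, and the seed dictionary are all hypothetical.

```python
import re

def distant_label(sentence, entity_dict):
    """Distant supervision: tag entity mentions by dictionary matching.

    Returns a list of (surface form, entity type) pairs found in the
    sentence. Longest surface forms are tried first so that e.g.
    "lung cancer" is matched before its substring "cancer".
    """
    matches = []
    for surface in sorted(entity_dict, key=len, reverse=True):
        pattern = r"\b" + re.escape(surface) + r"\b"
        if re.search(pattern, sentence, flags=re.IGNORECASE):
            matches.append((surface, entity_dict[surface]))
    return matches

def iterative_expand(corpus, entity_dict, discover, rounds=3):
    """One possible shape of the iterative training strategy: label the
    corpus with the current dictionary, let a model propose new
    domain-specific entities, and merge high-confidence discoveries back
    into the dictionary until no new entities are found."""
    for _ in range(rounds):
        labeled = [(s, distant_label(s, entity_dict)) for s in corpus]
        new_entities = discover(labeled)   # model-driven discovery step
        before = len(entity_dict)
        entity_dict.update(new_entities)
        if len(entity_dict) == before:     # converged: nothing new found
            break
    return entity_dict
```

For example, seeding with a coarse biomedical dictionary and letting the discovery step contribute oncology-specific terms would grow the dictionary across iterations:

```python
seed = {"egfr": "Gene", "lung cancer": "Disease"}
distant_label("EGFR mutations are common in lung cancer.", seed)
# → [('lung cancer', 'Disease'), ('egfr', 'Gene')]
```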
Related papers
- Not all tickets are equal and we know it: Guiding pruning with
domain-specific knowledge [26.950765295157897]
We propose DASH, which guides pruning by available domain-specific structural information.
In the context of learning dynamic gene regulatory network models, we show that DASH combined with existing general knowledge on interaction partners provides data-specific insights aligned with biology.
arXiv Detail & Related papers (2024-03-05T23:02:55Z)
- Exploring In-Context Learning Capabilities of Foundation Models for
Generating Knowledge Graphs from Text [3.114960935006655]
This paper aims to improve the state of the art of automatic construction and completion of knowledge graphs from text.
In this context, one emerging paradigm is in-context learning where a language model is used as it is with a prompt.
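The in-context learning paradigm described above amounts to building a few-shot prompt and letting the language model emit triples directly. A schematic prompt builder is sketched below; the instruction wording and the triple format are illustrative assumptions, not taken from the paper.

```python
def kg_extraction_prompt(text, examples):
    """Build a few-shot prompt asking a language model to extract
    (head, relation, tail) triples from text. `examples` is a list of
    (example_text, example_triples_string) demonstration pairs."""
    shots = "\n\n".join(
        f"Text: {ex_text}\nTriples: {triples}" for ex_text, triples in examples
    )
    return (
        "Extract knowledge-graph triples as (head, relation, tail) tuples.\n\n"
        f"{shots}\n\nText: {text}\nTriples:"
    )
```

The resulting string would be passed as-is to any language model; no fine-tuning is involved, which is the point of the in-context paradigm.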
arXiv Detail & Related papers (2023-05-15T17:10:19Z)
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph
Construction [57.854498238624366]
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weak-supervised data as a prompt for each sample.
arXiv Detail & Related papers (2022-10-19T16:40:28Z)
- Knowledge Graph Anchored Information-Extraction for Domain-Specific
Insights [1.6308268213252761]
We use a task-based approach for fulfilling specific information needs within a new domain.
A pipeline built from state-of-the-art NLP technologies is used to automatically extract an instance-level semantic structure.
arXiv Detail & Related papers (2021-04-18T19:28:10Z)
- Graph-Based Neural Network Models with Multiple Self-Supervised
Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation
Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
- Adversarial-Prediction Guided Multi-task Adaptation for Semantic
Segmentation of Electron Microscopy Images [5.027571997864707]
We introduce an adversarial-prediction guided multi-task network to learn the adaptation of a well-trained model for use on a novel unlabeled target domain.
Since no labels are available in the target domain, we learn an encoding representation not only for supervised segmentation on the source domain but also for unsupervised reconstruction of the target data.
arXiv Detail & Related papers (2020-04-05T09:18:11Z)
- Learning Cross-domain Generalizable Features by Representation
Disentanglement [11.74643883335152]
Deep learning models exhibit limited generalizability across different domains.
We propose Mutual-Information-based Disentangled Neural Networks (MIDNet) to extract generalizable features that enable transferring knowledge to unseen categorical features in target domains.
We demonstrate our method on handwritten digits datasets and a fetal ultrasound dataset for image classification tasks.
arXiv Detail & Related papers (2020-02-29T17:53:16Z)
- Graph Representation Learning via Graphical Mutual Information
Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
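Objectives of this kind are typically trained with a tractable lower bound on mutual information, scored on positive pairs (an input and its own encoding) versus negative pairs (an input and a corrupted or shuffled encoding). Below is a generic Jensen-Shannon-style estimator of that flavor as a sketch; it is an illustration of the family of objectives, not the exact GMI formulation.

```python
import numpy as np

def jsd_mi_lower_bound(pos_scores, neg_scores):
    """Jensen-Shannon-style MI lower bound: discriminator scores on
    positive (input, own embedding) pairs should be high, scores on
    negative (input, corrupted embedding) pairs should be low.

    softplus(x) = log(1 + e^x), computed stably via np.logaddexp.
    """
    sp = np.logaddexp(0.0, -pos_scores)  # softplus(-score) for positives
    sn = np.logaddexp(0.0, neg_scores)   # softplus(score) for negatives
    return -(sp.mean() + sn.mean())      # maximize this quantity
```

Training maximizes this bound, pushing the encoder to keep embeddings informative about their own inputs while remaining uninformative about mismatched ones.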
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
- Generative Adversarial Zero-Shot Relational Learning for Knowledge
Graphs [96.73259297063619]
We consider a novel formulation, zero-shot learning, to free this cumbersome curation.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge graph domains.
arXiv Detail & Related papers (2020-01-08T01:19:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.