End-to-End Entity Classification on Multimodal Knowledge Graphs
- URL: http://arxiv.org/abs/2003.12383v1
- Date: Wed, 25 Mar 2020 14:57:52 GMT
- Title: End-to-End Entity Classification on Multimodal Knowledge Graphs
- Authors: W.X. Wilcke (1), P. Bloem (1), V. de Boer (1), R.H. van 't Veer (2),
F.A.H. van Harmelen (1) ((1) Department of Computer Science, Vrije
Universiteit Amsterdam, The Netherlands; (2) Geodan, Amsterdam, The Netherlands)
- Abstract summary: We propose a multimodal message passing network which learns end-to-end from the structure of graphs.
Our model uses dedicated (neural) encoders to naturally learn embeddings for node features belonging to five different types of modalities.
Our results support our hypothesis that including information from multiple modalities helps our models obtain better overall performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: End-to-end multimodal learning on knowledge graphs has been left largely
unaddressed. Instead, most end-to-end models such as message passing networks
learn solely from the relational information encoded in graphs' structure: raw
values, or literals, are either omitted completely or are stripped from their
values and treated as regular nodes. In either case we lose potentially
relevant information which could have otherwise been exploited by our learning
methods. To avoid this, we must treat literals and non-literals as separate
cases. We must also address each modality separately and accordingly: numbers,
texts, images, geometries, et cetera. We propose a multimodal message passing
network which not only learns end-to-end from the structure of graphs, but also
from their possibly diverse set of multimodal node features. Our model uses
dedicated (neural) encoders to naturally learn embeddings for node features
belonging to five different types of modalities, including images and
geometries, which are projected into a joint representation space together with
their relational information. We demonstrate our model on a node classification
task, and evaluate the effect that each modality has on the overall
performance. Our results support our hypothesis that including information from
multiple modalities helps our models obtain better overall performance.
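The architecture the abstract describes — dedicated per-modality encoders whose outputs are projected into a joint representation space and combined with relational message passing for node classification — can be sketched roughly as follows. This is a minimal numpy illustration under stated assumptions, not the authors' implementation: the linear "encoders", toy dimensions, R-GCN-style per-relation aggregation, and the softmax head are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8          # joint embedding dimension (assumed)
num_nodes = 4  # toy graph with 2 relation types, edges as (src, rel, dst)
edges = [(0, 0, 1), (1, 0, 2), (2, 1, 3), (3, 1, 0)]

# Per-modality raw literals attached to nodes (illustrative stand-ins):
numeric_feats = {0: np.array([3.5])}   # e.g. a numeric literal
text_feats    = {1: rng.random(16)}    # e.g. an encoded text string
image_feats   = {2: rng.random(32)}    # e.g. flattened image pixels
# Node 3 has no literals and relies on its structural embedding alone.

# Dedicated encoder per modality: here plain linear projections into the
# joint space (the paper uses neural encoders, e.g. CNNs for images).
W_num = rng.random((1, d))
W_txt = rng.random((16, d))
W_img = rng.random((32, d))
base = rng.random((num_nodes, d))  # structural (relational) embeddings

H = base.copy()
for n, x in numeric_feats.items(): H[n] += x @ W_num
for n, x in text_feats.items():    H[n] += x @ W_txt
for n, x in image_feats.items():   H[n] += x @ W_img

# One relational message-passing step (R-GCN-style): each relation type
# gets its own weight matrix; messages are averaged over incoming edges.
W_rel = rng.random((2, d, d))
agg = np.zeros_like(H)
counts = np.zeros(num_nodes)
for src, rel, dst in edges:
    agg[dst] += H[src] @ W_rel[rel]
    counts[dst] += 1
H_next = H + agg / np.maximum(counts, 1)[:, None]

# Node-classification head: softmax over (assumed) 3 classes.
W_cls = rng.random((d, 3))
logits = H_next @ W_cls
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
print(probs.shape)  # one class distribution per node
```

The key design point the abstract argues for is visible in the sketch: literals are not dropped or flattened into anonymous nodes, but encoded per modality and mixed into the same space that message passing operates on.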
Related papers
- GraphFM: A Scalable Framework for Multi-Graph Pretraining [2.882104808886318]
We introduce a scalable multi-graph multi-task pretraining approach specifically tailored for node classification tasks across diverse graph datasets from different domains.
We demonstrate the efficacy of our approach by training a model on 152 different graph datasets comprising over 7.4 million nodes and 189 million edges.
Our results show that pretraining on a diverse array of real and synthetic graphs improves the model's adaptability and stability, while performing competitively with state-of-the-art specialist models.
arXiv Detail & Related papers (2024-07-16T16:51:43Z)
- NativE: Multi-modal Knowledge Graph Completion in the Wild [51.80447197290866]
We propose a comprehensive framework NativE to achieve MMKGC in the wild.
NativE proposes a relation-guided dual adaptive fusion module that enables adaptive fusion for any modalities.
We construct a new benchmark called WildKGC with five datasets to evaluate our method.
arXiv Detail & Related papers (2024-03-28T03:04:00Z)
- Representation learning in multiplex graphs: Where and how to fuse information? [5.0235828656754915]
Multiplex graphs possess richer information, provide better modeling capabilities and integrate more detailed data from potentially different sources.
In this paper, we tackle the problem of learning representations for nodes in multiplex networks in an unsupervised or self-supervised manner.
We propose improvements in how to construct GNN architectures that deal with multiplex graphs.
arXiv Detail & Related papers (2024-02-27T21:47:06Z)
- End-to-End Learning on Multimodal Knowledge Graphs [0.0]
We propose a multimodal message passing network which learns end-to-end from the structure of graphs.
Our model uses dedicated (neural) encoders to naturally learn embeddings for node features belonging to five different types of modalities.
Our results indicate that end-to-end multimodal learning from any arbitrary knowledge graph is indeed possible.
arXiv Detail & Related papers (2023-09-03T13:16:18Z)
- KMF: Knowledge-Aware Multi-Faceted Representation Learning for Zero-Shot Node Classification [75.95647590619929]
Zero-Shot Node Classification (ZNC) has been an emerging and crucial task in graph data analysis.
We propose a Knowledge-Aware Multi-Faceted framework (KMF) that enhances the richness of label semantics.
A novel geometric constraint is developed to alleviate the problem of prototype drift caused by node information aggregation.
arXiv Detail & Related papers (2023-08-15T02:38:08Z)
- Learning Representations without Compositional Assumptions [79.12273403390311]
We propose a data-driven approach that learns feature set dependencies by representing feature sets as graph nodes and their relationships as learnable edges.
We also introduce LEGATO, a novel hierarchical graph autoencoder that learns a smaller, latent graph to aggregate information from multiple views dynamically.
arXiv Detail & Related papers (2023-05-31T10:36:10Z)
- PersonaSAGE: A Multi-Persona Graph Neural Network [27.680820534771485]
We develop a persona-based graph neural network framework called PersonaSAGE that learns multiple persona-based embeddings for each node in the graph.
PersonaSAGE learns the appropriate set of persona embeddings for each node in the graph, and every node can have a different number of assigned persona embeddings.
Experiments demonstrate the effectiveness of PersonaSAGE for a variety of important tasks including link prediction.
arXiv Detail & Related papers (2022-12-28T05:50:38Z)
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
The best models for such data types in most standard supervised learning settings with IID (non-graph) data are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
arXiv Detail & Related papers (2022-06-16T22:46:33Z)
- GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph [53.70520466556453]
We propose GraphFormers, where layerwise GNN components are nested alongside the transformer blocks of language models.
With the proposed architecture, the text encoding and the graph aggregation are fused into an iterative workflow.
In addition, a progressive learning strategy is introduced, where the model is successively trained on manipulated data and original data to reinforce its capability of integrating information on graphs.
arXiv Detail & Related papers (2021-05-06T12:20:41Z)
- Unsupervised Differentiable Multi-aspect Network Embedding [52.981277420394846]
We propose a novel end-to-end framework for multi-aspect network embedding, called asp2vec.
Our proposed framework can be readily extended to heterogeneous networks.
arXiv Detail & Related papers (2020-06-07T19:26:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.