End-to-End Learning on Multimodal Knowledge Graphs
- URL: http://arxiv.org/abs/2309.01169v1
- Date: Sun, 3 Sep 2023 13:16:18 GMT
- Title: End-to-End Learning on Multimodal Knowledge Graphs
- Authors: W. X. Wilcke, P. Bloem, V. de Boer, R. H. van 't Veer
- Abstract summary: We propose a multimodal message passing network which learns end-to-end from the structure of graphs.
Our model uses dedicated (neural) encoders to naturally learn embeddings for node features belonging to five different types of modalities.
Our results indicate that end-to-end multimodal learning from any arbitrary knowledge graph is indeed possible.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Knowledge graphs enable data scientists to learn end-to-end on heterogeneous
knowledge. However, most end-to-end models solely learn from the relational
information encoded in graphs' structure: raw values, encoded as literal nodes,
are either omitted completely or treated as regular nodes without consideration
for their values. In either case we lose potentially relevant information which
could have otherwise been exploited by our learning methods. We propose a
multimodal message passing network which not only learns end-to-end from the
structure of graphs, but also from their possibly diverse set of multimodal node
features. Our model uses dedicated (neural) encoders to naturally learn
embeddings for node features belonging to five different types of modalities,
including numbers, texts, dates, images and geometries, which are projected
into a joint representation space together with their relational information.
We implement and demonstrate our model on node classification and link
prediction for artificial and real-world datasets, and evaluate the effect
that each modality has on the overall performance in an inverse ablation study.
Our results indicate that end-to-end multimodal learning from any arbitrary
knowledge graph is indeed possible, and that including multimodal information
can significantly affect performance, but that much depends on the
characteristics of the data.
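The abstract describes dedicated per-modality encoders whose outputs are projected into a joint representation space, where message passing then mixes literal features with relational structure. The following is a minimal, hypothetical sketch of that idea; the featurizers, projection matrices, and averaging scheme are illustrative stand-ins, not the authors' implementation (which uses trained neural encoders such as CNNs for images and character-level models for text).

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # joint embedding dimension (illustrative choice)

# Stand-in encoders: each maps a raw literal value to a fixed-size vector.
def encode_number(x):
    return np.array([x, x * x])  # 2-d numeric feature (illustrative)

def encode_text(s):
    # bag-of-bytes histogram as a stand-in for a neural text encoder
    h = np.zeros(8)
    for b in s.encode():
        h[b % 8] += 1.0
    return h

ENCODERS = {"number": (encode_number, 2), "text": (encode_text, 8)}

# One projection per modality into the shared DIM-dimensional space.
PROJECTIONS = {m: rng.normal(scale=0.1, size=(DIM, d))
               for m, (_, d) in ENCODERS.items()}

def embed_literal(modality, value):
    enc, _ = ENCODERS[modality]
    return PROJECTIONS[modality] @ enc(value)

# One message-passing step: each node accumulates its neighbours'
# joint-space embeddings, so literal features and relational structure
# are combined in the same space.
def message_pass(node_vecs, edges):
    out = {n: v.copy() for n, v in node_vecs.items()}
    for src, dst in edges:
        out[dst] += node_vecs[src]
    return {n: v / (1 + sum(1 for _, d in edges if d == n))
            for n, v in out.items()}

# Toy graph: an entity node linked to a numeric and a text literal.
vecs = {
    "entity": np.zeros(DIM),
    "lit_num": embed_literal("number", 3.5),
    "lit_txt": embed_literal("text", "Amsterdam"),
}
edges = [("lit_num", "entity"), ("lit_txt", "entity")]
updated = message_pass(vecs, edges)
```

After one step, the entity's embedding already carries information from both literal modalities; stacking such steps (with learned weights and nonlinearities) gives the end-to-end multimodal message passing the paper evaluates.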
Related papers
- Representation learning in multiplex graphs: Where and how to fuse information? (arXiv, 2024-02-27)
  Multiplex graphs possess richer information, provide better modeling capabilities, and integrate more detailed data from potentially different sources. In this paper, we tackle the problem of learning representations for nodes in multiplex networks in an unsupervised or self-supervised manner. We propose improvements in how to construct GNN architectures that deal with multiplex graphs.
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations (arXiv, 2024-01-28)
  Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures. A novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes. Experimental results on several graph benchmark datasets verify DGNN's superiority in the node classification task.
- KMF: Knowledge-Aware Multi-Faceted Representation Learning for Zero-Shot Node Classification (arXiv, 2023-08-15)
  Zero-Shot Node Classification (ZNC) has been an emerging and crucial task in graph data analysis. We propose a Knowledge-Aware Multi-Faceted framework (KMF) that enhances the richness of label semantics. A novel geometric constraint is developed to alleviate the problem of prototype drift caused by node information aggregation.
- Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph Propagation (arXiv, 2023-06-14)
  We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings. We conduct extensive experiments and evaluate our model on large-scale real-world data.
- Learning Representations without Compositional Assumptions (arXiv, 2023-05-31)
  We propose a data-driven approach that learns feature-set dependencies by representing feature sets as graph nodes and their relationships as learnable edges. We also introduce LEGATO, a novel hierarchical graph autoencoder that learns a smaller latent graph to aggregate information from multiple views dynamically.
- Learning Strong Graph Neural Networks with Weak Information (arXiv, 2023-05-29)
  We develop a principled approach to the problem of graph learning with weak information (GLWI). We propose D²PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities.
- PersonaSAGE: A Multi-Persona Graph Neural Network (arXiv, 2022-12-28)
  We develop a persona-based graph neural network framework called PersonaSAGE that learns multiple persona-based embeddings for each node in the graph. PersonaSAGE learns the appropriate set of persona embeddings for each node, and every node can have a different number of assigned persona embeddings. Experiments demonstrate the effectiveness of PersonaSAGE for a variety of important tasks, including link prediction.
- Graph Representation Learning by Ensemble Aggregating Subgraphs via Mutual Information Maximization (arXiv, 2021-03-24)
  We introduce a self-supervised learning method to enhance the graph-level representations learned by graph neural networks. To get a comprehensive understanding of the graph structure, we propose an ensemble-learning-like subgraph method. To achieve efficient and effective contrastive learning, a Head-Tail contrastive sample construction method is proposed.
- Learning the Implicit Semantic Representation on Graph-Structured Data (arXiv, 2021-01-16)
  Existing representation learning methods in graph convolutional networks are mainly designed by describing the neighborhood of each node as a perceptual whole. We propose Semantic Graph Convolutional Networks (SGCN), which explore the implicit semantics by learning latent semantic paths in graphs.
- End-to-End Entity Classification on Multimodal Knowledge Graphs (arXiv, 2020-03-25)
  We propose a multimodal message passing network which learns end-to-end from the structure of graphs. Our model uses dedicated (neural) encoders to naturally learn embeddings for node features belonging to five different types of modalities. Our results support our hypothesis that including information from multiple modalities can help our models obtain better overall performance.
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.