MEIM: Multi-partition Embedding Interaction Beyond Block Term Format for
Efficient and Expressive Link Prediction
- URL: http://arxiv.org/abs/2209.15597v2
- Date: Tue, 4 Oct 2022 08:22:55 GMT
- Title: MEIM: Multi-partition Embedding Interaction Beyond Block Term Format for
Efficient and Expressive Link Prediction
- Authors: Hung Nghiep Tran, Atsuhiro Takasu
- Abstract summary: We introduce the Multi-Partition Embedding Interaction iMproved beyond block term format (MEIM) model.
MEIM improves expressiveness while still being highly efficient, helping it to outperform strong baselines and achieve state-of-the-art results.
- Score: 3.718476964451589
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge graph embedding aims to predict the missing relations between
entities in knowledge graphs. Tensor-decomposition-based models, such as
ComplEx, provide a good trade-off between efficiency and expressiveness, which
is crucial given the large size of real-world knowledge graphs. The recent
multi-partition embedding interaction (MEI) model subsumes these models by
using the block term tensor format and provides a systematic solution for the
trade-off. However, MEI has several drawbacks, some of which are carried over
from its subsumed tensor-decomposition-based models. In this paper, we address these
drawbacks and introduce the Multi-partition Embedding Interaction iMproved
beyond block term format (MEIM) model, with an independent core tensor for
ensemble effects and soft orthogonality for max-rank mapping, in addition to
multi-partition embedding. MEIM improves expressiveness while still being
highly efficient, helping it to outperform strong baselines and achieve
state-of-the-art results on difficult link prediction benchmarks using fairly
small embedding sizes. The source code is released at
https://github.com/tranhungnghiep/MEIM-KGE.
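The two main ingredients described in the abstract, multi-partition embeddings interacting through per-partition core tensors (block-term-style trilinear scoring) and a soft orthogonality penalty on a mapping matrix, can be sketched in NumPy. All shapes, names, and values below are illustrative assumptions, not the released MEIM implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: K partitions, each of dimension C.
K, C = 3, 4

# Multi-partition embeddings for head entity, relation, tail entity: shape (K, C).
h = rng.normal(size=(K, C))
r = rng.normal(size=(K, C))
t = rng.normal(size=(K, C))

# Independent core tensors, one per partition (the "ensemble effect" idea):
# shape (K, C, C, C) instead of a single shared (C, C, C) core.
W = rng.normal(size=(K, C, C, C))

def score(h, r, t, W):
    """Block-term-style trilinear score: per-partition contraction of the
    core tensor with head, relation, and tail, summed over partitions."""
    return float(np.einsum('kabc,ka,kb,kc->', W, h, r, t))

def soft_orthogonality_penalty(M):
    """||M^T M - I||_F^2: a soft penalty nudging a mapping matrix toward
    orthogonality (and hence max rank), rather than hard-constraining it."""
    gram = M.T @ M
    return float(np.sum((gram - np.eye(M.shape[1])) ** 2))

s = score(h, r, t, W)             # scalar plausibility score for (h, r, t)
M = rng.normal(size=(C, C))       # an illustrative relation mapping matrix
penalty = soft_orthogonality_penalty(M)
print(s, penalty)
```

In training, a sketch like this would add the penalty term (weighted by a hyperparameter) to the link prediction loss; an exactly orthogonal matrix incurs zero penalty.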
Related papers
- A Collaborative Ensemble Framework for CTR Prediction [73.59868761656317]
We propose a novel framework, Collaborative Ensemble Training Network (CETNet), to leverage multiple distinct models.
Unlike naive model scaling, our approach emphasizes diversity and collaboration through collaborative learning.
We validate our framework on three public datasets and a large-scale industrial dataset from Meta.
arXiv Detail & Related papers (2024-11-20T20:38:56Z)
- SurgeryV2: Bridging the Gap Between Model Merging and Multi-Task Learning with Deep Representation Surgery [54.866490321241905]
Model merging-based multitask learning (MTL) offers a promising approach for performing MTL by merging multiple expert models.
In this paper, we examine the merged model's representation distribution and uncover a critical issue of "representation bias".
This bias arises from a significant distribution gap between the representations of the merged and expert models, leading to the suboptimal performance of the merged MTL model.
arXiv Detail & Related papers (2024-10-18T11:49:40Z)
- Representation Surgery for Multi-Task Model Merging [57.63643005215592]
Multi-task learning (MTL) compresses the information from multiple tasks into a unified backbone to improve computational efficiency and generalization.
Recent work directly merges multiple independently trained models to perform MTL instead of collecting their raw data for joint training.
By visualizing the representation distribution of existing model merging schemes, we find that the merged model often suffers from the dilemma of representation bias.
arXiv Detail & Related papers (2024-02-05T03:39:39Z)
- On the Embedding Collapse when Scaling up Recommendation Models [53.66285358088788]
We identify the embedding collapse phenomenon as the inhibition of scalability, wherein the embedding matrix tends to occupy a low-dimensional subspace.
We propose a simple yet effective multi-embedding design incorporating embedding-set-specific interaction modules to learn embedding sets with large diversity.
arXiv Detail & Related papers (2023-10-06T17:50:38Z)
- IMKGA-SM: Interpretable Multimodal Knowledge Graph Answer Prediction via Sequence Modeling [3.867363075280544]
Multimodal knowledge graph link prediction aims to improve the accuracy and efficiency of link prediction tasks for multimodal data.
A new model is developed, namely Interpretable Multimodal Knowledge Graph Answer Prediction via Sequence Modeling (IMKGA-SM).
The model achieves much better performance than SOTA baselines on multimodal link prediction datasets of different sizes.
arXiv Detail & Related papers (2023-01-06T10:08:11Z)
- Efficient Relation-aware Neighborhood Aggregation in Graph Neural Networks via Tensor Decomposition [4.041834517339835]
We propose a novel knowledge graph encoder that incorporates tensor decomposition within the aggregation function of the Relational Graph Convolutional Network (R-GCN).
Our model enhances the representation of neighboring entities by employing projection matrices of a low-rank tensor defined by relation types.
We adopt a training strategy inspired by contrastive learning to relieve the training limitation of the 1-k-k encoder method when handling vast graphs.
arXiv Detail & Related papers (2022-12-11T19:07:34Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Mitigating Modality Collapse in Multimodal VAEs via Impartial Optimization [7.4262579052708535]
We argue that this effect is a consequence of conflicting gradients during multimodal VAE training.
We show how to detect the sub-graphs in the computational graphs where gradients conflict.
We empirically show that our framework significantly improves the reconstruction performance, conditional generation, and coherence of the latent space across modalities.
arXiv Detail & Related papers (2022-06-09T13:29:25Z)
- Multi-Partition Embedding Interaction with Block Term Format for Knowledge Graph Completion [3.718476964451589]
Knowledge graph embedding methods perform the task by representing entities and relations as embedding vectors.
Previous work has usually treated each embedding as a whole and has modeled the interactions between these whole embeddings.
We propose the multi-partition embedding interaction (MEI) model with block term format to address this problem.
arXiv Detail & Related papers (2020-06-29T20:37:11Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.