Lorentz Equivariant Model for Knowledge-Enhanced Hyperbolic
Collaborative Filtering
- URL: http://arxiv.org/abs/2302.04545v2
- Date: Sat, 18 Mar 2023 02:10:10 GMT
- Title: Lorentz Equivariant Model for Knowledge-Enhanced Hyperbolic
Collaborative Filtering
- Authors: Bosong Huang, Weihao Yu, Ruzhong Xie, Jing Xiao, Jin Huang
- Abstract summary: We introduce prior auxiliary information from the knowledge graph (KG) to assist the user-item graph.
We propose a rigorously Lorentz group equivariant knowledge-enhanced collaborative filtering model (LECF).
We show that LECF remarkably outperforms state-of-the-art methods.
- Score: 19.57064597050846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Introducing prior auxiliary information from the knowledge graph (KG) to
assist the user-item graph can improve the comprehensive performance of the
recommender system. Many recent studies show that the properties of hyperbolic
spaces fit well with the scale-free and hierarchical characteristics exhibited
by these two types of graphs. However, existing hyperbolic methods do not
consider equivariance, so they cannot generalize symmetric features under given
transformations, which seriously limits the capability of the model. Moreover,
they cannot balance preserving heterogeneity with mining high-order entity
information for users across the two graphs. To fill
these gaps, we propose a rigorously Lorentz group equivariant
knowledge-enhanced collaborative filtering model (LECF). Innovatively, we
jointly update the attribute embeddings (containing the high-order entity
signals from the KG) and hyperbolic embeddings (the distance between hyperbolic
embeddings reveals the recommendation tendency) by the LECF layer with Lorentz
Equivariant Transformation. Moreover, we propose a Hyperbolic Sparse Attention
Mechanism to sample the most informative neighbor nodes. Lorentz equivariance
is strictly maintained throughout the entire model, and enforcing equivariance
is proven necessary experimentally. Extensive experiments on three real-world
benchmarks demonstrate that LECF remarkably outperforms state-of-the-art
methods.
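The geometry behind the abstract can be illustrated in a few lines: hyperboloid-model embeddings satisfy a Lorentz inner product of -1 with themselves, the Lorentz distance between two embeddings scores a user-item pair, and any Lorentz transformation (here a boost) leaves those distances unchanged, which is what equivariance preserves. A minimal sketch of these standard facts, not the authors' LECF implementation (all function names are ours):

```python
import math

def minkowski(u, v):
    # Lorentz (Minkowski) inner product: -u0*v0 + sum_i ui*vi
    return -u[0] * v[0] + sum(a * b for a, b in zip(u[1:], v[1:]))

def lorentz_dist(u, v):
    # Geodesic distance on the hyperboloid <x,x>_L = -1
    return math.acosh(max(1.0, -minkowski(u, v)))

def lift(x):
    # Lift a Euclidean point onto the hyperboloid (time component first)
    return [math.sqrt(1.0 + sum(c * c for c in x))] + list(x)

def boost(p, phi):
    # Lorentz boost mixing the time axis with the first spatial axis;
    # it preserves the Minkowski inner product, hence all distances.
    t, x1 = p[0], p[1]
    return [math.cosh(phi) * t + math.sinh(phi) * x1,
            math.sinh(phi) * t + math.cosh(phi) * x1] + list(p[2:])

u, v = lift([0.3, -0.2]), lift([-0.1, 0.5])
d_before = lorentz_dist(u, v)
d_after = lorentz_dist(boost(u, 0.7), boost(v, 0.7))
assert abs(d_before - d_after) < 1e-9  # distance is invariant under the boost
```

Because the recommendation score depends only on such distances, applying the same Lorentz transformation to every embedding leaves all predictions unchanged.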
Related papers
- Fully Hyperbolic Rotation for Knowledge Graph Embedding [12.69417276887153]
We propose a novel fully hyperbolic model designed for knowledge graph embedding.
Our model considers each relation in knowledge graphs as a Lorentz rotation from the head entity to the tail entity.
Our model achieves competitive results with fewer parameters.
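The idea of treating a relation as a Lorentz rotation can be made concrete: rotating only the spatial coordinates fixes the time axis, is a valid Lorentz transformation, and therefore keeps embeddings on the hyperboloid; a triple can then be scored by the Lorentz distance between the rotated head and the tail. A toy sketch under our own simplifications (scalar rotation angle per relation; not the paper's model):

```python
import math

def lorentz_inner(u, v):
    # Minkowski inner product with signature (-, +, +)
    return -u[0] * v[0] + sum(a * b for a, b in zip(u[1:], v[1:]))

def lift(x):
    # Lift a Euclidean point onto the hyperboloid <x,x>_L = -1
    return [math.sqrt(1.0 + sum(c * c for c in x))] + list(x)

def relation_rotation(p, theta):
    # Rotating the spatial coordinates is a Lorentz transformation that
    # fixes the time axis, so the point stays on the hyperboloid.
    c, s = math.cos(theta), math.sin(theta)
    return [p[0], c * p[1] - s * p[2], s * p[1] + c * p[2]]

def triple_score(head, theta, tail):
    # Hypothetical score: negative Lorentz distance between the rotated
    # head and the tail (larger means a more plausible triple).
    moved = relation_rotation(head, theta)
    return -math.acosh(max(1.0, -lorentz_inner(moved, tail)))

h, t = lift([0.4, 0.1]), lift([0.1, 0.4])
# Rotating the head toward the tail should raise the score.
good = triple_score(h, math.pi / 2, t)
bad = triple_score(h, 0.0, t)
```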
arXiv Detail & Related papers (2024-11-06T02:41:26Z)
- Disentanglement with Factor Quantized Variational Autoencoders [11.086500036180222]
We propose a discrete variational autoencoder (VAE) based model where the ground truth information about the generative factors is not provided to the model.
We demonstrate the advantages of learning discrete representations over learning continuous representations in facilitating disentanglement.
Our method called FactorQVAE is the first method that combines optimization based disentanglement approaches with discrete representation learning.
arXiv Detail & Related papers (2024-09-23T09:33:53Z)
- Optimizing Training Trajectories in Variational Autoencoders via Latent Bayesian Optimization Approach [0.0]
Unsupervised and semi-supervised ML methods have become widely adopted across multiple areas of physics, chemistry, and materials sciences.
We propose a latent Bayesian optimization (zBO) approach for hyperparameter trajectory optimization in unsupervised and semi-supervised ML.
We demonstrate an application of this method for finding joint discrete and continuous rotationally invariant representations for MNIST and experimental data of a plasmonic nanoparticles material system.
arXiv Detail & Related papers (2022-06-30T23:41:47Z)
- Optimization-Induced Graph Implicit Nonlinear Diffusion [64.39772634635273]
We propose a new kind of graph convolution variant, called Graph Implicit Nonlinear Diffusion (GIND).
GIND implicitly has access to infinite hops of neighbors while adaptively aggregating features with nonlinear diffusion to prevent over-smoothing.
We show that the learned representation can be formalized as the minimizer of an explicit convex optimization objective.
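The "infinite hops" property of implicit diffusion can be illustrated with a toy fixed-point iteration: instead of stacking K propagation layers, one solves for an equilibrium z* = tanh(αAz* + x), which aggregates information from arbitrarily distant nodes. A minimal sketch under our own simplifying choices (scalar node features, tanh nonlinearity; not the paper's actual operator):

```python
import math

# Toy 3-node path graph with a row-normalized adjacency matrix.
A = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]
x = [1.0, 0.0, -1.0]  # input node features (1-dimensional for simplicity)

def step(z):
    # One nonlinear diffusion step; the equilibrium z* = step(z*) plays
    # the role of the implicit, infinite-hop representation.
    agg = [sum(A[i][j] * z[j] for j in range(3)) for i in range(3)]
    return [math.tanh(0.5 * agg[i] + x[i]) for i in range(3)]

# The map is a contraction (Lipschitz constant <= 0.5), so plain
# fixed-point iteration converges to the implicit representation.
z = [0.0, 0.0, 0.0]
for _ in range(100):
    z_new = step(z)
    if max(abs(a - b) for a, b in zip(z_new, z)) < 1e-10:
        break
    z = z_new
```

In practice such equilibria are found with accelerated solvers and differentiated implicitly, but the fixed-point condition is the same.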
arXiv Detail & Related papers (2022-06-29T06:26:42Z)
- ER: Equivariance Regularizer for Knowledge Graph Completion [107.51609402963072]
We propose a new regularizer, namely the Equivariance Regularizer (ER).
ER can enhance the generalization ability of the model by employing the semantic equivariance between the head and tail entities.
The experimental results indicate a clear and substantial improvement over the state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-24T08:18:05Z)
- Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z)
- Symmetry-driven graph neural networks [1.713291434132985]
We introduce two graph network architectures that are equivariant to several types of transformations affecting the node coordinates.
We demonstrate these capabilities on a synthetic dataset composed of $n$-dimensional geometric objects.
arXiv Detail & Related papers (2021-05-28T18:54:12Z)
- Parameterized Hypercomplex Graph Neural Networks for Graph Classification [1.1852406625172216]
We develop graph neural networks that leverage the properties of hypercomplex feature transformation.
In particular, in our proposed class of models, the multiplication rule specifying the algebra itself is inferred from the data during training.
We test our proposed hypercomplex GNN on several open graph benchmark datasets and show that our models reach state-of-the-art performance.
arXiv Detail & Related papers (2021-03-30T18:01:06Z)
- Hyperbolic Graph Embedding with Enhanced Semi-Implicit Variational Inference [48.63194907060615]
We build on semi-implicit graph variational autoencoders to capture higher-order statistics in a low-dimensional graph latent representation.
We incorporate hyperbolic geometry in the latent space through a Poincaré embedding to efficiently represent graphs exhibiting hierarchical structure.
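The Poincaré-ball distance that such hierarchical embeddings rely on is simple to state: points live in the open unit ball, and distances blow up near the boundary, giving exponentially more room for leaves than for roots. A small sketch of the standard formula (our illustration, not the paper's code):

```python
import math

def poincare_dist(x, y):
    # Distance in the Poincare ball (points with Euclidean norm < 1);
    # it grows rapidly near the boundary, which suits tree-like data.
    sq = lambda v: sum(c * c for c in v)
    diff = sq([a - b for a, b in zip(x, y)])
    return math.acosh(1.0 + 2.0 * diff / ((1.0 - sq(x)) * (1.0 - sq(y))))

# Two points near the boundary are much farther apart than a point
# near the origin is from a moderately placed neighbor.
d_root = poincare_dist([0.0, 0.0], [0.5, 0.0])
d_leaf = poincare_dist([0.9, 0.0], [0.0, 0.9])
```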
arXiv Detail & Related papers (2020-10-31T05:48:34Z)
- RatE: Relation-Adaptive Translating Embedding for Knowledge Graph Completion [51.64061146389754]
We propose a relation-adaptive translation function built upon a novel weighted product in complex space.
We then present our Relation-adaptive translating Embedding (RatE) approach to score each graph triple.
arXiv Detail & Related papers (2020-10-10T01:30:30Z)
- Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method outperforms state-of-the-art methods to a significant extent.
arXiv Detail & Related papers (2020-04-30T11:52:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.