A Probit Tensor Factorization Model For Relational Learning
- URL: http://arxiv.org/abs/2111.03943v2
- Date: Tue, 9 Nov 2021 02:15:28 GMT
- Title: A Probit Tensor Factorization Model For Relational Learning
- Authors: Ye Liu, Rui Song, Wenbin Lu, Yanghua Xiao
- Abstract summary: We propose a binary tensor factorization model with probit link, which inherits the computational efficiency of the classic tensor factorization model.
Our proposed probit tensor factorization (PTF) model shows advantages in both prediction accuracy and interpretability.
- Score: 31.613211987639296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the proliferation of knowledge graphs, modeling data with complex
multirelational structure has gained increasing attention in the area of
statistical relational learning. One of the most important goals of statistical
relational learning is link prediction, i.e., predicting whether certain
relations exist in the knowledge graph. A large number of models and algorithms
have been proposed to perform link prediction, among which tensor factorization
methods have proven to achieve state-of-the-art performance in terms of
computational efficiency and prediction accuracy. However, a common drawback of
the existing tensor factorization models is that the missing relations and
non-existing relations are treated in the same way, which results in a loss of
information. To address this issue, we propose a binary tensor factorization
model with probit link, which not only inherits the computational efficiency from
the classic tensor factorization model but also accounts for the binary nature
of relational data. Our proposed probit tensor factorization (PTF) model shows
advantages in both prediction accuracy and interpretability.
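
To make the abstract's idea concrete, here is a minimal NumPy sketch, assuming a CP/DistMult-style trilinear score and a plain Bernoulli likelihood; the paper's exact factorization, constraints, and fitting algorithm may differ. The probit link maps the real-valued factorization score to a probability, and the likelihood runs over observed positive and negative triples only, so missing entries are not conflated with non-existing ones.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_entities, n_relations, rank = 100, 5, 16

# One latent factor vector per entity and per relation (CP/DistMult style).
E = rng.normal(scale=0.1, size=(n_entities, rank))
R = rng.normal(scale=0.1, size=(n_relations, rank))

def score(i, k, j):
    """Real-valued factorization score for the triple (subject i, relation k, object j)."""
    return float(np.sum(E[i] * R[k] * E[j]))

def link_probability(i, k, j):
    """Probit link: P(y_ikj = 1) = Phi(score(i, k, j)), Phi the standard normal CDF."""
    return norm.cdf(score(i, k, j))

def log_likelihood(observed):
    """Bernoulli log-likelihood over *observed* triples only.

    `observed` is a list of ((i, k, j), y) pairs with y in {0, 1};
    unobserved cells of the tensor simply do not appear in the sum,
    so missing relations are treated differently from non-existing ones.
    """
    ll = 0.0
    for (i, k, j), y in observed:
        p = np.clip(link_probability(i, k, j), 1e-9, 1 - 1e-9)
        ll += y * np.log(p) + (1 - y) * np.log(1 - p)
    return ll
```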
Related papers
- Learning Latent Graph Structures and their Uncertainty [63.95971478893842]
Graph Neural Networks (GNNs) use relational information as an inductive bias to enhance the model's accuracy.
As task-relevant relations might be unknown, graph structure learning approaches have been proposed to learn them while solving the downstream prediction task.
arXiv Detail & Related papers (2024-05-30T10:49:22Z)
- Factor Augmented Tensor-on-Tensor Neural Networks [3.0040661953201475]
We propose a Factor Augmented Tensor-on-Tensor Neural Network (FATTNN) that integrates tensor factor models into deep neural networks.
We show that our proposed algorithms achieve substantial increases in prediction accuracy and significant reductions in computational time.
arXiv Detail & Related papers (2024-05-30T01:56:49Z)
- Learning Complete Topology-Aware Correlations Between Relations for Inductive Link Prediction [121.65152276851619]
We show that semantic correlations between relations are inherently edge-level and entity-independent.
We propose a novel subgraph-based method, namely TACO, to model Topology-Aware COrrelations between relations.
To further exploit the potential of RCN, we propose a Complete Common Neighbor induced subgraph.
arXiv Detail & Related papers (2023-09-20T08:11:58Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important for forecasting nonstationary processes or data drawn from a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the underlying tessellation and approximate the multiple hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Relation-dependent Contrastive Learning with Cluster Sampling for Inductive Relation Prediction [30.404149577013595]
We introduce Relation-dependent Contrastive Learning (ReCoLe) for inductive relation prediction.
The GNN-based encoder is optimized by contrastive learning, which ensures satisfactory performance on long-tail relations.
Experimental results suggest that ReCoLe outperforms state-of-the-art methods on commonly used inductive datasets.
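
ReCoLe's full objective, with its cluster sampling, is more involved than this summary conveys; as a reference point, here is a hedged sketch of the standard InfoNCE contrastive loss that such encoders are typically trained with (names and shapes are illustrative, not the paper's):

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Standard InfoNCE loss: pull the anchor embedding toward the
    positive and push it away from the negatives, at temperature tau."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    z = logits - logits.max()  # numerical stabilization
    return float(-np.log(np.exp(z[0]) / np.exp(z).sum()))

# Toy usage with random vectors standing in for GNN-encoded subgraphs.
rng = np.random.default_rng(0)
a = rng.normal(size=64)
loss = info_nce(a, a + 0.05 * rng.normal(size=64),
                [rng.normal(size=64) for _ in range(8)])
```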
arXiv Detail & Related papers (2022-11-22T13:30:49Z)
- ER: Equivariance Regularizer for Knowledge Graph Completion [107.51609402963072]
We propose a new regularizer, namely the Equivariance Regularizer (ER).
ER can enhance the generalization ability of the model by employing the semantic equivariance between the head and tail entities.
The experimental results indicate a clear and substantial improvement over the state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-24T08:18:05Z)
- Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z)
- Beyond Marginal Uncertainty: How Accurately can Bayesian Regression Models Estimate Posterior Predictive Correlations? [13.127549105535623]
It is often more useful to estimate predictive correlations between the function values at different input locations.
We first consider a downstream task which depends on posterior predictive correlations: transductive active learning (TAL).
Since TAL is too expensive and indirect to guide development of algorithms, we introduce two metrics which more directly evaluate the predictive correlations.
arXiv Detail & Related papers (2020-11-06T03:48:59Z)
- Understanding Neural Abstractive Summarization Models via Uncertainty [54.37665950633147]
Seq2seq abstractive summarization models generate text in a free-form manner.
We study the entropy, or uncertainty, of the model's token-level predictions.
We show that uncertainty is a useful perspective for analyzing summarization and text generation models more broadly.
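
A quick illustration of the quantity being studied, in plain NumPy; the paper analyzes real seq2seq models, so the logits below are stand-ins:

```python
import numpy as np

def token_entropy(logits):
    """Shannon entropy (in nats) of the next-token distribution
    obtained by softmaxing the model's logits at one decoding step."""
    z = logits - logits.max()          # numerical stabilization
    p = np.exp(z) / np.exp(z).sum()
    return float(-(p * np.log(p + 1e-12)).sum())

# A peaked distribution (model is certain) vs. a flat one (uncertain).
print(token_entropy(np.array([10.0, 0.0, 0.0])))  # near 0
print(token_entropy(np.zeros(50000)))             # ~ log(50000)
```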
arXiv Detail & Related papers (2020-10-15T16:57:27Z)
- LowFER: Low-rank Bilinear Pooling for Link Prediction [4.110108749051657]
We propose a factorized bilinear pooling model, commonly used in multi-modal learning, for better fusion of entities and relations.
Our model naturally generalizes the Tucker decomposition based TuckER model, which has been shown to generalize other models.
We evaluate on real-world datasets, reaching on-par or state-of-the-art performance.
arXiv Detail & Related papers (2020-08-25T07:33:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.