Duality-Induced Regularizer for Tensor Factorization Based Knowledge
Graph Completion
- URL: http://arxiv.org/abs/2011.05816v2
- Date: Tue, 4 May 2021 06:39:44 GMT
- Title: Duality-Induced Regularizer for Tensor Factorization Based Knowledge
Graph Completion
- Authors: Zhanqiu Zhang, Jianyu Cai, Jie Wang
- Abstract summary: We propose a novel regularizer -- namely, DUality-induced RegulArizer (DURA) -- which is effective in improving the performance of existing models.
Experiments show that DURA yields consistent and significant improvements on benchmarks.
- Score: 12.571769130252749
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tensor factorization based models have shown great power in knowledge graph
completion (KGC). However, they often suffer severely from overfitting. This
has motivated various regularizers -- such as the squared Frobenius norm and
tensor nuclear norm regularizers -- but their limited applicability
significantly restricts their practical use. To address this challenge, we
propose a novel regularizer -- namely, DUality-induced RegulArizer (DURA) --
which is not only effective in improving the performance of existing models
but also widely applicable to various methods. The major novelty
of DURA is based on the observation that, for an existing tensor factorization
based KGC model (primal), there is often another distance based KGC model
(dual) closely associated with it. Experiments show that DURA yields consistent
and significant improvements on benchmarks.
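As a rough sketch (not the authors' released code), the primal-dual observation can be illustrated with a CP-style factorization: the squared distance of the associated dual model expands into the primal score plus norm terms, and those leftover norm terms act as the regularizer. The variable names and the weight `lam` below are illustrative assumptions.

```python
import numpy as np

def cp_score(h, r, t):
    """Primal CP-style score: <h, r, t> = sum_i h_i * r_i * t_i."""
    return float(np.sum(h * r * t))

def dual_distance(h, r, t):
    """Dual distance-based score: ||h*r - t||^2.
    Expanding it yields -2 * cp_score plus the norm terms below."""
    return float(np.sum((h * r - t) ** 2))

def dura_reg(h, r, t):
    """DURA-style regularizer (sketch): the norm terms left over from
    the dual distance, applied in both relation directions."""
    return float(np.sum((h * r) ** 2) + np.sum(t ** 2)
                 + np.sum((t * r) ** 2) + np.sum(h ** 2))

def regularized_loss(task_loss, h, r, t, lam=0.1):
    """Training objective: task loss plus the weighted DURA term."""
    return task_loss + lam * dura_reg(h, r, t)

# The algebraic link between primal and dual:
# ||h*r - t||^2 = ||h*r||^2 - 2 * <h*r, t> + ||t||^2
h = np.array([1.0, 2.0])
r = np.array([1.0, 1.0])
t = np.array([3.0, 0.0])
lhs = dual_distance(h, r, t)
rhs = float(np.sum((h * r) ** 2)) - 2 * cp_score(h, r, t) + float(np.sum(t ** 2))
```

Because maximizing the primal score while the norm terms are penalized is equivalent (up to constants) to minimizing the dual distance, the same regularizer form transfers to other tensor factorization models that admit such a dual.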
Related papers
- Enabling Causal Discovery in Post-Nonlinear Models with Normalizing Flows [6.954510776782872]
Post-nonlinear (PNL) causal models stand out as a versatile and adaptable framework for modeling causal relationships.
We introduce CAF-PoNo, harnessing the power of the normalizing flows architecture to enforce the crucial invertibility constraint in PNL models.
Our method precisely reconstructs the hidden noise, which plays a vital role in cause-effect identification.
arXiv Detail & Related papers (2024-07-06T07:19:21Z)
- Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have notable drawbacks:
GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z)
- Studying How to Efficiently and Effectively Guide Models with Explanations [52.498055901649025]
'Model guidance' is the idea of regularizing the models' explanations to ensure that they are "right for the right reasons".
We conduct an in-depth evaluation across various loss functions, attribution methods, models, and 'guidance depths' on the PASCAL VOC 2007 and MS COCO 2014 datasets.
Specifically, we guide the models via bounding box annotations, which are much cheaper to obtain than the commonly used segmentation masks.
arXiv Detail & Related papers (2023-03-21T15:34:50Z)
- DiffusionAD: Norm-guided One-step Denoising Diffusion for Anomaly Detection [89.49600182243306]
We reformulate the reconstruction process using a diffusion model into a noise-to-norm paradigm.
We propose a rapid one-step denoising paradigm, significantly faster than the traditional iterative denoising in diffusion models.
The segmentation sub-network predicts pixel-level anomaly scores using the input image and its anomaly-free restoration.
arXiv Detail & Related papers (2023-03-15T16:14:06Z)
- ER: Equivariance Regularizer for Knowledge Graph Completion [107.51609402963072]
We propose a new regularizer, namely, the Equivariance Regularizer (ER).
ER can enhance the generalization ability of the model by employing the semantic equivariance between the head and tail entities.
The experimental results indicate a clear and substantial improvement over the state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-24T08:18:05Z)
- Training Discrete Deep Generative Models via Gapped Straight-Through Estimator [72.71398034617607]
We propose a Gapped Straight-Through (GST) estimator to reduce the variance without incurring resampling overhead.
This estimator is inspired by the essential properties of Straight-Through Gumbel-Softmax.
Experiments demonstrate that the proposed GST estimator enjoys better performance compared to strong baselines on two discrete deep generative modeling tasks.
arXiv Detail & Related papers (2022-06-15T01:46:05Z)
- Duality-Induced Regularizer for Semantic Matching Knowledge Graph Embeddings [70.390286614242]
We propose a novel regularizer -- namely, DUality-induced RegulArizer (DURA) -- which effectively encourages the entities with similar semantics to have similar embeddings.
Experiments demonstrate that DURA consistently and significantly improves the performance of state-of-the-art semantic matching models.
arXiv Detail & Related papers (2022-03-24T09:24:39Z)
- Deep Recurrent Modelling of Granger Causality with Latent Confounding [0.0]
We propose a deep learning-based approach to model non-linear Granger causality by directly accounting for latent confounders.
We demonstrate the model performance on non-linear time series for which the latent confounder influences the cause and effect with different time lags.
arXiv Detail & Related papers (2022-02-23T03:26:22Z)
- A Distributionally Robust Area Under Curve Maximization Model [1.370633147306388]
We propose two new distributionally robust AUC models (DR-AUC).
DR-AUC models rely on the Kantorovich metric and approximate the AUC with the hinge loss function.
Numerical experiments show that the proposed DR-AUC models perform better in general and in particular improve the worst-case out-of-sample performance.
arXiv Detail & Related papers (2020-02-18T02:50:45Z)
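For context on the hinge-loss approximation mentioned in the DR-AUC summary, here is a minimal sketch of the standard pairwise hinge surrogate for AUC. It does not implement the paper's Kantorovich-metric robust formulation, and all names are illustrative.

```python
import numpy as np

def hinge_auc_surrogate(pos_scores, neg_scores):
    """Convex surrogate for 1 - AUC: the hinge loss averaged over
    every (positive, negative) score pair. It upper-bounds the
    pairwise 0/1 ranking error that exact AUC counts."""
    # All pairwise margins pos_i - neg_j via broadcasting.
    diffs = pos_scores[:, None] - neg_scores[None, :]
    return float(np.mean(np.maximum(0.0, 1.0 - diffs)))

# Well-separated scores (margin >= 1 on every pair) incur zero loss.
loss_sep = hinge_auc_surrogate(np.array([2.0, 3.0]), np.array([0.0, 1.0]))
# Correctly ranked but tight scores are still penalized.
loss_tight = hinge_auc_surrogate(np.array([0.5]), np.array([0.0]))
```

Replacing the non-convex 0/1 indicator with the hinge makes the objective amenable to the robust optimization machinery the paper builds on top of it.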
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.