Probabilistic Box Embeddings for Uncertain Knowledge Graph Reasoning
- URL: http://arxiv.org/abs/2104.04597v1
- Date: Fri, 9 Apr 2021 21:01:52 GMT
- Title: Probabilistic Box Embeddings for Uncertain Knowledge Graph Reasoning
- Authors: Xuelu Chen, Michael Boratko, Muhao Chen, Shib Sankar Dasgupta, Xiang
Lorraine Li, Andrew McCallum
- Abstract summary: We propose BEUrRE, a novel uncertain knowledge graph embedding method with calibrated probabilistic semantics.
Experiments on two benchmark datasets show that BEUrRE consistently outperforms baselines on confidence prediction and fact ranking.
- Score: 36.34170367603253
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge bases often consist of facts which are harvested from a variety of
sources, many of which are noisy and some of which conflict, resulting in a
level of uncertainty for each triple. Knowledge bases are also often
incomplete, prompting the use of embedding methods to generalize from known
facts; however, existing embedding methods only model triple-level uncertainty,
and reasoning results lack global consistency. To address these shortcomings,
we propose BEUrRE, a novel uncertain knowledge graph embedding method with
calibrated probabilistic semantics. BEUrRE models each entity as a box (i.e.
axis-aligned hyperrectangle) and relations between two entities as affine
transforms on the head and tail entity boxes. The geometry of the boxes allows
for efficient calculation of intersections and volumes, endowing the model with
calibrated probabilistic semantics and facilitating the incorporation of
relational constraints. Extensive experiments on two benchmark datasets show
that BEUrRE consistently outperforms baselines on confidence prediction and
fact ranking due to its probabilistic calibration and ability to capture
high-order dependencies among facts.
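The geometric mechanics described in the abstract (entity boxes, affine relation transforms, and intersection volumes yielding probabilities) can be sketched with hard, non-smoothed boxes. This is a simplified illustration, not the paper's implementation: BEUrRE learns its parameters and smooths box boundaries for gradient-based training, and all names below (`confidence`, the `scale`/`shift` relation parameters) are illustrative assumptions.

```python
import numpy as np

def box_volume(lo, hi):
    """Volume of an axis-aligned box; zero if empty along any dimension."""
    return float(np.prod(np.maximum(hi - lo, 0.0)))

def box_intersection(lo1, hi1, lo2, hi2):
    """Intersection of two axis-aligned boxes (may be empty)."""
    return np.maximum(lo1, lo2), np.minimum(hi1, hi2)

def confidence(head, tail, relation):
    """Conditional probability of the tail box given the relation-transformed
    head box: intersection volume divided by tail volume."""
    # Apply the relation's affine transform (per-dimension scale and shift)
    # to the head entity's box.
    h_lo = relation["scale"] * head["lo"] + relation["shift"]
    h_hi = relation["scale"] * head["hi"] + relation["shift"]
    lo, hi = box_intersection(h_lo, h_hi, tail["lo"], tail["hi"])
    v_tail = box_volume(tail["lo"], tail["hi"])
    return box_volume(lo, hi) / v_tail if v_tail > 0 else 0.0

# Two 2-D entity boxes and an identity relation: the transformed head
# [0,1]^2 overlaps the tail [0.5,1.5]^2 in a quarter of the tail's volume.
head = {"lo": np.array([0.0, 0.0]), "hi": np.array([1.0, 1.0])}
tail = {"lo": np.array([0.5, 0.5]), "hi": np.array([1.5, 1.5])}
rel = {"scale": np.array([1.0, 1.0]), "shift": np.array([0.0, 0.0])}
print(confidence(head, tail, rel))  # 0.25
```

Because volumes and intersections are cheap to compute in closed form for axis-aligned boxes, such a score can be treated directly as a calibrated fact confidence rather than an arbitrary ranking score.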
Related papers
- Calibration-Aware Bayesian Learning [37.82259435084825]
This paper proposes an integrated framework, referred to as calibration-aware Bayesian neural networks (CA-BNNs).
It applies data-dependent or data-independent regularizers while optimizing over a variational distribution, as in Bayesian learning.
Numerical results validate the advantages of the proposed approach in terms of expected calibration error (ECE) and reliability diagrams.
arXiv Detail & Related papers (2023-05-12T14:19:15Z)
- On data-driven chance constraint learning for mixed-integer optimization problems [0.0]
We develop a Chance Constraint Learning (CCL) methodology with a focus on mixed-integer linear optimization problems.
CCL makes use of linearizable machine learning models to estimate conditional quantiles of the learned variables.
An open-access software has been developed to be used by practitioners.
arXiv Detail & Related papers (2022-07-08T11:54:39Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should account for several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- BayesIMP: Uncertainty Quantification for Causal Data Fusion [52.184885680729224]
We study the causal data fusion problem, where datasets pertaining to multiple causal graphs are combined to estimate the average treatment effect of a target variable.
We introduce a framework which combines ideas from probabilistic integration and kernel mean embeddings to represent interventional distributions in the reproducing kernel Hilbert space.
arXiv Detail & Related papers (2021-06-07T10:14:18Z)
- Evidential Turing Processes [11.021440340896786]
We introduce an original combination of evidential deep learning, neural processes, and neural Turing machines.
We evaluate our method on three image classification benchmarks and two neural net architectures.
arXiv Detail & Related papers (2021-06-02T15:09:20Z)
- Tractable Inference in Credal Sentential Decision Diagrams [116.6516175350871]
Probabilistic sentential decision diagrams are logic circuits where the inputs of disjunctive gates are annotated by probability values.
We develop the credal sentential decision diagrams, a generalisation of their probabilistic counterpart that allows for replacing the local probabilities with credal sets of mass functions.
For a first empirical validation, we consider a simple application based on noisy seven-segment display images.
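The idea of replacing a gate's single probability with a credal set can be illustrated on one disjunctive (mixture) gate. Assuming independent interval-valued inputs, the gate's value p*a + (1-p)*b is linear in p for fixed a and b, so its bounds are attained at interval endpoints; the function name and interval values below are illustrative, not taken from the paper.

```python
def or_gate_interval(p_int, a_int, b_int):
    """Propagate probability intervals through a disjunctive gate
    value = p*a + (1-p)*b, with p in [pl, pu], a in [al, au], b in [bl, bu].

    For fixed p, the value is minimized at (al, bl) and maximized at
    (au, bu), since both mixture weights are nonnegative; it is then
    linear in p, so each bound is attained at an endpoint of [pl, pu]."""
    pl, pu = p_int
    al, au = a_int
    bl, bu = b_int
    lower = min(p * al + (1 - p) * bl for p in (pl, pu))
    upper = max(p * au + (1 - p) * bu for p in (pl, pu))
    return lower, upper

# A gate mixing a low-probability branch [0.2, 0.3] and a
# high-probability branch [0.8, 0.9] with weight p in [0.4, 0.6].
print(or_gate_interval((0.4, 0.6), (0.2, 0.3), (0.8, 0.9)))
```

A full credal circuit would chain such interval propagations bottom-up through the diagram, which is what makes set-valued inference tractable on these structures.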
arXiv Detail & Related papers (2020-08-19T16:04:34Z)
- Modal Uncertainty Estimation via Discrete Latent Representation [4.246061945756033]
We introduce a deep learning framework that learns the one-to-many mappings between the inputs and outputs, together with faithful uncertainty measures.
Our framework demonstrates significantly more accurate uncertainty estimation than the current state-of-the-art methods.
arXiv Detail & Related papers (2020-07-25T05:29:34Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.