Lower Bounds for Approximate Knowledge Compilation
- URL: http://arxiv.org/abs/2011.13721v1
- Date: Fri, 27 Nov 2020 13:11:32 GMT
- Title: Lower Bounds for Approximate Knowledge Compilation
- Authors: Alexis de Colnet and Stefan Mengel
- Abstract summary: We focus on circuits in deterministic decomposable negation normal form (d-DNNF).
We formalize two notions of approximation: weak approximation and strong approximation.
We show lower bounds for approximation by d-DNNF, complementing the positive results from the literature.
- Score: 7.538482310185135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge compilation studies the trade-off between succinctness and
efficiency of different representation languages. For many languages, there are
known strong lower bounds on the representation size, but recent work shows
that, for some languages, one can bypass these bounds using approximate
compilation. The idea is to compile an approximation of the knowledge for which
the number of errors can be controlled. We focus on circuits in deterministic
decomposable negation normal form (d-DNNF), a compilation language suitable in
contexts such as probabilistic reasoning, as it supports efficient model
counting and probabilistic inference. Moreover, there are known size lower
bounds for d-DNNF which by relaxing to approximation one might be able to
avoid. In this paper we formalize two notions of approximation: weak
approximation which has been studied before in the decision diagram literature
and strong approximation which has been used in recent algorithmic results. We
then show lower bounds for approximation by d-DNNF, complementing the positive
results from the literature.
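As context for why d-DNNF is the target language: determinism and decomposability are exactly what make model counting a single bottom-up pass. The sketch below is a minimal illustration, not code from the paper; the node encoding is hypothetical. It multiplies counts at decomposable AND nodes and sums them at deterministic OR nodes after smoothing for missing variables.
```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass
class Node:
    kind: str                       # "lit", "and", or "or"
    var: int = 0                    # variable index (for "lit")
    positive: bool = True           # literal polarity (for "lit")
    children: Tuple["Node", ...] = ()

def count(node: Node) -> Tuple[int, FrozenSet[int]]:
    """Return (model count over the node's own variables, that variable set)."""
    if node.kind == "lit":
        # A literal has exactly one model over its single variable.
        return 1, frozenset([node.var])
    if node.kind == "and":
        # Decomposability: children mention disjoint variables, so counts multiply.
        total, vs = 1, frozenset()
        for child in node.children:
            n, cvs = count(child)
            assert not (vs & cvs), "AND children must be variable-disjoint"
            total, vs = total * n, vs | cvs
        return total, vs
    if node.kind == "or":
        # Determinism: children accept disjoint model sets, so after
        # "smoothing" each child to the union of variables, counts add.
        results = [count(child) for child in node.children]
        vs = frozenset().union(*(cvs for _, cvs in results))
        return sum(n * 2 ** (len(vs) - len(cvs)) for n, cvs in results), vs
    raise ValueError(node.kind)

# (x1 AND x2) OR (NOT x1 AND x3): deterministic (children disagree on x1)
# and decomposable (each AND splits its variables).
lit = lambda v, pos=True: Node("lit", var=v, positive=pos)
root = Node("or", children=(
    Node("and", children=(lit(1), lit(2))),
    Node("and", children=(lit(1, False), lit(3))),
))
models, vs = count(root)
print(models * 2 ** (3 - len(vs)))  # count over all 3 variables -> 4
```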
Related papers
- Efficient Nearest Neighbor based Uncertainty Estimation for Natural Language Processing Tasks [26.336947440529713]
$k$-Nearest Neighbor Uncertainty Estimation ($k$NN-UE) estimates uncertainty from the distances to an input's nearest neighbors and the label-existence ratio among those neighbors.
Our experiments show that our proposed method outperforms the baselines or recent density-based methods in confidence calibration, selective prediction, and out-of-distribution detection.
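A minimal sketch of the idea, assuming a multiplicative correction that combines mean neighbor distance with label agreement; the exact weighting used by $k$NN-UE is not given in this summary, so the `alpha`/`beta` scaling below is hypothetical.
```python
import numpy as np

def knn_confidence(query, base_prob, pred_label, train_X, train_y,
                   k=10, alpha=1.0, beta=1.0):
    """Scale a model's softmax confidence by neighbor evidence.

    base_prob: the model's probability for pred_label on `query`.
    alpha, beta: hypothetical weights for the two correction terms.
    """
    dists = np.linalg.norm(train_X - query, axis=1)
    nn = np.argsort(dists)[:k]
    mean_dist = dists[nn].mean()
    # "Label-existence ratio": fraction of neighbors carrying the predicted label.
    agree = np.mean(train_y[nn] == pred_label)
    # Far neighbors and low agreement both shrink the confidence.
    return base_prob * np.exp(-alpha * mean_dist) * agree ** beta

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 8)), rng.integers(0, 3, size=100)
print(knn_confidence(X[0], base_prob=0.9, pred_label=y[0], train_X=X, train_y=y))
```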
arXiv Detail & Related papers (2024-07-02T10:33:31Z)
- Eliminating Unintended Stable Fixpoints for Hybrid Reasoning Systems [5.208405959764274]
We introduce a methodology resembling approximation fixpoint theory (AFT) that can exploit previously computed upper bounds to capture semantics more precisely.
We demonstrate our framework's applicability to hybrid MKNF (minimal knowledge and negation as failure) knowledge bases by extending the state-of-the-art approximator.
arXiv Detail & Related papers (2023-07-21T01:08:15Z)
- Differentially-Private Hierarchical Clustering with Provable Approximation Guarantees [79.59010418610625]
We study differentially private approximation algorithms for hierarchical clustering.
We show strong lower bounds for the problem: any $\epsilon$-DP algorithm must exhibit $\Omega(|V|^2/\epsilon)$ additive error for an input dataset $V$.
We propose a private $1+o(1)$ approximation algorithm which also recovers the blocks exactly.
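The paper's clustering algorithm is not described in this summary; as a generic illustration of the epsilon-DP building block such results rest on, the Laplace mechanism below adds noise scaled to sensitivity/epsilon, the standard way quantities like edge weights are privatized.
```python
import numpy as np

def laplace_mechanism(values, sensitivity, epsilon, rng=None):
    """Release `values` with epsilon-DP by adding Laplace(sensitivity/epsilon) noise."""
    rng = rng or np.random.default_rng()
    return values + rng.laplace(scale=sensitivity / epsilon, size=np.shape(values))

# e.g. privatize the edge weights of a similarity graph before clustering
weights = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
print(laplace_mechanism(weights, sensitivity=1.0, epsilon=0.5))
```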
arXiv Detail & Related papers (2023-01-31T19:14:30Z)
- A deep learning approach to the probabilistic numerical solution of path-dependent partial differential equations [1.09650651784511]
Solutions of path-dependent PDEs (PPDEs) admit a probabilistic representation, implemented in the literature by estimating conditional expectations using regression.
We overcome this limitation with deep learning methods, and we show that this setting allows error bounds to be derived for the approximation of conditional expectations.
In comparison with other deep learning approaches, our algorithm appears to be more accurate, especially in large dimensions.
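A minimal sketch of the regression step (architecture and data are illustrative stand-ins, not the paper's setup): fitting a small network with squared loss to simulated pairs recovers the conditional expectation, since the squared-loss minimizer over all functions is x -> E[Y | X = x].
```python
import torch
import torch.nn as nn

# Toy stand-ins: in the PPDE setting X would be path features and Y a payoff
# sampled along simulated paths; here E[Y | X] = sin(X) by construction.
torch.manual_seed(0)
X = torch.randn(4096, 1)
Y = torch.sin(X) + 0.3 * torch.randn_like(X)

net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((net(X) - Y) ** 2).mean()  # squared loss -> conditional expectation
    loss.backward()
    opt.step()

# Compare the fit against the true conditional expectation at x = 0.5.
print(net(torch.tensor([[0.5]])).item(), torch.sin(torch.tensor(0.5)).item())
```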
arXiv Detail & Related papers (2022-09-28T14:34:58Z)
- Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
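A toy numpy illustration of the gap the paper targets (a generic sketch, not the authors' estimator): the exact GP posterior variance at a test point versus a cheaper subset-of-data approximation; the difference is uncertainty induced purely by limited computation.
```python
import numpy as np

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
Xs = np.array([[0.3]])   # test point
noise = 0.01

def posterior_var(Xtrain):
    K = rbf(Xtrain, Xtrain) + noise * np.eye(len(Xtrain))
    ks = rbf(Xs, Xtrain)
    return (rbf(Xs, Xs) - ks @ np.linalg.solve(K, ks.T)).item()

exact = posterior_var(X)         # uses all 200 points
approx = posterior_var(X[:20])   # cheap approximation: subset of data
print(exact, approx, approx - exact)  # the gap is computation-induced
```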
arXiv Detail & Related papers (2022-05-30T22:16:25Z)
- Reasoning Through Memorization: Nearest Neighbor Knowledge Graph Embeddings [29.94706167233985]
kNN-KGE is a new knowledge graph embedding approach that uses pre-trained language models.
We compute the nearest neighbors based on the distance in the entity embedding space from the knowledge store.
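A minimal sketch of the retrieval step, assuming entity embeddings are already available; the plain-array "knowledge store" and Euclidean distance below are simplifying assumptions.
```python
import numpy as np

def nearest_entities(query_emb, store_embs, entity_ids, k=5):
    """Return the k entities closest to the query in embedding space."""
    dists = np.linalg.norm(store_embs - query_emb, axis=1)
    nn = np.argsort(dists)[:k]
    return [(entity_ids[i], dists[i]) for i in nn]

rng = np.random.default_rng(0)
store = rng.normal(size=(1000, 128))          # knowledge store of entity embeddings
ids = [f"entity_{i}" for i in range(1000)]
print(nearest_entities(store[42] + 0.01 * rng.normal(size=128), store, ids))
```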
arXiv Detail & Related papers (2022-01-14T17:35:16Z)
- On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error [65.51757376525798]
The existence of adversarial perturbations has opened an interesting research line on provable robustness.
However, no provable results have been presented to estimate and bound the error committed.
This paper proposes two lightweight strategies to find the minimal adversarial perturbation.
The obtained results show that the proposed strategies approximate the theoretical distance and robustness for samples close to the classification boundary, leading to provable guarantees against any adversarial attack.
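The two strategies themselves are not detailed in this summary; as a generic stand-in, the sketch below bisects the perturbation magnitude along a fixed direction until the predicted label flips, which yields an upper bound on the minimal adversarial perturbation.
```python
import numpy as np

def min_perturbation_along(f, x, direction, hi=10.0, tol=1e-4):
    """Bisect the smallest eps with f(x + eps*direction) != f(x).

    Returns an upper bound on the minimal adversarial perturbation size."""
    direction = direction / np.linalg.norm(direction)
    label = f(x)
    if f(x + hi * direction) == label:
        return None  # no label flip within the search radius
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(x + mid * direction) == label:
            lo = mid
        else:
            hi = mid
    return hi

# Toy linear classifier: label = 1 iff w . x > 0.
w = np.array([1.0, -2.0])
f = lambda x: int(np.dot(w, x) > 0)
x = np.array([2.0, 0.5])                           # f(x) = 1
print(min_perturbation_along(f, x, direction=-w))  # distance to the boundary
```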
arXiv Detail & Related papers (2022-01-04T16:40:03Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Lower bounds in multiple testing: A framework based on derandomized proxies [107.69746750639584]
This paper introduces an analysis strategy based on derandomization, illustrated by applications to various concrete models.
We provide numerical simulations of some of these lower bounds, and show a close relation to the actual performance of the Benjamini-Hochberg (BH) algorithm.
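Since the snippet measures its lower bounds against the Benjamini-Hochberg (BH) procedure, here is the textbook BH step-up rule for FDR control at level alpha (the standard algorithm, not code from the paper).
```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses under the BH step-up rule."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index meeting its threshold
        rejected[order[: k + 1]] = True    # reject the k+1 smallest p-values
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals, alpha=0.05))  # rejects the two smallest
```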
arXiv Detail & Related papers (2020-05-07T19:59:51Z)
- Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
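The Lagrangian decomposition itself is beyond a short snippet; as a simpler baseline for the same task (bounding network outputs over a box of inputs), this sketch propagates intervals through affine layers and ReLUs. This is interval bound propagation, a different and looser technique than the paper's, shown only to make the bounding task concrete.
```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Bound W @ x + b elementwise over the box [lo, hi]."""
    Wp, Wn = np.clip(W, 0, None), np.clip(W, None, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def relu_bounds(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Two-layer toy network; bound its outputs for inputs within +/- 0.1 of x0.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)
x0 = rng.normal(size=4)
lo, hi = x0 - 0.1, x0 + 0.1
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)
print(lo, hi)  # sound (possibly loose) output bounds
```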
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
- Proximity Preserving Binary Code using Signed Graph-Cut [27.098042566421963]
We introduce a binary embedding framework, called Proximity Preserving Code (PPC), which learns similarity and dissimilarity between data points to create a compact and affinity-preserving binary code.
We show that the proposed approximation is superior to the commonly used spectral methods with respect to both accuracy and complexity.
arXiv Detail & Related papers (2020-02-05T13:58:41Z)