On Exact Sampling in the Two-Variable Fragment of First-Order Logic
- URL: http://arxiv.org/abs/2302.02730v2
- Date: Sat, 6 May 2023 13:43:31 GMT
- Title: On Exact Sampling in the Two-Variable Fragment of First-Order Logic
- Authors: Yuanhong Wang, Juhua Pu, Yuyi Wang, and Ondřej Kuželka
- Abstract summary: We show that there exists a sampling algorithm for $\mathbf{FO}^2$ that runs in time polynomial in the domain size.
Our proposed method is constructive, and the resulting sampling algorithms have potential applications in various areas.
- Score: 8.784424696800214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the sampling problem for first-order logic proposed
recently by Wang et al. -- how to efficiently sample a model of a given
first-order sentence on a finite domain? We extend their result for the
universally-quantified subfragment of two-variable logic $\mathbf{FO}^2$
($\mathbf{UFO}^2$) to the entire fragment of $\mathbf{FO}^2$. Specifically, we
prove the domain-liftability under sampling of $\mathbf{FO}^2$, meaning that
there exists a sampling algorithm for $\mathbf{FO}^2$ that runs in time
polynomial in the domain size. We then further show that this result continues
to hold even in the presence of counting constraints, such as $\forall
x\exists_{=k} y: \varphi(x,y)$ and $\exists_{=k} x\forall y: \varphi(x,y)$, for
some quantifier-free formula $\varphi(x,y)$. Our proposed method is
constructive, and the resulting sampling algorithms have potential applications
in various areas, including the uniform generation of combinatorial structures
and sampling in statistical-relational models such as Markov logic networks and
probabilistic logic programs.
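As a concrete (if naive) illustration of the sampling problem, the sketch below enumerates all models of the counting sentence $\forall x \exists_{=k} y: E(x,y)$ over a small finite domain and draws one uniformly at random. This brute force runs in time exponential in the domain size and is only a correctness baseline; the paper's lifted algorithm achieves the same distribution in polynomial time. Function names here are our own illustration, not the authors' code.

```python
import itertools
import random

def models_forall_exists_eq_k(n, k):
    """Enumerate all binary relations E on domain {0, ..., n-1} that satisfy
    the counting sentence  forall x exists_{=k} y: E(x, y)."""
    domain = range(n)
    # For each x, choose exactly k out-neighbours; the relation is their union.
    per_row = list(itertools.combinations(domain, k))
    for rows in itertools.product(per_row, repeat=n):
        yield frozenset((x, y) for x in domain for y in rows[x])

def sample_model(n, k, rng=random):
    """Exact uniform sampling by exhaustive enumeration (exponential in n).
    Only a baseline; a domain-lifted sampler avoids materialising the model
    space altogether."""
    all_models = list(models_forall_exists_eq_k(n, k))
    return rng.choice(all_models)

# Domain size 3, k = 1: each element has exactly one E-successor,
# so there are C(3,1)^3 = 27 models.
print(len(list(models_forall_exists_eq_k(3, 1))))
```

Every model returned satisfies the counting constraint by construction, which makes the enumeration trivially sound; the point of the paper is that this exponential blow-up is avoidable for all of $\mathbf{FO}^2$.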
Related papers
- Simple and Nearly-Optimal Sampling for Rank-1 Tensor Completion via Gauss-Jordan [49.1574468325115]
We revisit the sample and computational complexity of completing a rank-1 tensor in $\otimes_{i=1}^{N} \mathbb{R}^{d}$.
We present a characterization of the problem which admits an algorithm amounting to Gauss-Jordan on a pair of random linear systems.
arXiv Detail & Related papers (2024-08-10T04:26:19Z) - Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture that sets it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that, in the worst case, exponential dependence on the number of heads $m$ is unavoidable.
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - Testing Closeness of Multivariate Distributions via Ramsey Theory [40.926523210945064]
We investigate the statistical task of closeness (or equivalence) testing for multidimensional distributions.
Specifically, given sample access to two unknown distributions $\mathbf{p}, \mathbf{q}$ on $\mathbb{R}^d$, we want to distinguish between the case that $\mathbf{p} = \mathbf{q}$ versus $\|\mathbf{p} - \mathbf{q}\|_{\mathcal{A}_k} > \epsilon$.
Our main result is the first closeness tester for this problem with sub-learning sample complexity in any fixed dimension.
arXiv Detail & Related papers (2023-11-22T04:34:09Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient Descent [50.659479930171585]
We study a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C \cdot \mathrm{OPT} + \epsilon$ with high probability.
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Tight Bounds on the Hardness of Learning Simple Nonparametric Mixtures [9.053430799456587]
We study the problem of learning nonparametric distributions in a finite mixture.
We establish tight bounds on the sample complexity for learning the component distributions in such models.
arXiv Detail & Related papers (2022-03-28T23:53:48Z) - TURF: A Two-factor, Universal, Robust, Fast Distribution Learning Algorithm [64.13217062232874]
One of the most powerful and successful modalities of distribution estimation approximates every distribution to an $\ell_1$ distance essentially at most a constant times larger than that of its closest $t$-piece degree-$d$ polynomial.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z) - Structured Logconcave Sampling with a Restricted Gaussian Oracle [23.781520510778716]
We give algorithms for sampling several structured logconcave families to high accuracy.
We further develop a reduction framework, inspired by proximal point methods in convex optimization.
arXiv Detail & Related papers (2020-10-07T01:43:07Z) - Proving Non-Inclusion of B\"uchi Automata based on Monte Carlo Sampling [19.09848789158933]
We present a specific algorithm for proving B\"uchi automata non-inclusion $\mathcal{L}(\mathcal{A}) \not\subseteq \mathcal{L}(\mathcal{B})$.
$\mathsf{IMC}^2$ is a fast and reliable way to find counterexamples to B\"uchi automata inclusion.
arXiv Detail & Related papers (2020-07-05T10:17:02Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we give an improved algorithm with lower sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z) - A Randomized Algorithm to Reduce the Support of Discrete Measures [79.55586575988292]
Given a discrete probability measure supported on $N$ atoms and a set of $n$ real-valued functions, there exists a probability measure supported on a subset of at most $n+1$ of the original $N$ atoms that preserves the integrals of all $n$ functions.
We give a simple geometric characterization of barycenters via negative cones and derive a randomized algorithm that computes this new measure by "greedy geometric sampling".
We then study its properties, and benchmark it on synthetic and real-world data to show that it can be very beneficial in the $N \gg n$ regime.
arXiv Detail & Related papers (2020-06-02T16:38:36Z)
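The existence statement in the entry above is the classical Carathéodory argument, which can be made constructive by repeatedly stepping along a null-space direction of the moment matrix until some weight vanishes. The sketch below is our own illustration of that deterministic reduction (not the paper's randomized "greedy geometric sampling" algorithm), assuming NumPy is available.

```python
import numpy as np

def caratheodory_reduce(weights, F):
    """Reduce a discrete probability measure to at most n+1 atoms while
    preserving the integrals of the n functions whose values on the atoms
    form the rows of F (shape n x N).  Classical deterministic Caratheodory
    elimination, shown here only to illustrate the existence result."""
    w = np.asarray(weights, dtype=float).copy()
    n, _ = F.shape
    active = np.flatnonzero(w > 0)
    while active.size > n + 1:
        # Rows: the n function values on active atoms, plus a row of ones
        # so that the total mass is also preserved.
        A = np.vstack([F[:, active], np.ones(active.size)])
        # A has n+1 rows and > n+1 columns, so it has a nontrivial null
        # space; the last right-singular vector is a null direction.
        _, _, Vt = np.linalg.svd(A)
        d = Vt[-1]
        if not np.any(d > 1e-12):
            d = -d  # flip so that a positive direction exists
        pos = d > 1e-12
        # Step along d until some active weight hits zero; moving in the
        # null space keeps all n integrals and the mass unchanged.
        t = np.min(w[active][pos] / d[pos])
        w[active] -= t * d
        w[w < 1e-12] = 0.0
        active = np.flatnonzero(w > 0)
    return w
```

Each iteration removes at least one atom, so at most $N - (n+1)$ small SVDs are needed; the paper's randomized algorithm targets exactly the $N \gg n$ regime where this matters.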
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.