Context-Specific Likelihood Weighting
- URL: http://arxiv.org/abs/2101.09791v3
- Date: Sat, 27 Feb 2021 09:46:24 GMT
- Title: Context-Specific Likelihood Weighting
- Authors: Nitesh Kumar and Ondřej Kuželka
- Abstract summary: We introduce context-specific likelihood weighting (CS-LW) for approximate inference.
Unlike standard likelihood weighting, CS-LW is based on partial assignments of random variables.
We empirically show that CS-LW is competitive with state-of-the-art algorithms for approximate inference.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sampling is a popular method for approximate inference when exact inference
is impractical. Generally, sampling algorithms do not exploit context-specific
independence (CSI) properties of probability distributions. We introduce
context-specific likelihood weighting (CS-LW), a new sampling methodology,
which besides exploiting the classical conditional independence properties,
also exploits CSI properties. Unlike standard likelihood weighting, CS-LW
is based on partial assignments of random variables and requires fewer samples
for convergence due to reduced sampling variance. Furthermore, the speed
of generating samples increases. Our novel notion of contextual assignments
theoretically justifies CS-LW. We empirically show that CS-LW is competitive
with state-of-the-art algorithms for approximate inference in the presence of a
significant amount of CSIs.
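As a point of reference for the baseline that CS-LW improves on, standard likelihood weighting can be sketched on a toy two-variable network. The network, its probabilities, and all names below are illustrative only and not taken from the paper:

```python
import random

# Toy Bayesian network Rain -> WetGrass; all probabilities are made up.
P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}  # P(WetGrass=true | Rain)

def likelihood_weighting(evidence_wet, n_samples, seed=0):
    """Estimate P(Rain=True | WetGrass=evidence_wet).

    Standard likelihood weighting: evidence variables are never sampled;
    each full sample of the non-evidence variables is weighted by the
    likelihood of the evidence under that sample. CS-LW differs by
    sampling only partial assignments, which this sketch does not show.
    """
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        rain = rng.random() < P_RAIN            # sample the non-evidence variable
        p_evidence = P_WET_GIVEN_RAIN[rain]
        w = p_evidence if evidence_wet else 1.0 - p_evidence
        num += w * rain
        den += w
    return num / den

est = likelihood_weighting(evidence_wet=True, n_samples=100_000)
# Exact posterior: 0.2*0.9 / (0.2*0.9 + 0.8*0.1) ≈ 0.692
```

The weighted average converges to the true posterior, but its variance grows with the mismatch between prior samples and evidence, which is why variance-reduction schemes such as CS-LW need fewer samples.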
Related papers
- Unveiling the Statistical Foundations of Chain-of-Thought Prompting Methods [59.779795063072655]
Chain-of-Thought (CoT) prompting and its variants have gained popularity as effective methods for solving multi-step reasoning problems.
We analyze CoT prompting from a statistical estimation perspective, providing a comprehensive characterization of its sample complexity.
arXiv Detail & Related papers (2024-08-25T04:07:18Z)
- Policy Gradient with Active Importance Sampling [55.112959067035916]
Policy gradient (PG) methods significantly benefit from importance sampling (IS), enabling the effective reuse of previously collected samples.
However, IS is employed in RL as a passive tool for re-weighting historical samples.
We look for the best behavioral policy from which to collect samples to reduce the policy gradient variance.
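The re-weighting that IS performs can be illustrated with a minimal one-dimensional sketch. The distributions and function names below are hypothetical stand-ins, not the paper's policy-gradient setup:

```python
import math
import random

def is_estimate(xs, log_p_target, log_p_behavior, f):
    """Ordinary importance sampling: reweight samples drawn from a
    behavioral distribution by p_target/p_behavior to estimate E_target[f].
    The weight variance, which an actively chosen behavioral policy tries
    to shrink, grows with the mismatch between the two distributions."""
    total = 0.0
    for x in xs:
        w = math.exp(log_p_target(x) - log_p_behavior(x))
        total += w * f(x)
    return total / len(xs)

# Estimate E[X] under N(0,1) (the "target") using samples from N(1,1)
# (the "behavioral" distribution). The shared Gaussian normalizing
# constants cancel in the log-density difference.
rng = random.Random(0)
xs = [rng.gauss(1.0, 1.0) for _ in range(200_000)]
est = is_estimate(xs,
                  log_p_target=lambda x: -0.5 * x * x,
                  log_p_behavior=lambda x: -0.5 * (x - 1.0) ** 2,
                  f=lambda x: x)
# est is close to 0.0, the mean under the target distribution
```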
arXiv Detail & Related papers (2024-05-09T09:08:09Z)
- Invariant Causal Prediction with Local Models [52.161513027831646]
We consider the task of identifying the causal parents of a target variable among a set of candidates from observational data.
We introduce a practical method called L-ICP (Localized Invariant Causal Prediction), which is based on a hypothesis test for parent identification using a ratio of minimum and maximum statistics.
arXiv Detail & Related papers (2024-01-10T15:34:42Z)
- Optimal Multi-Distribution Learning [88.3008613028333]
Multi-distribution learning seeks to learn a shared model that minimizes the worst-case risk across $k$ distinct data distributions.
We propose a novel algorithm that yields an $\varepsilon$-optimal randomized hypothesis with a sample complexity on the order of $(d+k)/\varepsilon^2$.
arXiv Detail & Related papers (2023-12-08T16:06:29Z)
- On the connection between least squares, regularization, and classical shadows [17.633238342851925]
We show that both RLS and CS can be viewed as regularizers for the underdetermined regime.
We evaluate RLS and CS from three distinct angles: the tradeoff in bias and variance, mismatch between the expected and actual measurement distributions, and the interplay between the number of measurements and number of shots per measurement.
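The role of a regularizer in the underdetermined regime can be illustrated with a plain ridge (RLS) sketch in a classical linear setting. The dimensions and variable names are hypothetical, and the paper's quantum-measurement setting is not reproduced here:

```python
import numpy as np

# Underdetermined linear system: more unknowns (d=10) than equations (n=4).
rng = np.random.default_rng(0)
n, d = 4, 10
A = rng.standard_normal((n, d))
theta_true = rng.standard_normal(d)
b = A @ theta_true

# Regularized least squares (ridge): the penalty lam * ||theta||^2 makes
# the normal equations invertible even though A.T @ A is rank-deficient,
# selecting one solution at the cost of a small bias -- the bias/variance
# tradeoff referred to above.
lam = 0.1
theta_rls = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

residual = np.linalg.norm(A @ theta_rls - b)  # small, but nonzero due to lam
```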
arXiv Detail & Related papers (2023-10-25T18:39:08Z)
- Stable Probability Weighting: Large-Sample and Finite-Sample Estimation and Inference Methods for Heterogeneous Causal Effects of Multivalued Treatments Under Limited Overlap [0.0]
I propose new practical large-sample and finite-sample methods for estimating and inferring heterogeneous causal effects.
I develop a general principle called "Stable Probability Weighting".
I also propose new finite-sample inference methods for testing a general class of weak null hypotheses.
arXiv Detail & Related papers (2023-01-13T18:52:18Z)
- BR-SNIS: Bias Reduced Self-Normalized Importance Sampling [11.150337082767862]
Importance Sampling (IS) is a method for approximating expectations under a target distribution using independent samples from a proposal distribution and the associated importance weights.
We propose a new method, BR-SNIS, whose complexity is essentially the same as that of SNIS and which significantly reduces bias without increasing the variance.
We furnish the proposed algorithm with rigorous theoretical results, including new bias, variance and high-probability bounds.
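The self-normalized estimator that BR-SNIS builds on can be sketched in a few lines. The toy target and proposal below are illustrative choices, not from the paper, and the BR-SNIS bias-reduction step itself is not shown:

```python
import math
import random

def snis(xs, log_w, f):
    """Self-normalized importance sampling (SNIS): dividing by the sum of
    the weights lets the target density be known only up to a constant,
    at the cost of an O(1/n) bias -- the bias BR-SNIS aims to reduce."""
    ws = [math.exp(log_w(x)) for x in xs]
    z = sum(ws)
    return sum(w * f(x) for w, x in zip(ws, xs)) / z

# Toy target: N(2,1), known only up to a constant; proposal: N(0, sd=2).
# log_w is log(target) - log(proposal); shared constants cancel after
# self-normalization.
rng = random.Random(1)
xs = [rng.gauss(0.0, 2.0) for _ in range(200_000)]
log_w = lambda x: -0.5 * (x - 2.0) ** 2 + 0.5 * (x / 2.0) ** 2
est = snis(xs, log_w, lambda x: x)  # close to the target mean, 2.0
```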
arXiv Detail & Related papers (2022-07-13T17:14:10Z)
- Adversarial sampling of unknown and high-dimensional conditional distributions [0.0]
In this paper, both the sampling method and the inference of the underlying distribution are handled with a data-driven method known as generative adversarial networks (GANs).
A GAN trains two competing neural networks to produce a generator that can effectively draw samples from the training-set distribution.
It is shown that all the versions of the proposed algorithm effectively sample the target conditional distribution with minimal impact on the quality of the samples.
arXiv Detail & Related papers (2021-11-08T12:23:38Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Compressing Large Sample Data for Discriminant Analysis [78.12073412066698]
We consider the computational issues due to large sample size within the discriminant analysis framework.
We propose a new compression approach for reducing the number of training samples for linear and quadratic discriminant analysis.
arXiv Detail & Related papers (2020-05-08T05:09:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.