Polynomial-Time Algorithms for Counting and Sampling Markov Equivalent
DAGs
- URL: http://arxiv.org/abs/2012.09679v1
- Date: Thu, 17 Dec 2020 15:47:15 GMT
- Title: Polynomial-Time Algorithms for Counting and Sampling Markov Equivalent
DAGs
- Authors: Marcel Wienöbst, Max Bannach, and Maciej Liśkiewicz
- Abstract summary: Counting and uniform sampling of directed acyclic graphs (DAGs) are fundamental in causal analysis.
In this paper, we show that these tasks can be performed in polynomial time, solving a long-standing open problem in this area.
Our algorithms are effective and easily implementable.
- Score: 6.252236971703546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Counting and uniform sampling of directed acyclic graphs (DAGs) from a Markov
equivalence class are fundamental tasks in graphical causal analysis. In this
paper, we show that these tasks can be performed in polynomial time, solving a
long-standing open problem in this area. Our algorithms are effective and
easily implementable. Experimental results show that the algorithms
significantly outperform state-of-the-art methods.
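As an illustration of the two tasks, the brute-force sketch below enumerates the Markov equivalence class of a small DAG (all DAGs with the same skeleton and v-structures), then counts its members and draws one uniformly at random. It is exponential in the number of edges and serves only as a baseline for intuition; it is not the paper's polynomial-time algorithm, and all function names are illustrative.

```python
# Brute-force baseline: enumerate all orientations of the skeleton and keep
# those that are acyclic and have the same v-structures as the input DAG.
import random
from itertools import product

def is_acyclic(vertices, edges):
    """Kahn's algorithm: True iff the directed graph has no cycle."""
    indeg = {v: 0 for v in vertices}
    out = {v: [] for v in vertices}
    for u, v in edges:
        out[u].append(v)
        indeg[v] += 1
    stack = [v for v in vertices if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for w in out[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                stack.append(w)
    return seen == len(vertices)

def v_structures(vertices, edges):
    """Set of colliders a -> c <- b with a and b non-adjacent."""
    adj = {frozenset(e) for e in edges}
    parents = {v: [u for u, w in edges if w == v] for v in vertices}
    vs = set()
    for c in vertices:
        ps = parents[c]
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                a, b = ps[i], ps[j]
                if frozenset((a, b)) not in adj:
                    vs.add((frozenset((a, b)), c))
    return vs

def markov_equivalence_class(vertices, dag_edges):
    """All DAGs with the same skeleton and v-structures as dag_edges."""
    skeleton = [tuple(sorted(e)) for e in {frozenset(e) for e in dag_edges}]
    target_vs = v_structures(vertices, dag_edges)
    members = []
    for orientation in product([False, True], repeat=len(skeleton)):
        edges = [(u, v) if keep else (v, u)
                 for (u, v), keep in zip(skeleton, orientation)]
        if is_acyclic(vertices, edges) and v_structures(vertices, edges) == target_vs:
            members.append(edges)
    return members

# Example: the chain 0 -> 1 -> 2 has 3 Markov equivalent DAGs.
V = [0, 1, 2]
mec = markov_equivalence_class(V, [(0, 1), (1, 2)])
print(len(mec))            # counting: 3
print(random.choice(mec))  # uniform sampling from the class
```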
Related papers
- EKM: An exact, polynomial-time algorithm for the $K$-medoids problem [1.9405875431318445]
We present EKM: a novel algorithm for solving this problem exactly with worst-case $O\left(N^{K+1}\right)$ complexity.
EKM is developed following recent advances in transformational programming, using formal program transformation steps.
We show that the wall-clock run time of our algorithm matches the worst-case time complexity analysis on synthetic datasets.
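For intuition about the exact setting, the hedged sketch below solves K-medoids by exhaustively scoring every K-subset of points as medoids, which mirrors the $O\left(N^{K+1}\right)$-style worst case quoted above. It is a naive baseline, not the EKM algorithm, and the helper names are made up for this example.

```python
# Exhaustive exact K-medoids: try every K-subset of points as the medoid set
# and keep the one minimizing total distance to the nearest medoid.
from itertools import combinations
import numpy as np

def exact_k_medoids(points, k):
    pts = np.asarray(points, dtype=float)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)  # N x N
    best_cost, best_medoids = float("inf"), None
    for medoids in combinations(range(len(pts)), k):
        cost = dist[:, list(medoids)].min(axis=1).sum()
        if cost < best_cost:
            best_cost, best_medoids = cost, medoids
    return best_medoids, best_cost

medoids, cost = exact_k_medoids([[0, 0], [0, 1], [5, 5], [5, 6]], k=2)
print(medoids, cost)  # one medoid per cluster, total cost 2.0
```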
arXiv Detail & Related papers (2024-05-16T15:11:37Z) - Provably Efficient Representation Learning with Tractable Planning in
Low-Rank POMDP [81.00800920928621]
We study representation learning in partially observable Markov Decision Processes (POMDPs)
We first present an algorithm for decodable POMDPs that combines maximum likelihood estimation (MLE) and optimism in the face of uncertainty (OFU)
We then show how to adapt this algorithm to also work in the broader class of $\gamma$-observable POMDPs.
arXiv Detail & Related papers (2023-06-21T16:04:03Z) - Efficient Enumeration of Markov Equivalent DAGs [6.288056740658763]
In this paper, we present the first algorithm that enumerates the DAGs of a Markov equivalence class with linear-time delay.
On the theoretical side, we show that our algorithm can be generalized to enumerate DAGs represented by models that incorporate background knowledge.
We also provide intriguing insights into Markov equivalence itself.
arXiv Detail & Related papers (2023-01-28T14:35:39Z) - On the Finite-Time Performance of the Knowledge Gradient Algorithm [1.713291434132985]
The knowledge gradient (KG) algorithm is a popular and effective algorithm for the best arm identification (BAI) problem.
We present new theoretical results about the finite-time performance of the KG algorithm.
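As background, the sketch below computes the standard knowledge-gradient acquisition value for independent Gaussian arms with known observation noise (the textbook closed form). It only illustrates how KG chooses the next arm to sample; the parameter names and values are illustrative and not taken from the paper's analysis.

```python
# Knowledge-gradient scores for independent Gaussian arms with known noise.
import numpy as np
from scipy.stats import norm

def kg_scores(mu, sigma, noise_var):
    """KG value of one more measurement of each arm, given posterior means
    `mu` and posterior standard deviations `sigma`."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    # Predicted reduction in posterior std after one additional observation.
    sigma_tilde = sigma**2 / np.sqrt(sigma**2 + noise_var)
    scores = np.empty_like(mu)
    for x in range(len(mu)):
        best_other = np.max(np.delete(mu, x))
        z = -abs(mu[x] - best_other) / sigma_tilde[x]
        scores[x] = sigma_tilde[x] * (z * norm.cdf(z) + norm.pdf(z))
    return scores

mu = [0.3, 0.5, 0.45]
sigma = [1.0, 0.2, 0.6]
print(np.argmax(kg_scores(mu, sigma, noise_var=1.0)))  # arm to sample next
```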
arXiv Detail & Related papers (2022-06-14T13:42:05Z) - Polynomial-Time Algorithms for Counting and Sampling Markov Equivalent
DAGs with Applications [6.03124479597323]
Counting and sampling directed acyclic graphs from a Markov equivalence class are fundamental tasks in causal analysis.
We show that these tasks can be performed in polynomial time, solving a long-standing open problem in this area.
Our algorithms are effective and easily implementable.
arXiv Detail & Related papers (2022-05-05T13:56:13Z) - First-Order Algorithms for Nonlinear Generalized Nash Equilibrium
Problems [88.58409977434269]
We consider the problem of computing an equilibrium in a class of nonlinear generalized Nash equilibrium problems (NGNEPs)
Our contribution is to provide two simple first-order algorithmic frameworks based on the quadratic penalty method and the augmented Lagrangian method.
We provide nonasymptotic theoretical guarantees for these algorithms.
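To make the quadratic-penalty idea concrete, here is a minimal single-player sketch: a constraint g(x) <= 0 is folded into the objective as (rho/2)·max(0, g(x))² and plain gradient steps are taken while rho grows. The toy problem, step-size schedule, and function names are assumptions for illustration only, not the paper's NGNEP framework.

```python
# Generic quadratic-penalty method with first-order (gradient) inner steps.
import numpy as np

def quadratic_penalty_min(grad_f, g, grad_g, x0, rho0=1.0, rho_growth=2.0,
                          outer_iters=10, inner_iters=200, lr=0.05):
    x, rho = np.asarray(x0, float), rho0
    for _ in range(outer_iters):
        step = lr / (1.0 + rho)  # shrink the step as the penalty stiffens
        for _ in range(inner_iters):
            viol = max(0.0, g(x))  # constraint violation max(0, g(x))
            x = x - step * (grad_f(x) + rho * viol * grad_g(x))
        rho *= rho_growth
    return x

# Toy problem: minimize (x - 2)^2 subject to x <= 1 (optimum at x = 1).
x_star = quadratic_penalty_min(
    grad_f=lambda x: 2 * (x - 2),
    g=lambda x: float(x[0] - 1),
    grad_g=lambda x: np.ones_like(x),
    x0=[0.0],
)
print(x_star)  # close to [1.0]
```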
arXiv Detail & Related papers (2022-04-07T00:11:05Z) - Structure learning in polynomial time: Greedy algorithms, Bregman
information, and exponential families [12.936601424755944]
We study a general greedy score-based algorithm for learning DAGs.
We show how recent polynomial-time algorithms for learning DAG models are a special case of this algorithm.
This observation suggests new score functions and optimality conditions based on the duality between Bregman divergences and exponential families.
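A minimal, hedged example of the greedy score-based family discussed above: repeatedly add the single edge that most improves a linear-Gaussian BIC score while keeping the graph acyclic. The score, data, and function names are illustrative choices, not the paper's specific score functions or optimality conditions.

```python
# Greedy forward edge addition for DAG structure learning with a BIC score.
import numpy as np

def local_bic(X, child, parents):
    """BIC contribution of one node given its parent set (linear-Gaussian)."""
    n = X.shape[0]
    y = X[:, child]
    A = np.column_stack([X[:, list(parents)], np.ones(n)]) if parents else np.ones((n, 1))
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    var = max(resid @ resid / n, 1e-12)
    loglik = -0.5 * n * (np.log(2 * np.pi * var) + 1)
    return loglik - 0.5 * np.log(n) * (A.shape[1] + 1)  # parameter penalty

def creates_cycle(parents, u, v):
    """Would adding u -> v create a cycle? Walk up from u through parents;
    reaching v means a directed path v -> ... -> u already exists."""
    stack, seen = [u], set()
    while stack:
        node = stack.pop()
        if node == v:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(parents[node])
    return False

def greedy_dag(X):
    d = X.shape[1]
    parents = {v: set() for v in range(d)}
    score = {v: local_bic(X, v, parents[v]) for v in range(d)}
    while True:
        best_gain, best_edge = 0.0, None
        for u in range(d):
            for v in range(d):
                if u == v or u in parents[v] or creates_cycle(parents, u, v):
                    continue
                gain = local_bic(X, v, parents[v] | {u}) - score[v]
                if gain > best_gain:
                    best_gain, best_edge = gain, (u, v)
        if best_edge is None:
            return parents
        u, v = best_edge
        parents[v].add(u)
        score[v] = local_bic(X, v, parents[v])

# Toy data generated from the chain X0 -> X1 -> X2.
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = 2 * x0 + rng.normal(size=500)
x2 = -x1 + rng.normal(size=500)
print(greedy_dag(np.column_stack([x0, x1, x2])))  # learned parent sets
```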
arXiv Detail & Related papers (2021-10-10T06:37:51Z) - Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
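To ground the bandit view, the sketch below treats each candidate solver as an arm, observes runtimes censored at a cutoff, and selects via an optimistic (UCB-style) rule on the capped loss. The cutoff handling and simulated runtimes are assumptions for illustration, not the paper's exact loss adaptation.

```python
# UCB-style selection over solvers with runtimes censored at a cutoff.
import math
import random

class CensoredRuntimeUCB:
    def __init__(self, n_arms, cutoff):
        self.cutoff = cutoff
        self.counts = [0] * n_arms
        self.mean_loss = [0.0] * n_arms  # running mean of capped runtimes
        self.t = 0

    def select(self):
        self.t += 1
        for a, c in enumerate(self.counts):  # play each arm once first
            if c == 0:
                return a
        # Optimism for minimization: pick the smallest lower confidence bound.
        return min(
            range(len(self.counts)),
            key=lambda a: self.mean_loss[a]
            - self.cutoff * math.sqrt(2 * math.log(self.t) / self.counts[a]),
        )

    def update(self, arm, runtime):
        loss = min(runtime, self.cutoff)  # censoring at the cutoff
        self.counts[arm] += 1
        self.mean_loss[arm] += (loss - self.mean_loss[arm]) / self.counts[arm]

# Simulated solvers with different typical runtimes (seconds).
true_means = [3.0, 1.0, 2.0]
bandit = CensoredRuntimeUCB(n_arms=3, cutoff=5.0)
for _ in range(200):
    arm = bandit.select()
    bandit.update(arm, random.expovariate(1.0 / true_means[arm]))
print(bandit.counts)  # the fastest solver should dominate
```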
arXiv Detail & Related papers (2021-09-13T18:10:52Z) - Quantum-Inspired Algorithms from Randomized Numerical Linear Algebra [53.46106569419296]
We create classical (non-quantum) dynamic data structures supporting queries for recommender systems and least-squares regression.
We argue that the previous quantum-inspired algorithms for these problems are doing leverage or ridge-leverage score sampling in disguise.
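For context, the sketch below computes classical leverage scores via an SVD and samples rescaled rows with probability proportional to them, the randomized linear-algebra primitive referenced above; matrix sizes and the sample count are illustrative.

```python
# Leverage-score row sampling: the sampled, rescaled rows form an unbiased
# sketch of A in the sense that E[(SA)^T (SA)] = A^T A.
import numpy as np

def leverage_scores(A):
    """Row leverage scores: squared row norms of an orthonormal basis of col(A)."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return np.sum(U**2, axis=1)

def sample_rows(A, num_samples, seed=0):
    rng = np.random.default_rng(seed)
    probs = leverage_scores(A)
    probs = probs / probs.sum()
    idx = rng.choice(A.shape[0], size=num_samples, p=probs)
    scale = 1.0 / np.sqrt(num_samples * probs[idx])
    return A[idx] * scale[:, None]

A = np.random.default_rng(1).normal(size=(2000, 5))
SA = sample_rows(A, num_samples=200)
print(np.linalg.norm(SA.T @ SA - A.T @ A) / np.linalg.norm(A.T @ A))  # small
```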
arXiv Detail & Related papers (2020-11-09T01:13:07Z) - Nearly Linear Row Sampling Algorithm for Quantile Regression [54.75919082407094]
We give a row sampling algorithm for the quantile loss function with sample complexity nearly linear in the dimensionality of the data.
Based upon our row sampling algorithm, we give the fastest known algorithm for quantile regression and a graph sparsification algorithm for balanced directed graphs.
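As a pointer to what the quantile loss means here, the sketch below defines the pinball loss and checks that minimizing it over a constant recovers the empirical tau-quantile; the data and grid search are illustrative, and this is not the paper's row sampling algorithm.

```python
# Pinball (quantile) loss and a sanity check that its minimizer over a
# constant is the empirical tau-quantile.
import numpy as np

def quantile_loss(residuals, tau):
    """Pinball loss: tau * r for r >= 0 and (tau - 1) * r for r < 0."""
    r = np.asarray(residuals, dtype=float)
    return np.where(r >= 0, tau * r, (tau - 1) * r).sum()

rng = np.random.default_rng(0)
y = rng.normal(size=1001)
grid = np.linspace(y.min(), y.max(), 2001)
best_c = grid[np.argmin([quantile_loss(y - c, tau=0.9) for c in grid])]
print(best_c, np.quantile(y, 0.9))  # the two values should be close
```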
arXiv Detail & Related papers (2020-06-15T13:40:07Z) - Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP)
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.