Polynomial-Time Algorithms for Counting and Sampling Markov Equivalent
DAGs with Applications
- URL: http://arxiv.org/abs/2205.02654v3
- Date: Mon, 21 Aug 2023 16:31:24 GMT
- Title: Polynomial-Time Algorithms for Counting and Sampling Markov Equivalent
DAGs with Applications
- Authors: Marcel Wienöbst, Max Bannach, Maciej Liśkiewicz
- Abstract summary: Counting and sampling directed acyclic graphs from a Markov equivalence class are fundamental tasks in causal analysis.
We show that these tasks can be performed in polynomial time, solving a long-standing open problem in this area.
Our algorithms are effective and easily implementable.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Counting and sampling directed acyclic graphs from a Markov equivalence class
are fundamental tasks in graphical causal analysis. In this paper we show that
these tasks can be performed in polynomial time, solving a long-standing open
problem in this area. Our algorithms are effective and easily implementable. As
we show in experiments, these breakthroughs make thought-to-be-infeasible
strategies in active learning of causal structures and causal effect
identification with regard to a Markov equivalence class practically
applicable.
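For intuition about what the paper makes fast, here is a minimal brute-force baseline (exponential time, emphatically not the paper's polynomial-time method): enumerate the consistent DAG extensions of a CPDAG by trying every orientation of its undirected edges, keeping those that are acyclic and create no new v-structures; counting and uniform sampling then follow trivially. All helper names and the CPDAG encoding (directed plus undirected edge lists) are this sketch's own conventions.

```python
# Brute-force baseline: NOT the paper's polynomial-time algorithm.
from itertools import product
import random

def consistent_extensions(nodes, directed, undirected):
    """Yield every DAG (as an edge list) in the equivalence class of the
    CPDAG given by `directed` and `undirected` edge lists."""
    skeleton = {frozenset(e) for e in directed + undirected}

    def v_structures(edges):
        # v-structure: a -> c <- b with a, b non-adjacent in the skeleton.
        parents = {}
        for u, v in edges:
            parents.setdefault(v, set()).add(u)
        return {(a, b, c)
                for c, ps in parents.items()
                for a in ps for b in ps
                if a < b and frozenset((a, b)) not in skeleton}

    def is_acyclic(edges):
        # Kahn's algorithm: the graph is a DAG iff every node gets processed.
        indeg = {n: 0 for n in nodes}
        out = {n: [] for n in nodes}
        for u, v in edges:
            out[u].append(v)
            indeg[v] += 1
        queue = [n for n in nodes if indeg[n] == 0]
        visited = 0
        while queue:
            n = queue.pop()
            visited += 1
            for m in out[n]:
                indeg[m] -= 1
                if indeg[m] == 0:
                    queue.append(m)
        return visited == len(nodes)

    target = v_structures(directed)
    for bits in product([0, 1], repeat=len(undirected)):
        edges = directed + [(u, v) if b else (v, u)
                            for (u, v), b in zip(undirected, bits)]
        if is_acyclic(edges) and v_structures(edges) == target:
            yield edges

# CPDAG of the path a - b - c (fully undirected, no v-structures):
members = list(consistent_extensions(["a", "b", "c"], [],
                                     [("a", "b"), ("b", "c")]))
print(len(members))            # 3 (a->b<-c is excluded: new v-structure)
print(random.choice(members))  # uniform sample from the class
```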
Related papers
- On the Markov Property of Neural Algorithmic Reasoning: Analyses and
Methods [94.72563337153268]
We present ForgetNet, which does not use historical embeddings and thus is consistent with the Markov nature of the tasks.
We also introduce G-ForgetNet, which uses a gating mechanism to allow for the selective integration of historical embeddings.
Our experiments, based on the CLRS-30 algorithmic reasoning benchmark, demonstrate that both ForgetNet and G-ForgetNet achieve better generalization capability than existing methods.
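As an illustration of the general idea (hypothetical shapes and parameter names, not the exact G-ForgetNet formulation), a sigmoid gate computed from the current and historical embeddings can decide per dimension how much history is mixed back in; a gate near 0 recovers the purely Markovian, history-free update:

```python
# Illustrative gating sketch; shapes and names are assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_update(h_cur, h_hist, W_g, b_g):
    # Per-dimension gate computed from both embeddings.
    g = sigmoid(np.concatenate([h_cur, h_hist]) @ W_g + b_g)
    return h_cur + g * h_hist

rng = np.random.default_rng(0)
d = 8
h = gated_update(rng.normal(size=d), rng.normal(size=d),
                 0.1 * rng.normal(size=(2 * d, d)), np.zeros(d))
print(h.shape)  # (8,)
```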
arXiv Detail & Related papers (2024-03-07T22:35:22Z) - Unleashing the Potential of Regularization Strategies in Learning with
Noisy Labels [65.92994348757743]
We demonstrate that a simple baseline using cross-entropy loss, combined with widely used regularization strategies can outperform state-of-the-art methods.
Our findings suggest that employing a combination of regularization strategies can be more effective than intricate algorithms in tackling the challenges of learning with noisy labels.
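One ingredient of such a simple baseline, shown here as a minimal sketch: cross-entropy against labels smoothed toward the uniform distribution. The smoothing value and toy inputs are illustrative, not the paper's exact recipe.

```python
# Label-smoothed cross-entropy; eps=0.1 is an illustrative choice.
import numpy as np

def smoothed_cross_entropy(logits, labels, eps=0.1):
    n, k = logits.shape
    # Numerically stable log-softmax.
    logp = logits - logits.max(axis=1, keepdims=True)
    logp -= np.log(np.exp(logp).sum(axis=1, keepdims=True))
    # Smoothed targets: (1 - eps) on the given label, eps spread uniformly.
    target = np.full((n, k), eps / k)
    target[np.arange(n), labels] += 1.0 - eps
    return -(target * logp).sum(axis=1).mean()

logits = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])
labels = np.array([0, 1])
print(smoothed_cross_entropy(logits, labels))
```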
arXiv Detail & Related papers (2023-07-11T05:58:20Z) - A Tutorial Introduction to Reinforcement Learning [1.9544213396776275]
We present a brief survey of Reinforcement Learning (RL), with particular emphasis on Stochastic Approximation (SA) as a unifying theme.
The scope of the paper includes Markov Reward Processes, Markov Decision Processes, Stochastic Approximation algorithms, and widely used algorithms such as Temporal Difference Learning and $Q$-learning.
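To make one of the surveyed algorithms concrete, here is a minimal tabular Q-learning sketch on a toy 5-state chain MDP; the environment and hyperparameters are made up, while the update rule is the standard one.

```python
# Tabular Q-learning on a toy chain; environment is an assumption.
import random

def q_learning(n_states=5, n_actions=2, episodes=2000,
               alpha=0.1, gamma=0.9, eps=0.1):
    # Action 1 moves right, action 0 moves left; reward 1 on reaching the end.
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            if random.random() < eps:  # epsilon-greedy exploration
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Q-learning update: bootstrap from the greedy value at s_next.
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

for s, row in enumerate(q_learning()):
    print(s, [round(q, 2) for q in row])
```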
arXiv Detail & Related papers (2023-04-03T08:50:58Z) - Efficient Enumeration of Markov Equivalent DAGs [6.288056740658763]
In this paper, we present the first algorithm that enumerates the DAGs of a Markov equivalence class with linear-time delay.
On the theoretical side, we show that our algorithm can be generalized to enumerate DAGs represented by models that incorporate background knowledge.
We also provide intriguing insights into Markov equivalence itself.
arXiv Detail & Related papers (2023-01-28T14:35:39Z) - Scalable Intervention Target Estimation in Linear Models [52.60799340056917]
Current approaches to causal structure learning either work with known intervention targets or use hypothesis testing to discover the unknown intervention targets.
This paper proposes a scalable and efficient algorithm that consistently identifies all intervention targets.
The proposed algorithm can be used to also update a given observational Markov equivalence class into the interventional Markov equivalence class.
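As a toy illustration of the problem setting (emphatically not the paper's algorithm): in a linear Gaussian SEM, an intervention perturbs the precision-matrix entries involving the target, so a naive heuristic can flag variables whose precision rows change most between the observational and interventional samples. Note that this naive score also flags neighbors of the true target, which is part of why a carefully designed, consistent algorithm is needed.

```python
# Toy heuristic for illustration only; thresholds and names are assumptions.
import numpy as np

def candidate_targets(X_obs, X_int, thresh=0.5):
    P_obs = np.linalg.inv(np.cov(X_obs, rowvar=False))
    P_int = np.linalg.inv(np.cov(X_int, rowvar=False))
    scores = np.abs(P_obs - P_int).max(axis=1)  # largest change per row
    return [i for i, s in enumerate(scores) if s > thresh], scores

rng = np.random.default_rng(1)
n, d = 5000, 4
e = rng.normal(size=(n, d))
X = e.copy()                  # observational: X0 -> X1 -> X2, X3 isolated
X[:, 1] += 1.5 * X[:, 0]
X[:, 2] += 1.5 * X[:, 1]
Xi = e.copy()                 # intervention on X1: cut its edge from X0
Xi[:, 1] = 2.0 * rng.normal(size=n)
Xi[:, 2] += 1.5 * Xi[:, 1]
targets, scores = candidate_targets(X, Xi)
print(targets, np.round(scores, 2))  # flags 1 (and its parent 0)
```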
arXiv Detail & Related papers (2021-11-15T03:16:56Z) - Scaling up Continuous-Time Markov Chains Helps Resolve
Underspecification [42.97840843148334]
We develop an approximate likelihood method for learning continuous-time Markov chains, which can scale to hundreds of items and is orders of magnitude faster than previous methods.
We demonstrate the effectiveness of our approach on synthetic and real cancer data.
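For context, here is a minimal sketch of the exact quantity such approximations target: for a continuous-time Markov chain with rate matrix Q, the probability of being in state j at time t given state i at time 0 is expm(Q*t)[i, j], and computing this matrix exponential is what becomes expensive as the state space grows. The 3-state rate matrix and observations below are made up.

```python
# Exact CTMC transition likelihood via the matrix exponential (toy example).
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.0, 0.7, 0.3],   # off-diagonals are transition rates;
              [0.2, -0.5, 0.3],   # each row sums to zero
              [0.1, 0.4, -0.5]])

def log_likelihood(Q, observations):
    """Sum of log transition probabilities over (i, j, dt) triples."""
    return sum(np.log(expm(Q * dt)[i, j]) for i, j, dt in observations)

obs = [(0, 1, 0.5), (1, 1, 1.0), (1, 2, 2.0)]
print(log_likelihood(Q, obs))
```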
arXiv Detail & Related papers (2021-07-06T21:14:49Z) - Polynomial-Time Algorithms for Counting and Sampling Markov Equivalent
DAGs [6.252236971703546]
Counting and uniform sampling of directed acyclic graphs (DAGs) are fundamental in causal analysis.
In this paper, we show that these tasks can be performed in polynomial time, solving a long-standing open problem in this area.
Our algorithms are effective and easily implementable.
arXiv Detail & Related papers (2020-12-17T15:47:15Z) - Quantum-Inspired Algorithms from Randomized Numerical Linear Algebra [53.46106569419296]
We create classical (non-quantum) dynamic data structures supporting queries for recommender systems and least-squares regression.
We argue that the previous quantum-inspired algorithms for these problems are doing leverage or ridge-leverage score sampling in disguise.
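Ridge leverage scores have a closed form, tau_i = a_i^T (A^T A + lambda*I)^{-1} a_i, and rows are then sampled with probability proportional to tau_i. A minimal numpy sketch of computing them and sampling (function and variable names are this sketch's own):

```python
# Ridge leverage score computation and proportional row sampling.
import numpy as np

def ridge_leverage_scores(A, lam):
    G_inv = np.linalg.inv(A.T @ A + lam * np.eye(A.shape[1]))
    # Row-wise quadratic forms a_i^T G_inv a_i, computed in one einsum.
    return np.einsum("ij,jk,ik->i", A, G_inv, A)

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
tau = ridge_leverage_scores(A, lam=1.0)
p = tau / tau.sum()                      # sampling distribution over rows
rows = rng.choice(len(A), size=10, p=p)  # sampled row indices
print(np.round(tau[:5], 3), rows)
```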
arXiv Detail & Related papers (2020-11-09T01:13:07Z) - Provably Efficient Exploration for Reinforcement Learning Using
Unsupervised Learning [96.78504087416654]
Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems, we investigate when this paradigm is provably efficient.
We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm.
arXiv Detail & Related papers (2020-03-15T19:23:59Z) - Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP).
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z)