Efficient Permutation Discovery in Causal DAGs
- URL: http://arxiv.org/abs/2011.03610v1
- Date: Fri, 6 Nov 2020 21:56:41 GMT
- Title: Efficient Permutation Discovery in Causal DAGs
- Authors: Chandler Squires, Joshua Amaniampong, Caroline Uhler
- Abstract summary: We introduce an efficient algorithm for finding sparse permutations of a directed acyclic graph.
We show that our method with depth $w$ runs in $O(p^{w+3})$ time.
We also compare our algorithm to provably consistent causal structure learning algorithms, such as the PC algorithm, GES, and GSP, and show that our method achieves comparable performance with a shorter runtime.
- Score: 9.22466799504763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of learning a directed acyclic graph (DAG) up to Markov
equivalence is equivalent to the problem of finding a permutation of the
variables that induces the sparsest graph. Without additional assumptions, this
task is known to be NP-hard. Building on the minimum degree algorithm for
sparse Cholesky decomposition, but utilizing DAG-specific problem structure, we
introduce an efficient algorithm for finding such sparse permutations. We show
that on jointly Gaussian distributions, our method with depth $w$ runs in
$O(p^{w+3})$ time. We compare our method with $w = 1$ to algorithms for finding
sparse elimination orderings of undirected graphs, and show that taking
advantage of DAG-specific problem structure leads to a significant improvement
in the discovered permutation. We also compare our algorithm to provably
consistent causal structure learning algorithms, such as the PC algorithm, GES,
and GSP, and show that our method achieves comparable performance with a
shorter runtime. Thus, our method can be used on its own for causal structure
discovery. Finally, we show that there exist dense graphs on which our method
achieves almost perfect performance, so that unlike most existing causal
structure learning algorithms, the situations in which our algorithm achieves
both good performance and good runtime are not limited to sparse graphs.
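To make the idea concrete, here is a minimal sketch of a minimum-degree-style greedy search for a sparse causal order on jointly Gaussian data, roughly in the spirit of the depth $w = 1$ setting; it is a hypothetical illustration, not the paper's exact algorithm, and the function name `sparsest_sink_permutation` and the `threshold` parameter are assumptions introduced here. At each step it treats the remaining variable with the fewest strong partial correlations as a sink, records those variables as its candidate parents, and marginalises the sink out of the covariance matrix.

```python
import numpy as np

def sparsest_sink_permutation(cov, threshold=0.1):
    """Greedy, minimum-degree-style search for a sparse causal order.

    Hypothetical sketch: repeatedly pick the remaining variable with the
    fewest strong partial correlations, treat it as a sink (placed last
    among the remaining variables), record its candidate parents, and
    marginalise it out of the covariance matrix.
    """
    remaining = list(range(cov.shape[0]))                # variables still to be placed
    order, parents = [], {}
    sub = np.asarray(cov, dtype=float).copy()
    while remaining:
        prec = np.linalg.pinv(sub)                       # precision matrix of remaining variables
        d = np.sqrt(np.abs(np.diag(prec)))
        pcorr = -prec / np.outer(d, d)                   # partial correlations given the rest
        np.fill_diagonal(pcorr, 0.0)
        support = np.abs(pcorr) > threshold              # "strong" partial correlations
        degrees = support.sum(axis=1)
        k = int(np.argmin(degrees))                      # sparsest candidate sink
        v = remaining[k]
        parents[v] = [remaining[j] for j in np.flatnonzero(support[k])]
        order.insert(0, v)                               # sinks accumulate at the end of the order
        remaining.pop(k)
        sub = np.delete(np.delete(sub, k, axis=0), k, axis=1)
    return order, parents

# Toy usage on a linear Gaussian chain x0 -> x1 -> x2; the chain and its
# reversal are Markov equivalent, so either order inducing two edges is sparsest.
rng = np.random.default_rng(0)
x0 = rng.normal(size=5000)
x1 = 0.8 * x0 + rng.normal(size=5000)
x2 = 0.8 * x1 + rng.normal(size=5000)
order, parents = sparsest_sink_permutation(np.cov(np.stack([x0, x1, x2])))
print(order, parents)
```

Each iteration of this sketch inverts the covariance of the remaining variables, so the loop costs roughly $O(p^4)$ overall, which is in the same ballpark as the stated $O(p^{w+3})$ bound at $w = 1$.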
Related papers
- A Differentially Private Clustering Algorithm for Well-Clustered Graphs [6.523602840064548]
We provide an efficient $(\epsilon, \delta)$-DP algorithm tailored specifically for such graphs.
Our algorithm works for well-clustered graphs with $k$ nearly-balanced clusters.
arXiv Detail & Related papers (2024-03-21T11:57:16Z) - Fast Approximation of Similarity Graphs with Kernel Density Estimation [12.321755440062732]
We present a new algorithm for constructing a similarity graph from a set $X$ of data points in $\mathbb{R}^d$.
Our presented algorithm is based on the kernel density estimation problem, and is applicable for arbitrary kernel functions.
arXiv Detail & Related papers (2023-10-21T00:32:47Z) - Optimizing NOTEARS Objectives via Topological Swaps [41.18829644248979]
In this paper, we develop an effective method that builds on a set of candidate algorithms.
At the inner level, given a topological order, we utilize off-the-shelf solvers that handle the constraints.
The proposed method significantly improves the scores of other algorithms.
arXiv Detail & Related papers (2023-05-26T21:49:37Z) - Differentially-Private Hierarchical Clustering with Provable
Approximation Guarantees [79.59010418610625]
We study differentially private approximation algorithms for hierarchical clustering.
We show strong lower bounds for the problem: any $\epsilon$-DP algorithm must exhibit $O(|V|^2/\epsilon)$ additive error for an input dataset.
We propose a private $1+o(1)$ approximation algorithm which also recovers the blocks exactly.
arXiv Detail & Related papers (2023-01-31T19:14:30Z) - Causal Bandits without Graph Learning [28.021500949026766]
We develop an efficient algorithm for finding the parent node of the reward node using atomic interventions.
We extend our algorithm to the case when the reward node has multiple parents.
arXiv Detail & Related papers (2023-01-26T20:27:14Z) - A Metaheuristic Algorithm for Large Maximum Weight Independent Set
Problems [58.348679046591265]
Given a node-weighted graph, find a set of independent (mutually nonadjacent) nodes whose node-weight sum is maximum.
Some of the graphs arising in this application are large, having hundreds of thousands of nodes and hundreds of millions of edges.
We develop a new local search algorithm, which is a metaheuristic in the greedy randomized adaptive search framework.
arXiv Detail & Related papers (2022-03-28T21:34:16Z) - Simple Stochastic and Online Gradient Descent Algorithms for Pairwise
Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
arXiv Detail & Related papers (2021-11-23T18:10:48Z) - Structure learning in polynomial time: Greedy algorithms, Bregman
information, and exponential families [12.936601424755944]
We study a general greedy score-based algorithm for learning DAGs.
We show how recent polynomial-time algorithms for learning DAG models are a special case of this algorithm.
This observation suggests new score functions and optimality conditions based on the duality between Bregman divergences and exponential families.
arXiv Detail & Related papers (2021-10-10T06:37:51Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve the complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Efficient Computation of Expectations under Spanning Tree Distributions [67.71280539312536]
We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models.
Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms.
arXiv Detail & Related papers (2020-08-29T14:58:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.