Novel Ordering-based Approaches for Causal Structure Learning in the
Presence of Unobserved Variables
- URL: http://arxiv.org/abs/2208.06935v1
- Date: Sun, 14 Aug 2022 23:09:55 GMT
- Title: Novel Ordering-based Approaches for Causal Structure Learning in the
Presence of Unobserved Variables
- Authors: Ehsan Mokhtarian, Mohammadsadegh Khorasani, Jalal Etesami, Negar
Kiyavash
- Abstract summary: We advocate for a novel class of orders called removable orders (r-orders), as they are advantageous over c-orders for structure learning.
We evaluate the performance and the scalability of our proposed approaches on both real-world and randomly generated networks.
- Score: 22.201414668050123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose ordering-based approaches for learning the maximal ancestral graph
(MAG) of a structural equation model (SEM) up to its Markov equivalence class
(MEC) in the presence of unobserved variables. Existing ordering-based methods
in the literature recover a graph through learning a causal order (c-order). We
advocate for a novel class of orders called removable orders (r-orders), as they
are advantageous over c-orders for structure learning. This is because r-orders are
the minimizers of an appropriately defined optimization problem that could be
either solved exactly (using a reinforcement learning approach) or
approximately (using a hill-climbing search). Moreover, the r-orders (unlike
c-orders) are invariant among all the graphs in a MEC and include c-orders as a
subset. Given that the set of r-orders is often significantly larger than the set
of c-orders, it is easier for the optimization to find an r-order
instead of a c-order. We evaluate the performance and the scalability of our
proposed approaches on both real-world and randomly generated networks.
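The hill-climbing variant of the search described above can be sketched generically. The snippet below is a minimal illustration only: it searches over orderings by accepting adjacent swaps that lower a user-supplied cost, with the cost function standing in for the paper's actual r-order objective (the function name and API are hypothetical, not from the paper).

```python
import random

def hill_climb_order(variables, cost, max_iters=1000, seed=0):
    """Greedy hill-climbing over orderings: repeatedly swap a random
    pair of adjacent elements and keep the swap if it does not raise
    `cost`. `cost` is a user-supplied function scoring an ordering
    (here a stand-in for the r-order objective, which is not
    reproduced from the paper)."""
    rng = random.Random(seed)
    order = list(variables)
    rng.shuffle(order)
    best = cost(order)
    for _ in range(max_iters):
        i = rng.randrange(len(order) - 1)
        order[i], order[i + 1] = order[i + 1], order[i]  # propose swap
        new = cost(order)
        if new <= best:
            best = new
        else:  # revert the swap
            order[i], order[i + 1] = order[i + 1], order[i]
    return order, best
```

As a toy usage, taking the cost to be the number of inversions relative to a target order makes the search recover the sorted order; any score over orderings could be plugged in instead.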
Related papers
- Mirror Natural Evolution Strategies [10.495496415022064]
We focus on the theory of zeroth-order optimization, which utilizes first-order and second-order information approximated from zeroth-order queries.
We show that the estimated covariance matrix of MiNES converges to the inverse of the Hessian matrix of the objective function.
arXiv Detail & Related papers (2023-08-01T11:45:24Z)
- Reinforcement Learning Based Query Vertex Ordering Model for Subgraph Matching [58.39970828272366]
Subgraph matching algorithms enumerate all embeddings of a query graph in a data graph G.
The matching order plays a critical role in the time efficiency of these backtracking-based subgraph matching algorithms.
In this paper, for the first time, we apply Reinforcement Learning (RL) and Graph Neural Network (GNN) techniques to generate high-quality matching orders for subgraph matching algorithms.
arXiv Detail & Related papers (2022-01-25T00:10:03Z)
- Discovering Non-monotonic Autoregressive Orderings with Variational Inference [67.27561153666211]
We develop an unsupervised parallelizable learner that discovers high-quality generation orders purely from training data.
We implement the encoder as a Transformer with non-causal attention that outputs permutations in one forward pass.
Empirical results in language modeling tasks demonstrate that our method is context-aware and discovers orderings that are competitive with or even better than fixed orders.
arXiv Detail & Related papers (2021-10-27T16:08:09Z)
- A Deep Generative Model for Matrix Reordering [26.86727566323601]
We develop a generative model that learns a latent space of diverse matrix reorderings of a graph.
We construct an intuitive user interface from the learned latent space by creating a map of various matrix reorderings.
This paper introduces a fundamentally new approach to matrix visualization of a graph, where a machine learning model learns to generate diverse matrix reorderings of a graph.
arXiv Detail & Related papers (2021-10-11T02:55:24Z)
- An Accelerated Variance-Reduced Conditional Gradient Sliding Algorithm for First-order and Zeroth-order Optimization [111.24899593052851]
The conditional gradient algorithm (also known as the Frank-Wolfe algorithm) has recently regained popularity in the machine learning community.
ARCS is the first zeroth-order conditional gradient sliding type algorithm for solving convex problems in zeroth-order optimization.
In first-order optimization, the convergence results of ARCS substantially outperform those of previous algorithms in terms of the number of gradient oracle queries.
arXiv Detail & Related papers (2021-09-18T07:08:11Z)
- PiRank: Learning To Rank via Differentiable Sorting [85.28916333414145]
We propose PiRank, a new class of differentiable surrogates for ranking.
We show that PiRank exactly recovers the desired metrics in the limit of zero temperature.
arXiv Detail & Related papers (2020-12-12T05:07:36Z)
- Regret minimization in stochastic non-convex learning via a proximal-gradient approach [80.59047515124198]
Motivated by applications in machine learning and operations, we study regret minimization in online constrained problems with first-order oracle feedback.
We develop a new proximal-gradient method whose guarantees rest on complexity-reduction techniques.
arXiv Detail & Related papers (2020-10-13T09:22:21Z)
- Zeroth-Order Algorithms for Smooth Saddle-Point Problems [117.44028458220427]
We propose several algorithms to solve saddle-point problems using zeroth-order oracles.
Our analysis shows that our convergence rate is only a $\log n$ factor worse than that of first-order methods.
We also consider a mixed setup and develop 1/2th-order methods that use a zeroth-order oracle for part of the problem.
arXiv Detail & Related papers (2020-09-21T14:26:48Z)
- On Class Orderings for Incremental Learning [36.39530025852268]
We propose a method to compute various orderings for a dataset.
We evaluate a wide range of state-of-the-art incremental learning methods on the proposed orderings.
arXiv Detail & Related papers (2020-07-04T17:07:08Z)
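Several of the related papers above (the MiNES, ARCS, and saddle-point entries) operate with zeroth-order oracles, i.e., they see only function values, not gradients. As a generic illustration of this setting, here is the standard two-point gradient estimator averaged over random directions; this is a textbook sketch, not the estimator used by any specific paper listed.

```python
import math
import random

def zeroth_order_grad(f, x, mu=1e-4, n_dirs=50, seed=0):
    """Estimate the gradient of f at x from function values only,
    using the standard two-point estimator averaged over random unit
    directions u:  g ~ mean_u [ d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u ].
    Generic sketch of the zeroth-order setting, not a specific paper's method."""
    rng = random.Random(seed)
    d = len(x)
    g = [0.0] * d
    for _ in range(n_dirs):
        # Sample a uniformly random direction on the unit sphere.
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(v * v for v in u))
        u = [v / norm for v in u]
        # Two function evaluations per direction (the zeroth-order oracle).
        fp = f([xi + mu * ui for xi, ui in zip(x, u)])
        fm = f([xi - mu * ui for xi, ui in zip(x, u)])
        coeff = d * (fp - fm) / (2.0 * mu)
        g = [gi + coeff * ui / n_dirs for gi, ui in zip(g, u)]
    return g
```

On a smooth function the estimate concentrates around the true gradient as the number of sampled directions grows, which is why zeroth-order methods typically pay an extra dimension- or log-factor cost relative to first-order ones.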
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.