Integer Programming for Causal Structure Learning in the Presence of
Latent Variables
- URL: http://arxiv.org/abs/2102.03129v1
- Date: Fri, 5 Feb 2021 12:10:16 GMT
- Title: Integer Programming for Causal Structure Learning in the Presence of
Latent Variables
- Authors: Rui Chen, Sanjeeb Dash, Tian Gao
- Abstract summary: We propose a novel exact score-based method that solves an integer programming (IP) formulation and returns a score-maximizing ancestral ADMG for a set of continuous variables.
In particular, we generalize the state-of-the-art IP model for DAG learning problems and derive new classes of valid inequalities to formalize the IP-based ADMG learning model.
- Score: 28.893119229428713
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of finding an ancestral acyclic directed mixed graph (ADMG) that
represents the causal relationships between a set of variables is an important
area of research for causal inference. However, most existing score-based
structure learning methods focus on learning the directed acyclic graph (DAG)
without latent variables. A number of score-based methods have recently been
proposed for ADMG learning, yet they are heuristic in nature and do not
guarantee an optimal solution. We propose a novel exact score-based method that
solves an integer programming (IP) formulation and returns a score-maximizing
ancestral ADMG for a set of continuous variables. In particular, we generalize
the state-of-the-art IP model for DAG learning problems and derive new classes
of valid inequalities to formalize the IP-based ADMG learning model.
Empirically, our model can be solved efficiently for medium-sized problems and
achieves better accuracy than state-of-the-art score-based methods as well as
benchmark constraint-based methods.
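For intuition, here is a minimal Python sketch (assuming the pulp package) of the kind of parent-set IP model for DAG learning that the paper generalizes: binary variables select one candidate parent set per node, and "cluster" inequalities enforce acyclicity. The local scores below are random placeholders standing in for data-derived scores such as BIC; the paper's actual model further handles bidirected edges and new classes of valid inequalities for ancestral ADMGs, which this sketch omits.

```python
# Minimal sketch of score-based DAG learning as an integer program.
# Assumptions: pulp is installed; local scores are random placeholders.
import random
from itertools import combinations

import pulp

random.seed(0)
nodes = ["A", "B", "C"]

def parent_sets(v):
    """All candidate parent sets for node v (only feasible for tiny graphs)."""
    others = [u for u in nodes if u != v]
    return [frozenset(s) for r in range(len(others) + 1)
            for s in combinations(others, r)]

# Placeholder local scores; in practice these come from data (e.g., BIC).
score = {(v, S): random.uniform(-2.0, 0.0) - 0.1 * len(S)
         for v in nodes for S in parent_sets(v)}

prob = pulp.LpProblem("dag_learning_sketch", pulp.LpMaximize)
x = {(v, S): pulp.LpVariable(f"x_{v}_{''.join(sorted(S))}", cat="Binary")
     for v in nodes for S in parent_sets(v)}

# Maximize the total score of the selected parent sets.
prob += pulp.lpSum(score[k] * x[k] for k in x)

# Each node selects exactly one parent set.
for v in nodes:
    prob += pulp.lpSum(x[(v, S)] for S in parent_sets(v)) == 1

# Cluster inequalities: every cluster C must contain at least one node whose
# chosen parent set lies entirely outside C; this rules out directed cycles.
for r in range(2, len(nodes) + 1):
    for C in combinations(nodes, r):
        Cset = set(C)
        prob += pulp.lpSum(x[(v, S)] for v in C for S in parent_sets(v)
                           if not (S & Cset)) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for v in nodes:
    chosen = next(S for S in parent_sets(v) if pulp.value(x[(v, S)]) > 0.5)
    print(v, "<-", sorted(chosen))
```

There are exponentially many cluster inequalities, so practical solvers add them lazily as cutting planes; the paper's contribution is deriving analogous valid inequalities for the ancestral ADMG setting.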
Related papers
- DIVE: Subgraph Disagreement for Graph Out-of-Distribution Generalization [44.291382840373]
This paper addresses the challenge of out-of-distribution generalization in graph machine learning.
Traditional graph learning algorithms assume identically distributed training and test data, and falter in real-world scenarios where this assumption fails.
A principal factor contributing to this suboptimal performance is the inherent simplicity bias of neural networks.
arXiv Detail & Related papers (2024-08-08T12:08:55Z)
- Scalable Structure Learning for Sparse Context-Specific Systems [0.0]
We present an algorithm for learning context-specific models that scales to hundreds of variables.
Our method is shown to perform well on synthetic data and real-world examples.
arXiv Detail & Related papers (2024-02-12T16:28:52Z)
- Universal Domain Adaptation from Foundation Models: A Baseline Study [58.51162198585434]
We make empirical studies of state-of-the-art UniDA methods using foundation models.
We introduce CLIP distillation, a parameter-free method specifically designed to distill target knowledge from CLIP models.
Although simple, our method outperforms previous approaches in most benchmark tasks.
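As a rough illustration of the distillation idea (not the paper's exact parameter-free method), a generic temperature-scaled knowledge-distillation loss in PyTorch might look as follows; the logits are random stand-ins for student outputs and CLIP-derived teacher outputs.

```python
# Generic knowledge-distillation loss sketch; all tensors are stand-ins.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student."""
    t = temperature
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    # 'batchmean' matches the mathematical definition of KL divergence;
    # the t*t factor keeps gradient scale comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * t * t

# Usage with random stand-in logits (batch of 8, 10 classes):
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
print(float(distillation_loss(student_logits, teacher_logits)))
```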
arXiv Detail & Related papers (2023-05-18T16:28:29Z)
- GEC: A Unified Framework for Interactive Decision Making in MDP, POMDP, and Beyond [101.5329678997916]
We study sample efficient reinforcement learning (RL) under the general framework of interactive decision making.
We propose a novel complexity measure, generalized eluder coefficient (GEC), which characterizes the fundamental tradeoff between exploration and exploitation.
We show that RL problems with low GEC form a remarkably rich class, which subsumes low Bellman eluder dimension problems, bilinear class, low witness rank problems, PO-bilinear class, and generalized regular PSR.
arXiv Detail & Related papers (2022-11-03T16:42:40Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee in model-based RL (MBRL).
The derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- A General Framework for Sample-Efficient Function Approximation in Reinforcement Learning [132.45959478064736]
We propose a general framework that unifies model-based and model-free reinforcement learning.
We propose a novel estimation function with decomposable structural properties for optimization-based exploration.
Under our framework, we propose a new sample-efficient algorithm, OPtimization-based ExploRation with Approximation (OPERA).
arXiv Detail & Related papers (2022-09-30T17:59:16Z)
- Federated Learning Aggregation: New Robust Algorithms with Guarantees [63.96013144017572]
Federated learning has recently been proposed for distributed model training at the edge.
This paper presents a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework.
We derive novel aggregation algorithms that are able to modify their model architecture by differentiating client contributions according to the value of their losses.
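A hedged sketch of the loss-aware aggregation idea: weight each client's parameters by a decreasing function of its reported loss. The softmax weighting below is an illustrative choice, not the paper's exact algorithm.

```python
# Illustrative loss-weighted federated averaging; lower-loss clients
# contribute more. Weighting scheme is a hypothetical choice.
import numpy as np

def aggregate(client_params, client_losses, temperature=1.0):
    """client_params: list of 1-D parameter vectors, one per client.
    client_losses: scalar training loss reported by each client."""
    losses = np.asarray(client_losses, dtype=float)
    # Softmax over negative losses: lower loss -> larger weight.
    w = np.exp(-losses / temperature)
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, np.asarray(client_params)))

# Toy usage with three clients of two parameters each:
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
losses = [0.2, 0.5, 2.0]
print(aggregate(params, losses))
```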
arXiv Detail & Related papers (2022-05-22T16:37:53Z)
- Pretrained Cost Model for Distributed Constraint Optimization Problems [37.79733538931925]
Distributed Constraint Optimization Problems (DCOPs) are an important subclass of optimization problems.
We propose a novel directed acyclic graph schema representation for DCOPs and leverage the Graph Attention Networks (GATs) to embed graph representations.
Our model, GAT-PCM, is then pretrained with optimally labelled data in an offline manner, so as to boost a broad range of DCOP algorithms.
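A minimal sketch of a GAT-based cost model in this spirit, assuming torch and torch_geometric are available; the layer sizes, pooling, and readout are illustrative guesses rather than the actual GAT-PCM architecture.

```python
# Sketch of a graph-attention cost model: embed a (DAG-structured) graph
# with GAT layers, pool node embeddings, and regress a scalar cost.
import torch
from torch_geometric.nn import GATConv, global_mean_pool

class CostModel(torch.nn.Module):
    def __init__(self, in_dim=16, hidden=32):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden, heads=2, concat=True)
        self.gat2 = GATConv(hidden * 2, hidden, heads=1)
        self.readout = torch.nn.Linear(hidden, 1)  # predicted cost

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.gat1(x, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        return self.readout(global_mean_pool(h, batch)).squeeze(-1)

# Toy usage: a 3-node graph with directed edges 0->1 and 0->2.
x = torch.randn(3, 16)
edge_index = torch.tensor([[0, 0], [1, 2]])
batch = torch.zeros(3, dtype=torch.long)
print(CostModel()(x, edge_index, batch))
```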
arXiv Detail & Related papers (2021-12-08T09:24:10Z)
- Joint Stochastic Approximation and Its Application to Learning Discrete Latent Variable Models [19.07718284287928]
We show that the difficulty of obtaining reliable gradients for the inference model and the drawback of indirectly optimizing the target log-likelihood can be gracefully addressed.
We propose to directly maximize the target log-likelihood and simultaneously minimize the inclusive divergence between the posterior and the inference model.
The resulting learning algorithm is called joint SA (JSA).
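Read literally, the two goals stated above can be transcribed as the following pair of objectives (a hedged transcription, not necessarily the paper's exact formulation; h is the discrete latent variable, theta the generative model, phi the inference model):

```latex
% Maximize the target log-likelihood while minimizing the inclusive KL
% divergence from the true posterior to the inference model.
\max_{\theta} \ \log p_\theta(x)
\qquad \text{and} \qquad
\min_{\phi} \ \mathrm{KL}\big(p_\theta(h \mid x) \,\big\|\, q_\phi(h \mid x)\big)
```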
arXiv Detail & Related papers (2020-05-28T13:50:08Z)
- Polynomial-Time Exact MAP Inference on Discrete Models with Global Dependencies [83.05591911173332]
The junction tree algorithm is the most general solution for exact MAP inference with run-time guarantees.
We propose a new graph transformation technique via node cloning that ensures a run-time for solving our target problem independent of the form of the corresponding clique tree.
arXiv Detail & Related papers (2019-12-27T13:30:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.