Reframed GES with a Neural Conditional Dependence Measure
- URL: http://arxiv.org/abs/2206.08531v1
- Date: Fri, 17 Jun 2022 03:29:08 GMT
- Title: Reframed GES with a Neural Conditional Dependence Measure
- Authors: Xinwei Shen, Shengyu Zhu, Jiji Zhang, Shoubo Hu, Zhitang Chen
- Abstract summary: We revisit the Greedy Equivalence Search (GES) algorithm, which is widely cited as a score-based algorithm for learning the Markov equivalence class (MEC) of the underlying causal structure.
We present a reframing of the GES algorithm, which is more flexible than the standard score-based version.
We propose a neural conditional dependence measure, which utilizes the expressive power of deep neural networks.
- Score: 20.47061693587848
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a nonparametric setting, the causal structure is often identifiable only
up to Markov equivalence, and for the purpose of causal inference, it is useful
to learn a graphical representation of the Markov equivalence class (MEC). In
this paper, we revisit the Greedy Equivalence Search (GES) algorithm, which is
widely cited as a score-based algorithm for learning the MEC of the underlying
causal structure. We observe that in order to make the GES algorithm consistent
in a nonparametric setting, it is not necessary to design a scoring metric that
evaluates graphs. Instead, it suffices to plug in a consistent estimator of a
measure of conditional dependence to guide the search. We therefore present a
reframing of the GES algorithm, which is more flexible than the standard
score-based version and readily lends itself to the nonparametric setting with
a general measure of conditional dependence. In addition, we propose a neural
conditional dependence (NCD) measure, which utilizes the expressive power of
deep neural networks to characterize conditional independence in a
nonparametric manner. We establish the optimality of the reframed GES algorithm
under standard assumptions and the consistency of using our NCD estimator to
decide conditional independence. Together these results justify the proposed
approach. Experimental results demonstrate the effectiveness of our method in
causal discovery, as well as the advantages of using our NCD measure over
kernel-based measures.
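The key observation above is that the search only needs a consistent conditional dependence estimate, not a graph score. The sketch below illustrates this with partial correlation as a stand-in for the paper's NCD measure; the function names, the toy chain, and the decision threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def partial_correlation(x, y, z):
    """Partial correlation of x and y given the columns of z.

    A stand-in for the paper's neural conditional dependence (NCD)
    measure: the reframed search only needs *some* consistent
    conditional dependence estimate to decide its steps.
    """
    if z.shape[1] > 0:
        # Residualize x and y on the conditioning set via least squares.
        bx, *_ = np.linalg.lstsq(z, x, rcond=None)
        by, *_ = np.linalg.lstsq(z, y, rcond=None)
        x = x - z @ bx
        y = y - z @ by
    return float(np.corrcoef(x, y)[0, 1])

def depends(x, y, z, threshold=0.1):
    """Threshold the measure to decide conditional dependence."""
    return abs(partial_correlation(x, y, z)) > threshold

# Toy chain X -> Z -> Y: X and Y are marginally dependent but
# conditionally independent given Z.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
zv = 2.0 * x + rng.normal(size=2000)
y = -1.5 * zv + rng.normal(size=2000)

print(depends(x, y, np.empty((2000, 0))))   # dependent marginally
print(depends(x, y, zv.reshape(-1, 1)))     # independent given Z
```

In a GES-style forward phase, a test like `depends` would decide whether inserting an edge is warranted given the current adjacency sets, which is where the NCD measure slots in.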
Related papers
- kNN Algorithm for Conditional Mean and Variance Estimation with Automated Uncertainty Quantification and Variable Selection [8.429136647141487]
We introduce a kNN-based regression method that combines the scalability and adaptability of traditional non-parametric kNN models.
This method focuses on accurately estimating the conditional mean and variance of random response variables.
It is particularly notable in biomedical applications as demonstrated in two case studies.
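A minimal sketch of the kNN idea described above, estimating the conditional mean and variance from the nearest neighbours of a query point; the cited paper's automated uncertainty quantification and variable selection are not reproduced, and all names and parameters here are illustrative.

```python
import numpy as np

def knn_conditional_moments(X, y, x_query, k=25):
    """Estimate E[Y | X = x] and Var[Y | X = x] from the k nearest
    neighbours of the query point."""
    dist = np.linalg.norm(X - x_query, axis=1)  # Euclidean distances
    nbrs = y[np.argsort(dist)[:k]]              # responses of k nearest
    return nbrs.mean(), nbrs.var(ddof=1)

# Heteroscedastic toy data: Y = X^2 + |X| * noise, so both the
# conditional mean and the conditional variance depend on X.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(5000, 1))
y = X[:, 0] ** 2 + np.abs(X[:, 0]) * rng.normal(size=5000)

mean_hat, var_hat = knn_conditional_moments(X, y, np.array([1.0]), k=50)
# At x = 1 the true conditional mean and variance are both 1.
print(mean_hat, var_hat)
```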
arXiv Detail & Related papers (2024-02-02T18:54:18Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
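A minimal sketch of a likelihood-ratio confidence sequence, assuming a unit-variance Gaussian likelihood and a running-mean plug-in estimator (both illustrative choices, not the paper's construction): every parameter whose running likelihood ratio against the plug-in predictions stays below 1/alpha is kept, which gives time-uniform coverage by Ville's inequality.

```python
import numpy as np

def lr_confidence_set(x, alpha=0.05, grid=None):
    """Confidence set from a running likelihood ratio for the mean of a
    unit-variance Gaussian. The numerator plugs in the running mean
    mu_hat_{t-1} (a predictable estimator sequence); Ville's inequality
    then gives time-uniform 1 - alpha coverage of the true mean."""
    if grid is None:
        grid = np.linspace(-3.0, 3.0, 601)
    n = len(x)
    # Predictable plug-ins: mu_hat_0 = 0, mu_hat_t = mean(x_1..x_t).
    mu_hat = np.concatenate([[0.0], np.cumsum(x)[:-1] / np.arange(1, n)])
    log_num = -0.5 * np.sum((x - mu_hat) ** 2)
    log_den = -0.5 * ((x[None, :] - grid[:, None]) ** 2).sum(axis=1)
    log_ratio = log_num - log_den
    # Keep every theta whose likelihood ratio stays below 1/alpha.
    return grid[log_ratio < np.log(1.0 / alpha)]

rng = np.random.default_rng(5)
x = rng.normal(1.0, 1.0, size=300)
cs = lr_confidence_set(x)
print(cs.min(), cs.max())  # a short interval around the true mean 1.0
```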
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- CoLiDE: Concomitant Linear DAG Estimation [12.415463205960156]
We deal with the problem of learning directed acyclic graph (DAG) structure from observational data adhering to a linear structural equation model.
We propose a new convex score function for sparsity-aware learning of DAGs.
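As a toy illustration of score-based linear DAG learning, the sketch below evaluates a NOTEARS-style least-squares score with an l1 sparsity penalty plus a polynomial acyclicity check; this is an assumed stand-in, not CoLiDE's concomitant (noise-scale-aware) score, and all names are illustrative.

```python
import numpy as np

def dag_score(W, X, lam=0.1):
    """Sparsity-aware least-squares score for a candidate weighted
    adjacency matrix W under a linear SEM X = X W + noise; `lam`
    weights the l1 sparsity penalty."""
    n = X.shape[0]
    residual = X - X @ W
    return (residual ** 2).sum() / (2 * n) + lam * np.abs(W).sum()

def is_acyclic(W, tol=1e-8):
    """W is acyclic iff tr((I + W*W/d)^d) == d (a polynomial variant of
    the NOTEARS acyclicity characterization)."""
    d = W.shape[0]
    M = np.eye(d) + (W * W) / d
    return abs(np.trace(np.linalg.matrix_power(M, d)) - d) < tol

# Two-variable example generated by the edge X1 -> X2 with weight 2.
rng = np.random.default_rng(2)
x1 = rng.normal(size=1000)
x2 = 2.0 * x1 + 0.1 * rng.normal(size=1000)
X = np.column_stack([x1, x2])

W_true = np.array([[0.0, 2.0], [0.0, 0.0]])
W_empty = np.zeros((2, 2))
print(is_acyclic(W_true))                              # True
print(dag_score(W_true, X) < dag_score(W_empty, X))    # True
```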
arXiv Detail & Related papers (2023-10-04T15:32:27Z)
- Multi-kernel Correntropy-based Orientation Estimation of IMUs: Gradient Descent Methods [3.8286082196845466]
We propose two algorithms: correntropy-based gradient descent (CGD) and correntropy-based decoupled orientation estimation (CDOE).
Traditional methods rely on the mean squared error (MSE) criterion, making them vulnerable to external acceleration and magnetic interference.
New algorithms demonstrate significantly lower computational complexity than Kalman filter-based approaches.
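The MSE-vs-correntropy contrast underlying these algorithms can be shown with a minimal location-estimation sketch, assuming a unit kernel bandwidth; this is not the paper's CGD/CDOE orientation pipeline, only the robustness mechanism it relies on.

```python
import numpy as np

def neg_correntropy_grad(theta, x, sigma=1.0):
    """Gradient of -J(theta), where J is the correntropy objective
    J(theta) = mean(exp(-(x - theta)^2 / (2 sigma^2))). The Gaussian
    kernel weight w decays for large residuals, so outliers barely
    influence the update (unlike the MSE gradient)."""
    r = x - theta
    w = np.exp(-r ** 2 / (2 * sigma ** 2))
    return -(w * r).mean() / sigma ** 2

def estimate_location(x, lr=0.5, steps=200):
    theta = np.median(x)  # any rough initialization works here
    for _ in range(steps):
        theta -= lr * neg_correntropy_grad(theta, x)
    return theta

# 90% clean samples near 0, 10% gross outliers at 50 (playing the role
# of external acceleration / magnetic interference in the IMU setting).
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 0.5, 900), np.full(100, 50.0)])

mse_estimate = x.mean()                # pulled far toward the outliers
mcc_estimate = estimate_location(x)    # stays near the clean mode
print(mse_estimate, mcc_estimate)
```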
arXiv Detail & Related papers (2023-04-13T13:57:33Z)
- An evaluation framework for dimensionality reduction through sectional curvature [59.40521061783166]
In this work, we aim to introduce the first fully unsupervised performance metric for dimensionality reduction.
To test its feasibility, this metric has been used to evaluate the performance of the most commonly used dimension reduction algorithms.
A new parameterized problem instance generator has been constructed in the form of a function generator.
arXiv Detail & Related papers (2023-03-17T11:59:33Z)
- Validation Diagnostics for SBI algorithms based on Normalizing Flows [55.41644538483948]
This work proposes easy to interpret validation diagnostics for multi-dimensional conditional (posterior) density estimators based on NF.
It also offers theoretical guarantees based on results of local consistency.
This work should help the design of better specified models or drive the development of novel SBI-algorithms.
arXiv Detail & Related papers (2022-11-17T15:48:06Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) arises in applications where multiple estimates of the same unknown are averaged for improved performance.
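A minimal sketch of the Gauss-Markov baseline this paper starts from: weighted least squares on a heteroscedastic linear model, with the inverse noise variances as weights (the model sizes and names are illustrative).

```python
import numpy as np

def wls_estimator(A, y, weights):
    """Weighted least squares: theta_hat = (A^T W A)^{-1} A^T W y.
    With W the inverse noise covariance, the Gauss-Markov theorem makes
    this the minimum variance linear unbiased estimator (MVUE)."""
    W = np.diag(weights)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

# Heteroscedastic linear model y = A theta + noise, noise_i ~ N(0, s_i^2).
rng = np.random.default_rng(4)
theta_true = np.array([1.0, -2.0])
A = rng.normal(size=(500, 2))
noise_std = rng.uniform(0.1, 2.0, size=500)
y = A @ theta_true + noise_std * rng.normal(size=500)

theta_hat = wls_estimator(A, y, 1.0 / noise_std ** 2)
print(theta_hat)  # close to [1.0, -2.0]
```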
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded in terms of the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Consistency of Anchor-based Spectral Clustering [0.0]
Anchor-based techniques reduce the computational complexity of spectral clustering algorithms.
We show that it is amenable to rigorous analysis, as well as being effective in practice.
We find that it is competitive with the state-of-the-art LSC method of Chen and Cai.
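A minimal sketch of the anchor idea, not the LSC algorithm of Chen and Cai itself: similarities are computed only against a small anchor subsample (chosen deterministically here for reproducibility, where LSC uses k-means landmarks), so the affinity matrix is n x m instead of n x n, and a bipartition is read off the top singular vectors.

```python
import numpy as np

def anchor_spectral_bipartition(X, n_anchors=10, sigma=1.0):
    """Two-way spectral clustering against m anchors instead of all n
    points, so the affinity matrix is n x m rather than n x n."""
    anchors = X[:: len(X) // n_anchors][:n_anchors]
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=2)
    Z = np.exp(-d2 / (2 * sigma ** 2))             # point-anchor affinities
    Z = Z / Z.sum(axis=1, keepdims=True)           # row-normalize
    Z = Z / np.sqrt(Z.sum(axis=0, keepdims=True))  # D^{-1/2} column scaling
    U, _, _ = np.linalg.svd(Z, full_matrices=False)
    # Assign each point to whichever of the top-2 singular directions
    # dominates its embedding row.
    return np.argmax(np.abs(U[:, :2]), axis=1)

# Two well-separated Gaussian blobs of 100 points each.
rng = np.random.default_rng(6)
X = np.vstack([rng.normal([-5.0, 0.0], 0.5, size=(100, 2)),
               rng.normal([5.0, 0.0], 0.5, size=(100, 2))])

labels = anchor_spectral_bipartition(X)
true = np.repeat([0, 1], 100)
accuracy = max((labels == true).mean(), (labels != true).mean())
print(accuracy)
```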
arXiv Detail & Related papers (2020-06-24T18:34:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.