Geometry-Aware Normalizing Wasserstein Flows for Optimal Causal
Inference
- URL: http://arxiv.org/abs/2311.18826v4
- Date: Thu, 1 Feb 2024 18:59:44 GMT
- Title: Geometry-Aware Normalizing Wasserstein Flows for Optimal Causal
Inference
- Authors: Kaiwen Hou
- Abstract summary: This paper presents a groundbreaking approach to causal inference by integrating continuous normalizing flows with parametric submodels.
We leverage optimal transport and Wasserstein gradient flows to develop causal inference methodologies with minimal variance in finite-sample settings.
Preliminary experiments showcase our method's superiority, yielding lower mean-squared errors compared to standard flows.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a groundbreaking approach to causal inference by
integrating continuous normalizing flows (CNFs) with parametric submodels,
enhancing their geometric sensitivity and improving upon traditional Targeted
Maximum Likelihood Estimation (TMLE). Our method employs CNFs to refine TMLE,
optimizing the Cramér-Rao bound and transitioning from a predefined
distribution $p_0$ to a data-driven distribution $p_1$. We innovate further by
embedding Wasserstein gradient flows within Fokker-Planck equations, thus
imposing geometric structures that boost the robustness of CNFs, particularly
in optimal transport theory.
Our approach addresses the disparity between sample and population
distributions, a critical factor in parameter estimation bias. We leverage
optimal transport and Wasserstein gradient flows to develop causal inference
methodologies with minimal variance in finite-sample settings, outperforming
traditional methods like TMLE and AIPW. This novel framework, centered on
Wasserstein gradient flows, minimizes the variance of efficient influence
functions under the distribution $p_t$. Preliminary experiments showcase our method's
superiority, yielding lower mean-squared errors compared to standard flows,
thereby demonstrating the potential of geometry-aware normalizing Wasserstein
flows in advancing statistical modeling and inference.
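
To make the Fokker-Planck connection concrete, here is a minimal particle sketch (our illustration, not the authors' implementation): the Wasserstein gradient flow of $\mathrm{KL}(p_t \| p_1)$ is a Fokker-Planck equation, and its standard particle discretization is an unadjusted Langevin update driving samples from $p_0$ toward $p_1$. The Gaussian target, step size, and iteration count are assumptions.

```python
import numpy as np

# Hedged sketch: the Wasserstein gradient flow of KL(p_t || p_1) is the
# Fokker-Planck equation; its particle discretization is an unadjusted
# Langevin step. A Gaussian p_1 = N(mu, 1) stands in for the paper's
# data-driven target distribution (assumption).
rng = np.random.default_rng(0)
mu = 3.0                                  # hypothetical target mean

def score_p1(x):
    """grad log p_1(x) for the stand-in Gaussian target."""
    return -(x - mu)

x = rng.normal(size=5000)                 # samples from p_0 = N(0, 1)
eta = 1e-2                                # step size (assumption)
for _ in range(1000):
    x = x + eta * score_p1(x) + np.sqrt(2 * eta) * rng.normal(size=x.shape)

print(x.mean(), x.var())                  # approaches mu and 1 as p_t -> p_1
```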
Related papers
- Kernel Approximation of Fisher-Rao Gradient Flows [52.154685604660465]
We present a rigorous investigation of Fisher-Rao and Wasserstein type gradient flows concerning their gradient structures, flow equations, and their kernel approximations.
Specifically, we focus on the Fisher-Rao geometry and its various kernel-based approximations, developing a principled theoretical framework.
arXiv Detail & Related papers (2024-10-27T22:52:08Z)
- Straightness of Rectified Flow: A Theoretical Insight into Wasserstein Convergence [54.580605276017096]
Diffusion models have emerged as a powerful tool for image generation and denoising.
Recently, Liu et al. designed a novel alternative generative model, Rectified Flow (RF).
RF aims to learn straight flow trajectories from noise to data using a sequence of convex optimization problems.
arXiv Detail & Related papers (2024-10-19T02:36:11Z)
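
The straight-trajectory idea in the entry above admits a compact sketch (a generic rectified-flow objective, with toy data, network size, and optimizer settings as assumptions; not the cited authors' code):

```python
import torch
import torch.nn as nn

# Hedged sketch of the rectified-flow regression: learn a velocity field
# v(x_t, t) whose target along the straight path x_t = (1-t) x0 + t x1
# is the constant displacement x1 - x0.
v = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(v.parameters(), lr=1e-3)

for _ in range(2000):
    x0 = torch.randn(256, 2)                 # noise samples
    x1 = torch.randn(256, 2) * 0.3 + 2.0     # stand-in "data" (assumption)
    t = torch.rand(256, 1)
    xt = (1 - t) * x0 + t * x1
    loss = ((v(torch.cat([xt, t], dim=1)) - (x1 - x0)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```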
- Flow-based Distributionally Robust Optimization [23.232731771848883]
We present a framework, called $\texttt{FlowDRO}$, for solving flow-based distributionally robust optimization (DRO) problems with Wasserstein uncertainty sets.
We aim to find the continuous worst-case distribution (also called the Least Favorable Distribution, LFD) and to sample from it.
We demonstrate its usage in adversarial learning, distributionally robust hypothesis testing, and a new mechanism for data-driven distribution perturbation differential privacy.
arXiv Detail & Related papers (2023-10-30T03:53:31Z)
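
One way to read the least-favorable-distribution search above is as penalized ascent on particles: increase the loss while paying a quadratic transport cost, a Lagrangian form of the Wasserstein-ball constraint. The loss, penalty weight, and step size below are illustrative assumptions; FlowDRO itself parameterizes the transport with a flow network.

```python
import numpy as np

# Hedged particle sketch of an LFD search: ascend E[loss] minus a
# transport penalty gamma * ||x - x0||^2. With loss(x) = ||x||^2 / 2 and
# gamma = 1, particles converge to x = 2 * x0 (pushed away from the
# origin, raising the loss at bounded transport cost).
rng = np.random.default_rng(1)
x0 = rng.normal(size=(2000, 2))   # reference samples from P (assumption)
x = x0.copy()
gamma, eta = 1.0, 1e-2

def loss_grad(x):
    """Gradient of a hypothetical test loss, loss(x) = ||x||^2 / 2."""
    return x

for _ in range(500):
    x = x + eta * (loss_grad(x) - 2 * gamma * (x - x0))
```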
- On an Edge-Preserving Variational Model for Optical Flow Estimation [0.0]
We propose an edge-preserving $L^1$ regularization approach to optical flow estimation.
The proposed method achieves the best average angular and end-point errors compared to some of the state-of-the-art Horn and Schunck based variational methods.
arXiv Detail & Related papers (2022-07-21T04:46:16Z)
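
For orientation, the generic edge-preserving $L^1$ (TV-$L^1$) energy that this line of work builds on couples an $L^1$ data term with total-variation regularization; the paper's exact functional may differ:

```latex
E(u, v) = \int_\Omega \lambda \,\bigl| I_x u + I_y v + I_t \bigr|
        + |\nabla u| + |\nabla v| \,\mathrm{d}\mathbf{x}
```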
- Mean-field Variational Inference via Wasserstein Gradient Flow [8.05603983337769]
Variational inference, such as the mean-field (MF) approximation, requires certain conjugacy structures for efficient computation.
We introduce a general computational framework to implement MF variational inference for Bayesian models, with or without latent variables, using the Wasserstein gradient flow (WGF).
We propose a new constraint-free function approximation method using neural networks to numerically realize our algorithm.
arXiv Detail & Related papers (2022-07-17T04:05:32Z)
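
Under the standard mean-field reading (our restatement, not necessarily the paper's notation), each marginal $q_j$ of the factorized posterior follows the Wasserstein gradient flow of the KL objective:

```latex
F(q) = \mathrm{KL}\Bigl( \textstyle\prod_j q_j \,\Big\|\, p \Bigr), \qquad
\partial_t q_j = \nabla_{x_j} \cdot \Bigl( q_j \, \nabla_{x_j} \tfrac{\delta F}{\delta q_j} \Bigr), \qquad
\tfrac{\delta F}{\delta q_j} = \log q_j - \mathbb{E}_{q_{-j}}[\log p(x)] + \mathrm{const}
```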
- Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model the stable estimate they iterate toward.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
arXiv Detail & Related papers (2022-04-18T17:53:44Z)
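
A minimal sketch of the deep-equilibrium idea, assuming a generic update map `f` (the forward fixed-point solve only; the cited work trains through the fixed point via implicit differentiation and uses a more elaborate solver):

```python
import torch

# Hedged sketch: a DEQ layer solves z* = f(z*, x) by fixed-point
# iteration instead of unrolling a finite-step RNN.
def deq_solve(f, x, z0, max_iter=100, tol=1e-4):
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if torch.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy contraction map standing in for a flow-update cell (assumption).
f = lambda z, x: 0.5 * torch.tanh(z) + x
z_star = deq_solve(f, x=torch.ones(8), z0=torch.zeros(8))
```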
- Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z)
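
For context on the quantity being estimated: the naive Monte Carlo baseline below counts flow samples landing in a closed region $R$; the cited estimator exploits the flow's diffeomorphism to improve on this in sample efficiency. A standard normal stands in for the flow (assumption).

```python
import numpy as np

# Naive Monte Carlo baseline: P(X in R) ~= fraction of samples in R,
# here with R = [-1, 1]^2 and a standard normal in place of a flow.
rng = np.random.default_rng(2)
samples = rng.normal(size=(100_000, 2))
in_R = np.all(np.abs(samples) < 1.0, axis=1)
print(in_R.mean())   # about 0.683**2 ~= 0.466 for the Gaussian stand-in
```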
- Continuous Wasserstein-2 Barycenter Estimation without Minimax Optimization [94.18714844247766]
Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport.
We present a scalable algorithm to compute Wasserstein-2 barycenters given sample access to the input measures.
arXiv Detail & Related papers (2021-02-02T21:01:13Z)
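
The object being computed, for reference (the standard definition, independent of this paper's particular algorithm):

```latex
\nu^\star \in \arg\min_{\nu} \sum_{k=1}^{K} \lambda_k \, W_2^2(\nu, \mu_k),
\qquad \lambda_k \ge 0, \quad \sum_{k=1}^{K} \lambda_k = 1
```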
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, the Wasserstein gradient flow is a smoother and near-optimal numerical scheme for approximating real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
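
To connect this entry to the Fokker-Planck language of the main abstract, recall the standard identity (our restatement; the cited paper derives a second-order refinement): for $p(x) \propto e^{-E(x)}$, the Fokker-Planck equation is the Wasserstein gradient flow of the relative entropy:

```latex
\partial_t \rho_t
  = \nabla \cdot ( \rho_t \nabla E ) + \Delta \rho_t
  = \nabla \cdot \Bigl( \rho_t \, \nabla \tfrac{\delta}{\delta \rho} \mathrm{KL}(\rho_t \,\|\, p) \Bigr)
```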