Deriving Bounds and Inequality Constraints Using Logical Relations Among
Counterfactuals
- URL: http://arxiv.org/abs/2007.00628v1
- Date: Wed, 1 Jul 2020 17:25:44 GMT
- Title: Deriving Bounds and Inequality Constraints Using Logical Relations Among
Counterfactuals
- Authors: Noam Finkelstein, Ilya Shpitser
- Abstract summary: We develop a new method for obtaining bounds on causal parameters using rules of probability and restrictions on counterfactuals implied by causal models.
We show that this approach is powerful enough to recover known sharp bounds and tight inequality constraints, and to derive novel bounds and constraints.
- Score: 8.185725740857595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal parameters may not be point identified in the presence of unobserved
confounding. However, information about non-identified parameters, in the form
of bounds, may still be recovered from the observed data in some cases. We
develop a new general method for obtaining bounds on causal parameters using
rules of probability and restrictions on counterfactuals implied by causal
graphical models. We additionally provide inequality constraints on functionals
of the observed data law implied by such causal models. Our approach is
motivated by the observation that logical relations between identified and
non-identified counterfactual events often yield information about
non-identified events. We show that this approach is powerful enough to recover
known sharp bounds and tight inequality constraints, and to derive novel bounds
and constraints.
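As an illustrative sketch (not the paper's general procedure), the kind of logical relation the abstract describes can be seen in the classic no-assumptions bounds for a binary treatment A and binary outcome Y: the identified event {Y=1, A=1} logically implies the counterfactual event {Y(1)=1}, and {Y(1)=1} can otherwise occur only on the non-identified event {A=0}. The function name and input probabilities below are invented for illustration.

```python
def bounds_on_py1(p_y1_and_a1, p_a1):
    """Bounds on the counterfactual P(Y(1)=1) from the observed data law.

    p_y1_and_a1 = P(Y=1, A=1); p_a1 = P(A=1).
    """
    # Consistency (Y(1) = Y on {A=1}): {Y=1, A=1} implies {Y(1)=1},
    # so its probability is a lower bound.
    lower = p_y1_and_a1
    # {Y(1)=1} can additionally occur only on {A=0}, so adding
    # P(A=0) = 1 - P(A=1) gives an upper bound.
    upper = p_y1_and_a1 + (1.0 - p_a1)
    return lower, upper

# Hypothetical observed frequencies: lower 0.3, upper 0.7
# (up to floating-point rounding).
lo, hi = bounds_on_py1(p_y1_and_a1=0.3, p_a1=0.6)
```

The gap between the bounds equals P(A=0), the probability mass on which the counterfactual Y(1) is never observed.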
Related papers
- Algorithmic causal structure emerging through compression [53.52699766206808]
We explore the relationship between causality, symmetry, and compression.
We build on and generalize the known connection between learning and compression to a setting where causal models are not identifiable.
We define algorithmic causality as an alternative definition of causality when traditional assumptions for causal identifiability do not hold.
arXiv Detail & Related papers (2025-02-06T16:50:57Z) - Identifiability Guarantees for Causal Disentanglement from Purely Observational Data [10.482728002416348]
Causal disentanglement aims to learn about latent causal factors behind data.
Recent advances establish identifiability results assuming that interventions on (single) latent factors are available.
We provide a precise characterization of latent factors that can be identified in nonlinear causal models.
arXiv Detail & Related papers (2024-10-31T04:18:29Z) - New Rules for Causal Identification with Background Knowledge [59.733125324672656]
We propose two novel rules for incorporating background knowledge (BK), which offer a new perspective on the open problem.
We show that these rules are applicable in some typical causality tasks, such as determining the set of possible causal effects with observational data.
arXiv Detail & Related papers (2024-07-21T20:21:21Z) - Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z) - Transfer Learning with Partially Observable Offline Data via Causal Bounds [8.981637739384674]
In this paper, we investigate transfer learning in partially observable contextual bandits.
Agents operate with incomplete information and limited access to hidden confounders.
We propose an efficient method that discretizes the functional constraints of unknown distributions into linear constraints.
This method takes into account estimation errors and exhibits strong convergence properties, ensuring robust and reliable causal bounds.
arXiv Detail & Related papers (2023-08-07T13:24:50Z) - Learning nonparametric latent causal graphs with unknown interventions [18.6470340274888]
We establish conditions under which latent causal graphs are nonparametrically identifiable.
We do not assume the number of hidden variables is known, and we show that at most one unknown intervention per hidden variable is needed.
arXiv Detail & Related papers (2023-06-05T14:06:35Z) - Nonparametric Identifiability of Causal Representations from Unknown
Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - Identifying Weight-Variant Latent Causal Models [82.14087963690561]
We find that transitivity plays a key role in impeding the identifiability of latent causal representations.
Under some mild assumptions, we can show that the latent causal representations can be identified up to trivial permutation and scaling.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal representations and causal relationships among them.
arXiv Detail & Related papers (2022-08-30T11:12:59Z) - The Causal Marginal Polytope for Bounding Treatment Effects [9.196779204457059]
We propose a novel way to identify causal effects without constructing a global causal model.
Instead, we enforce compatibility between the marginals of a causal model and the data.
We call this collection of locally consistent marginals the causal marginal polytope.
arXiv Detail & Related papers (2022-02-28T15:08:22Z) - Effect Identification in Cluster Causal Diagrams [51.42809552422494]
We introduce a new type of graphical model called cluster causal diagrams (C-DAGs for short).
C-DAGs allow for the partial specification of relationships among variables based on limited prior knowledge.
We develop the foundations and machinery for valid causal inferences over C-DAGs.
arXiv Detail & Related papers (2022-02-22T21:27:31Z) - Partial Identifiability in Discrete Data With Measurement Error [16.421318211327314]
We show that it is preferable to present bounds under justifiable assumptions than to pursue exact identification under dubious ones.
We use linear programming techniques to produce sharp bounds for factual and counterfactual distributions under measurement error.
arXiv Detail & Related papers (2020-12-23T02:11:08Z)
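A hedged sketch of the linear-programming technique mentioned in the last entry above (the toy observed distribution and variable names are invented for illustration, and confounding rather than measurement error is used to keep the program small): sharp bounds on a counterfactual probability can be posed as a pair of LPs over the joint law of both potential outcomes and the treatment.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical observed data law P(Y=y, A=a) for binary A and Y.
p_obs = {(1, 1): 0.3, (0, 1): 0.3, (1, 0): 0.2, (0, 0): 0.2}

# Decision variables: joint law q(y0, y1, a) over the potential outcomes
# Y(0), Y(1) and the observed treatment A; index order (y0, y1, a).
idx = [(y0, y1, a) for y0 in (0, 1) for y1 in (0, 1) for a in (0, 1)]

# Consistency: under A=a the observed Y equals the potential outcome Y(a),
# so each observed cell P(Y=y, A=a) pins down a margin of q.
A_eq, b_eq = [], []
for (y, a), p in p_obs.items():
    A_eq.append([1.0 if (v[a] == y and v[2] == a) else 0.0 for v in idx])
    b_eq.append(p)

# Target functional: P(Y(1)=1) = sum of q over cells with y1 = 1.
c = np.array([1.0 if y1 == 1 else 0.0 for (_, y1, _) in idx])

lower = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs").fun
upper = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs").fun
# The LP recovers the closed-form no-assumptions bounds:
# P(Y=1, A=1) <= P(Y(1)=1) <= P(Y=1, A=1) + P(A=0).
```

The four margin constraints already force the q entries to sum to one, so no separate normalization row is needed; adding further model restrictions (e.g. from measurement error) would simply append rows to the same program.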
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.