Revisiting the General Identifiability Problem
- URL: http://arxiv.org/abs/2206.01081v1
- Date: Thu, 2 Jun 2022 15:07:25 GMT
- Title: Revisiting the General Identifiability Problem
- Authors: Yaroslav Kivva, Ehsan Mokhtarian, Jalal Etesami, Negar Kiyavash
- Abstract summary: We revisit the problem of general identifiability originally introduced in [Lee et al., 2019] for causal inference.
We show that, without such an assumption, the rules of do-calculus, and consequently the algorithm proposed in [Lee et al., 2019], are not sound.
- Score: 22.201414668050123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We revisit the problem of general identifiability originally introduced in
[Lee et al., 2019] for causal inference and note that it is necessary to add a
positivity assumption on the observational distribution to the original definition
of the problem. We show that, without such an assumption, the rules of
do-calculus, and consequently the algorithm proposed in [Lee et al., 2019], are
not sound. Moreover, adding the assumption causes the completeness proof in
[Lee et al., 2019] to fail. Under the positivity assumption, we present a new
algorithm that is provably both sound and complete. A notable property of this new
algorithm is that it establishes a connection between general identifiability
and classical identifiability by Pearl [1995] through decomposing the general
identifiability problem into a series of classical identifiability
sub-problems.
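For concreteness, the positivity assumption referenced above can be stated formally as follows (notation ours, an illustrative sketch rather than a quotation from the paper): the observational distribution $P$ over the set of observed variables $V$ is required to satisfy
$$P(V = v) > 0 \quad \text{for every value assignment } v \text{ of } V,$$
which ensures that the conditional probabilities manipulated by the rules of do-calculus are well defined.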
Related papers
- Explainable Evidential Clustering [0.0]
Real-world data often contain imperfections, characterized by uncertainty and imprecision, which are not well handled by traditional methods.
Evidential clustering, based on Dempster-Shafer theory, addresses these challenges.
We propose the Iterative Evidential Mistake Minimization (IEMM) algorithm, which provides interpretable and cautious decision tree explanations.
arXiv Detail & Related papers (2025-07-16T12:44:25Z)
- Asymptotically Optimal Linear Best Feasible Arm Identification with Fixed Budget [55.938644481736446]
We introduce a novel algorithm for best feasible arm identification that guarantees an exponential decay in the error probability.
We validate our algorithm through comprehensive empirical evaluations across various problem instances with different levels of complexity.
arXiv Detail & Related papers (2025-06-03T02:56:26Z)
- On the Complexity of Identification in Linear Structural Causal Models [3.44747819522562]
We give a new sound and complete algorithm for generic identification which runs in polynomial space.
The paper also presents evidence that identification is computationally hard in general.
arXiv Detail & Related papers (2024-07-17T13:11:26Z)
- Causal Effect Identification in a Sub-Population with Latent Variables [22.75558589075695]
The s-ID problem seeks to compute a causal effect in a specific sub-population from observational data pertaining to that same sub-population.
In this paper, we consider an extension of the s-ID problem that allows for the presence of latent variables.
We propose a sound algorithm for the s-ID problem with latent variables.
arXiv Detail & Related papers (2024-05-23T13:25:41Z)
- On Identifiability of Conditional Causal Effects [24.95216517499459]
We address the problem of identifiability of an arbitrary conditional causal effect given both the causal graph and a set of observational and/or interventional distributions of the form $Q[S] := P(S \mid do(V \setminus S))$.
We call this problem conditional generalized identifiability (c-gID for short) and prove the completeness of Pearl's $do$-calculus for the c-gID problem.
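As an illustrative reading of the notation above (our example, not taken from the paper): if $V = \{X, Y\}$ and $S = \{Y\}$, then $Q[S] = P(Y \mid do(X))$, i.e., the distribution of $Y$ after intervening on every variable outside $S$.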
arXiv Detail & Related papers (2023-06-19T14:38:06Z)
- Benign Underfitting of Stochastic Gradient Descent [72.38051710389732]
We study to what extent stochastic gradient descent (SGD) may be understood as a "conventional" learning rule that achieves generalization performance by obtaining a good fit to the training data.
We analyze the closely related with-replacement SGD, for which an analogous phenomenon does not occur, and prove that its population risk does in fact converge at the optimal rate.
arXiv Detail & Related papers (2022-02-27T13:25:01Z)
- Effect Identification in Cluster Causal Diagrams [51.42809552422494]
We introduce a new type of graphical model called cluster causal diagrams (C-DAGs for short).
C-DAGs allow for the partial specification of relationships among variables based on limited prior knowledge.
We develop the foundations and machinery for valid causal inferences over C-DAGs.
arXiv Detail & Related papers (2022-02-22T21:27:31Z)
- Out-of-domain Generalization from a Single Source: A Uncertainty Quantification Approach [17.334457450818473]
We study a worst-case scenario in generalization: Out-of-domain generalization from a single source.
The goal is to learn a robust model from a single source and expect it to generalize over many unknown distributions.
We propose uncertainty-guided domain generalization to tackle the limitations.
arXiv Detail & Related papers (2021-08-05T23:53:55Z)
- Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination [82.52105963476703]
A recurring theme in statistical learning, online learning, and beyond is that faster convergence rates are possible for problems with low noise.
First-order guarantees are relatively well understood in statistical and online learning.
We show that the logarithmic loss and an information-theoretic quantity called the triangular discrimination play a fundamental role in obtaining first-order guarantees.
arXiv Detail & Related papers (2021-07-05T19:20:34Z)
- Uncertainty-guided Model Generalization to Unseen Domains [15.813136035004867]
We study a worst-case scenario in generalization: Out-of-domain generalization from a single source.
The goal is to learn a robust model from a single source and expect it to generalize over many unknown distributions.
The key idea is to augment the source capacity in both input and label spaces, with the augmentation guided by uncertainty assessment.
arXiv Detail & Related papers (2021-03-12T21:13:21Z)
- Agnostic Proper Learning of Halfspaces under Gaussian Marginals [56.01192577666607]
We study the problem of agnostically learning halfspaces under the Gaussian distribution.
Our main result is the first proper learning algorithm for this problem.
arXiv Detail & Related papers (2021-02-10T18:40:44Z)
- Implicit Regularization in ReLU Networks with the Square Loss [56.70360094597169]
We show that it is impossible to characterize the implicit regularization with the square loss by any explicit function of the model parameters.
Our results suggest that a more general framework may be needed to understand implicit regularization for nonlinear predictors.
arXiv Detail & Related papers (2020-12-09T16:48:03Z)
- Approximation Schemes for ReLU Regression [80.33702497406632]
We consider the fundamental problem of ReLU regression.
The goal is to output the best-fitting ReLU with respect to the square loss, given draws from some unknown distribution.
arXiv Detail & Related papers (2020-05-26T16:26:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.