Toward Universal Laws of Outlier Propagation
- URL: http://arxiv.org/abs/2502.08593v2
- Date: Thu, 13 Feb 2025 18:24:40 GMT
- Title: Toward Universal Laws of Outlier Propagation
- Authors: Aram Ebtekar, Yuhao Wang, Dominik Janzing
- Abstract summary: We show that the randomness deficiency of the joint state decomposes into randomness deficiencies of each causal mechanism.
As an extension of Levin's law of randomness conservation, we show that weak outliers cannot cause strong ones when Independence of Mechanisms holds.
- Score: 12.474280839142395
- License:
- Abstract: We argue that Algorithmic Information Theory (AIT) admits a principled way to quantify outliers in terms of so-called randomness deficiency. For the probability distribution generated by a causal Bayesian network, we show that the randomness deficiency of the joint state decomposes into randomness deficiencies of each causal mechanism, subject to the Independence of Mechanisms Principle. Accordingly, anomalous joint observations can be quantitatively attributed to their root causes, i.e., the mechanisms that behaved anomalously. As an extension of Levin's law of randomness conservation, we show that weak outliers cannot cause strong ones when Independence of Mechanisms holds. We show how these information theoretic laws provide a better understanding of the behaviour of outliers defined with respect to existing scores.
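For orientation, here is a schematic rendering of the two claims above, using the standard AIT definition of randomness deficiency (a sketch only; the paper's exact conditioning and additive error terms may differ). For a computable distribution $P$ and an observation $x$,
$$ d(x \mid P) \;:=\; -\log P(x) \;-\; K(x \mid P), $$
where $K$ denotes prefix Kolmogorov complexity, so a larger $d(x \mid P)$ marks $x$ as a stronger outlier with respect to $P$. For a causal Bayesian network with mechanisms $P(X_j \mid \mathrm{PA}_j)$, the decomposition stated in the abstract then reads, up to logarithmic terms and under the Independence of Mechanisms Principle,
$$ d(x_1,\dots,x_n \mid P) \;\approx\; \sum_{j} d\big(x_j \mid P(X_j \mid \mathrm{pa}_j)\big), $$
so an anomalous joint observation must be attributable to at least one mechanism with a large deficiency. In the spirit of Levin's conservation law, a single mechanism can increase deficiency only by a bounded amount, which is the sense in which weak outliers cannot cause strong ones.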
Related papers
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z) - Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]
This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z) - Learning Causally Disentangled Representations via the Principle of Independent Causal Mechanisms [17.074858228123706]
We propose a framework for learning causally disentangled representations supervised by causally related observed labels.
We show that our framework induces highly disentangled causal factors, improves interventional robustness, and is compatible with counterfactual generation.
arXiv Detail & Related papers (2023-06-02T00:28:48Z) - An entropic uncertainty principle for mixed states [0.0]
We provide a family of generalizations of the entropic uncertainty principle.
Results can be used to certify entanglement between trusted parties, or to bound the entanglement of a system with an untrusted environment.
arXiv Detail & Related papers (2023-03-20T18:31:53Z) - From no causal loop to absoluteness of cause: discarding the quantum NOT logic [0.0]
The AC principle forbids the 'time order' of two spacelike separated events/processes from being a potential cause of another event in their common future.
A strong form of violation enables instantaneous signaling, whereas a weak form of violation prevents the theory from being locally tomographic.
On the other hand, the impossibility of an intermediate violation suffices to discard the universal quantum NOT logic.
arXiv Detail & Related papers (2021-09-21T04:24:25Z) - Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods which formalize the goal of recovering latent variables from observed data and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z) - Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization [77.24152933825238]
We show that for linear classification tasks we need stronger restrictions on the distribution shifts; otherwise, OOD generalization is impossible.
We prove that a form of the information bottleneck constraint, along with invariance, helps address key failures when invariant features capture all the information about the label, and also retains the existing success when they do not.
arXiv Detail & Related papers (2021-06-11T20:42:27Z) - Independent mechanism analysis, a new concept? [3.2548794659022393]
Identifiability can be recovered in settings where additional, typically observed variables are included in the generative process.
We provide theoretical and empirical evidence that our approach circumvents a number of nonidentifiability issues arising in nonlinear blind source separation.
arXiv Detail & Related papers (2021-06-09T16:45:00Z) - Localisation in quasiperiodic chains: a theory based on convergence of local propagators [68.8204255655161]
We present a theory of localisation in quasiperiodic chains with nearest-neighbour hoppings, based on the convergence of local propagators.
These propagators can be expressed as continued fractions; analysing their convergence determines localisation or its absence, yielding in turn the critical points and mobility edges.
Results are exemplified by analysing the theory for three quasiperiodic models covering a range of behaviour.
arXiv Detail & Related papers (2021-02-18T16:19:52Z) - A Weaker Faithfulness Assumption based on Triple Interactions [89.59955143854556]
We propose a weaker assumption that we call $2$-adjacency faithfulness.
We propose a sound orientation rule for causal discovery that applies under weaker assumptions.
arXiv Detail & Related papers (2020-10-27T13:04:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.