Algorithmic causal structure emerging through compression
- URL: http://arxiv.org/abs/2502.04210v2
- Date: Tue, 18 Feb 2025 09:45:16 GMT
- Title: Algorithmic causal structure emerging through compression
- Authors: Liang Wendong, Simon Buchholz, Bernhard Schölkopf
- Abstract summary: We explore the relationship between causality, symmetry, and compression.
We build on and generalize the known connection between learning and compression to a setting where causal models are not identifiable.
We define algorithmic causality as an alternative definition of causality when traditional assumptions for causal identifiability do not hold.
- Abstract: We explore the relationship between causality, symmetry, and compression. We build on and generalize the known connection between learning and compression to a setting where causal models are not identifiable. We propose a framework where causality emerges as a consequence of compressing data across multiple environments. We define algorithmic causality as an alternative definition of causality when traditional assumptions for causal identifiability do not hold. We demonstrate how algorithmic causal and symmetric structures can emerge from minimizing upper bounds on Kolmogorov complexity, without knowledge of intervention targets. We hypothesize that these insights may also provide a novel perspective on the emergence of causality in machine learning models, such as large language models, where causal relationships may not be explicitly identifiable.
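The abstract's central idea, that compressing data across environments can select a causal factorization without intervention targets, can be illustrated with a minimal two-variable sketch. This is not the paper's construction: the binary toy mechanism, the variable names, and the two-part codelength score below are illustrative assumptions, standing in for minimizing an upper bound on Kolmogorov complexity.

```python
import math
import random
from collections import Counter

def codelength_bits(samples):
    """Total Shannon codelength (in bits) of samples under their empirical distribution."""
    n = len(samples)
    return -sum(c * math.log2(c / n) for c in Counter(samples).values())

def direction_score(envs, cause):
    """Codelength of binary (x, y) pairs under a candidate causal direction.

    The cause's marginal is encoded separately per environment (it may shift),
    while the mechanism P(effect | cause) is encoded once from the pooled data:
    an invariant mechanism makes this shared part cheap.
    cause=0 scores X -> Y; cause=1 scores Y -> X.
    """
    effect = 1 - cause
    bits = sum(codelength_bits([s[cause] for s in env]) for env in envs)
    pooled = [s for env in envs for s in env]
    for v in (0, 1):  # shared conditional: one code per cause value, pooled over envs
        group = [s[effect] for s in pooled if s[cause] == v]
        if group:
            bits += codelength_bits(group)
    return bits

# Toy data: Y = X XOR 10% noise; P(X) shifts across environments, the mechanism does not.
random.seed(0)
def draw(p_x, n=2000):
    data = []
    for _ in range(n):
        x = int(random.random() < p_x)
        y = x ^ int(random.random() < 0.1)
        data.append((x, y))
    return data

envs = [draw(0.2), draw(0.8)]
# The true direction X -> Y yields the shorter total description.
assert direction_score(envs, cause=0) < direction_score(envs, cause=1)
```

Here the anticausal factorization pays for higher-entropy per-environment marginals of Y, so the causal direction compresses better even though no intervention targets are known.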
Related papers
- Emergence and Causality in Complex Systems: A Survey on Causal Emergence and Related Quantitative Studies
Causal emergence theory employs measures of causality to quantify emergence.
Two key problems are addressed: quantifying causal emergence and identifying it in data.
We highlight that the architectures used to identify causal emergence are shared by causal representation learning, causal model abstraction, and world model-based reinforcement learning.
arXiv Detail & Related papers (2023-12-28T04:20:46Z)
- Invariant Causal Set Covering Machines
Rule-based models, such as decision trees, appeal to practitioners due to their interpretable nature.
However, the learning algorithms that produce such models are often vulnerable to spurious associations and thus, they are not guaranteed to extract causally-relevant insights.
We propose Invariant Causal Set Covering Machines, an extension of the classical Set Covering Machine algorithm for conjunctions/disjunctions of binary-valued rules that provably avoids spurious associations.
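The invariance idea described above can be caricatured in a short sketch. This is not the authors' algorithm: the sign-stability filter and the greedy covering step below are simplified, illustrative assumptions about how invariance across environments can screen out spurious binary rules before building a conjunction.

```python
def stable_features(envs, n_features):
    """Keep feature indices whose agreement with the label points the same way
    in every environment; a sign flip across environments marks the
    association as spurious."""
    kept = []
    for j in range(n_features):
        signs = set()
        for X, y in envs:
            agree = sum(int(row[j] == label) for row, label in zip(X, y))
            signs.add(1 if 2 * agree >= len(y) else -1)
        if len(signs) == 1:
            kept.append(j)
    return kept

def greedy_conjunction(X, y, candidates):
    """Set-covering step: greedily pick features whose value-0 rows eliminate
    the most remaining negatives; the conjunction predicts 1 iff every chosen
    feature equals 1."""
    chosen, negatives = [], [i for i, label in enumerate(y) if label == 0]
    while negatives:
        best = max((j for j in candidates if j not in chosen),
                   key=lambda j: sum(1 for i in negatives if X[i][j] == 0),
                   default=None)
        if best is None or all(X[i][best] == 1 for i in negatives):
            break  # no remaining feature eliminates any negative
        chosen.append(best)
        negatives = [i for i in negatives if X[i][best] == 1]
    return chosen

# Feature 0 is causal; feature 1 tracks the label in env 1 but flips in env 2.
env1 = ([(0, 0), (1, 1), (0, 0), (1, 1)], [0, 1, 0, 1])
env2 = ([(0, 1), (1, 0), (0, 1), (1, 0)], [0, 1, 0, 1])
candidates = stable_features([env1, env2], n_features=2)
X = env1[0] + env2[0]
y = env1[1] + env2[1]
assert candidates == [0]                        # spurious feature 1 is filtered out
assert greedy_conjunction(X, y, candidates) == [0]
```

Only the environment-stable feature survives the filter, so the learned conjunction cannot latch onto the spurious association.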
arXiv Detail & Related papers (2023-06-07T20:52:01Z)
- Effect Identification in Cluster Causal Diagrams
We introduce a new type of graphical model called cluster causal diagrams (C-DAGs for short).
C-DAGs allow for the partial specification of relationships among variables based on limited prior knowledge.
We develop the foundations and machinery for valid causal inferences over C-DAGs.
arXiv Detail & Related papers (2022-02-22T21:27:31Z)
- A general framework for cyclic and fine-tuned causal models and their compatibility with space-time
Causal modelling is a tool for generating causal explanations of observed correlations.
Existing frameworks for quantum causality tend to focus on acyclic causal structures that are not fine-tuned.
Cyclic causal models can be used to model physical processes involving feedback.
Cyclic causal models may also be relevant in exotic solutions of general relativity.
arXiv Detail & Related papers (2021-09-24T18:00:08Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference
An object called a structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Causal Discovery in Knowledge Graphs by Exploiting Asymmetric Properties of Non-Gaussian Distributions
We define a hybrid approach that allows us to discover cause-effect relationships in Knowledge Graphs.
The approach estimates the instantaneous causal structure of non-experimental data using a non-Gaussian model.
We use two different pre-existing algorithms, one for the causal discovery and the other for decomposing the Knowledge Graph.
arXiv Detail & Related papers (2021-06-02T09:33:05Z)
- Bayesian Model Averaging for Data Driven Decision Making when Causality is Partially Known
We use ensemble methods such as Bayesian Model Averaging (BMA) to infer a set of causal graphs.
We support decisions by explicitly computing the expected value and risk of potential interventions.
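In its simplest form, the computation this summary describes reduces to weighting each candidate graph's intervention payoff by that graph's posterior probability. This is a hedged sketch, not the paper's method: the posterior weights and payoff values below are made-up inputs, and variance across graphs stands in for whatever risk measure the authors use.

```python
def bma_expectation_and_risk(posterior, values):
    """Bayesian Model Averaging over candidate causal graphs: the expected
    payoff of an intervention, plus the variance across graphs as a simple
    risk measure."""
    mean = sum(w * v for w, v in zip(posterior, values))
    risk = sum(w * (v - mean) ** 2 for w, v in zip(posterior, values))
    return mean, risk

# Two candidate graphs: the intervention pays 10 if G1 is the true graph, -2 if G2 is.
mean, risk = bma_expectation_and_risk([0.7, 0.3], [10.0, -2.0])
# mean = 6.4, risk = 30.24
```

An intervention with a high expected value but a large spread across plausible graphs can then be passed over in favor of a safer one, which is the point of making the risk explicit.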
arXiv Detail & Related papers (2021-05-12T01:55:45Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
- A Critical View of the Structural Causal Model
We show that one can identify the cause and the effect without considering their interaction at all.
We propose a new adversarial training method that mimics the disentangled structure of the causal model.
Our multidimensional method outperforms the literature methods on both synthetic and real world datasets.
arXiv Detail & Related papers (2020-02-23T22:52:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.