Neural Information Squeezer for Causal Emergence
- URL: http://arxiv.org/abs/2201.10154v1
- Date: Tue, 25 Jan 2022 07:55:06 GMT
- Title: Neural Information Squeezer for Causal Emergence
- Authors: Jiang Zhang
- Abstract summary: This paper proposes a general machine learning framework called Neural Information Squeezer to automatically extract the effective coarse-graining strategy and the macro-state dynamics.
We show how our framework can extract the dynamics on different levels and identify causal emergence from the data on several example systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The classic studies of causal emergence have revealed that in some Markovian
dynamical systems, far stronger causal connections can be found in the
higher-level descriptions than in the lower-level descriptions of the same
system if the system states are coarse-grained in an appropriate way. However,
identifying this emergent causality from data remains an unsolved problem
because the correct coarse-graining strategy is not easy to find.
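As background, the notion of stronger macro-level causation can be made concrete with effective information (EI), the measure used in the classic causal-emergence studies (Hoel et al.) that this paper builds on. The sketch below is illustrative, not the paper's code: it computes EI for a toy micro transition matrix and for its coarse-grained macro version, and the macro EI comes out higher.

```python
import numpy as np

# Illustrative sketch (toy matrices, not from the paper): effective
# information EI = I(X_{t+1}; X_t) of a Markov chain when X_t is
# intervened to be uniform, i.e. the mean KL divergence between each
# row of the transition matrix and the average row.

def effective_information(tpm):
    """EI in bits for a row-stochastic transition matrix."""
    tpm = np.asarray(tpm, dtype=float)
    avg = tpm.mean(axis=0)  # effect distribution under a uniform intervention
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(tpm > 0, tpm * np.log2(tpm / avg), 0.0)
    return terms.sum() / len(tpm)

# Micro: states {0, 1, 2} scramble uniformly among themselves; state 3 is fixed.
micro = [[1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [1/3, 1/3, 1/3, 0],
         [0,   0,   0,   1]]

# Macro: coarse-grain {0, 1, 2} -> A and {3} -> B; the dynamics become deterministic.
macro = [[1, 0],
         [0, 1]]

print(effective_information(micro))  # ~0.81 bits
print(effective_information(macro))  # 1.0 bit: EI rises at the macro level
```

The macro dynamics carry more effective information than the micro dynamics (1.0 bit vs. roughly 0.81 bits), which is exactly the signature of causal emergence; the hard part, addressed by this paper, is finding such a coarse-graining automatically from data.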
This paper proposes a general machine learning framework called Neural
Information Squeezer to automatically extract the effective coarse-graining
strategy and the macro-state dynamics, as well as identify causal emergence
directly from time series data. By decomposing a coarse-graining operation
into two processes, information conversion and information dropping, we can
not only exactly control the width of the information channel but also derive
some important properties analytically, including the exact expression of the
effective information of the macro-dynamics. We also show how our framework
can extract the dynamics on different levels and identify causal emergence
from the data on several example systems.
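The two-step decomposition of coarse-graining can be sketched as follows. This is a hypothetical minimal illustration, not the paper's implementation: a random orthogonal matrix stands in for the invertible neural network that performs information conversion, and the "channel width" q is simply the number of macro dimensions kept after dropping.

```python
import numpy as np

# Hypothetical sketch of the decomposition described above: an invertible
# "information conversion" step followed by an explicit "information
# dropping" step. All names and shapes here are assumptions for
# illustration only.

rng = np.random.default_rng(0)

def make_conversion(p):
    """Invertible information-conversion map (random orthogonal matrix)."""
    q_mat, _ = np.linalg.qr(rng.normal(size=(p, p)))
    return q_mat

def coarse_grain(x, conversion, q):
    """Convert micro-states invertibly, then keep only q macro dimensions."""
    return (x @ conversion)[:, :q]

p, q = 8, 2                                   # micro dimension, channel width
conversion = make_conversion(p)
micro_states = rng.normal(size=(100, p))      # a batch of micro-states
macro_states = coarse_grain(micro_states, conversion, q)

print(macro_states.shape)                     # (100, 2)
# The conversion step loses nothing (it is orthogonal, hence invertible);
# all information loss is concentrated in the explicit dropping step.
print(np.allclose(conversion @ conversion.T, np.eye(p)))  # True
```

Concentrating all information loss in the dropping step is what lets the channel width be controlled exactly: the conversion is lossless by construction, so the macro description discards precisely p - q dimensions, no more and no less.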
Related papers
- How more data can hurt: Instability and regularization in next-generation reservoir computing [0.0]
We show that a more extreme version of the phenomenon occurs in data-driven models of dynamical systems.
We find that, despite learning a better representation of the flow map with more training data, NGRC can adopt an ill-conditioned "integrator" and lose stability.
arXiv Detail & Related papers (2024-07-11T16:22:13Z)
- Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
arXiv Detail & Related papers (2024-02-04T05:50:38Z)
- VAC2: Visual Analysis of Combined Causality in Event Sequences [6.145427901944597]
We develop a combined causality visual analysis system to help users explore combined causes as well as an individual cause.
This interactive system supports multi-level causality exploration with diverse ordering strategies and a focus and context technique.
The usefulness and effectiveness of the system are further evaluated by conducting a pilot user study and two case studies on event sequence data.
arXiv Detail & Related papers (2022-06-11T04:53:23Z)
- Causal Discovery from Sparse Time-Series Data Using Echo State Network [0.0]
Causal discovery between collections of time-series data can help diagnose causes of symptoms and hopefully prevent faults before they occur.
We propose a new system comprised of two parts, the first part fills missing data with a Gaussian Process Regression, and the second part leverages an Echo State Network.
We report on their corresponding Matthews Correlation Coefficient (MCC) and Receiver Operating Characteristic (ROC) curves and show that the proposed system outperforms existing algorithms.
arXiv Detail & Related papers (2022-01-09T05:55:47Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Understanding and Diagnosing Vulnerability under Adversarial Attacks [62.661498155101654]
Deep Neural Networks (DNNs) are known to be vulnerable to adversarial attacks.
We propose a novel interpretability method, InterpretGAN, to generate explanations for features used for classification in latent variables.
We also design the first diagnostic method to quantify the vulnerability contributed by each layer.
arXiv Detail & Related papers (2020-07-17T01:56:28Z)
- Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z)
- Identifying Causal Structure in Dynamical Systems [6.451261098085498]
We propose a method that identifies the causal structure of control systems.
Experiments on a robot arm demonstrate reliable causal identification from real-world data.
arXiv Detail & Related papers (2020-06-06T16:17:07Z)
- Causal Discovery from Incomplete Data: A Deep Learning Approach [21.289342482087267]
Imputed Causal Learning (ICL) is proposed to perform iterative missing-data imputation and causal structure discovery.
We show that ICL can outperform state-of-the-art methods under different missing data mechanisms.
arXiv Detail & Related papers (2020-01-15T14:28:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.