On Geometry of Information Flow for Causal Inference
- URL: http://arxiv.org/abs/2002.02078v2
- Date: Mon, 30 Mar 2020 16:53:45 GMT
- Title: On Geometry of Information Flow for Causal Inference
- Authors: Sudam Surasinghe and Erik M. Bollt
- Abstract summary: This paper takes the perspective of information flow, which includes the Nobel prize-winning work on Granger causality.
Our main contribution is to develop analysis tools that allow a geometric interpretation of information flow as causal inference indicated by positive transfer entropy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal inference is perhaps one of the most fundamental concepts in science,
originating in the works of the ancient philosophers and continuing through today,
woven strongly into current work by statisticians, machine learning experts, and
scientists from many other fields. This paper takes the perspective of information
flow, which includes the Nobel prize-winning work on Granger causality and the
recently highly popular transfer entropy, both probabilistic in nature. Our main
contribution will be to develop analysis tools that allow a geometric interpretation
of information flow as causal inference indicated by positive transfer entropy. We
will describe the effective dimensionality of an underlying manifold as projected
into the outcome space that summarizes information flow. Contrasting the
probabilistic and geometric perspectives, we will introduce a new measure of causal
inference based on the fractal correlation dimension, conditionally applied to
competing explanations of future forecasts, which we write $GeoC_{y\rightarrow x}$.
This avoids some of the boundedness issues that we show exist for the transfer
entropy, $T_{y\rightarrow x}$. We will illustrate our discussion with data from
synthetic models of successively more complex nature, then the Hénon map example,
and finally a real physiological example relating breathing and heart rate
function.
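As context for the quantities above: the transfer entropy $T_{y\rightarrow x}$ is the conditional mutual information $I(x_{t+1}; y_t \mid x_t)$. The following is a minimal sketch of how it indicates directional coupling, assuming a plain histogram estimator and a unidirectionally coupled Hénon pair of the kind common in the causality literature; the function names, coupling strength c, and bin count are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def henon_coupled(n, c=0.3, burn=1000):
    """Unidirectionally coupled Henon maps (y drives x), in delay form.
    Illustrative coupled-Henon system; c is the coupling strength."""
    x = np.zeros(n + burn)
    y = np.zeros(n + burn)
    x[:2] = 0.1
    y[:2] = 0.2
    for t in range(1, n + burn - 1):
        y[t + 1] = 1.4 - y[t] ** 2 + 0.3 * y[t - 1]
        x[t + 1] = 1.4 - (c * y[t] * x[t] + (1 - c) * x[t] ** 2) + 0.3 * x[t - 1]
    return x[burn:], y[burn:]  # discard the transient

def transfer_entropy(x, y, bins=8):
    """Histogram estimate of T_{y->x} = I(x_{t+1}; y_t | x_t), in bits."""
    sample = np.column_stack([x[1:], x[:-1], y[:-1]])
    p, _ = np.histogramdd(sample, bins=bins)
    p /= p.sum()                 # joint p(x_{t+1}, x_t, y_t)
    p_xt_yt = p.sum(axis=0)      # p(x_t, y_t)
    p_x1_xt = p.sum(axis=2)      # p(x_{t+1}, x_t)
    p_xt = p.sum(axis=(0, 2))    # p(x_t)
    te = 0.0
    for i, j, k in zip(*np.nonzero(p)):
        te += p[i, j, k] * np.log2(
            p[i, j, k] * p_xt[j] / (p_x1_xt[i, j] * p_xt_yt[j, k]))
    return te

x, y = henon_coupled(20000)
print("T_{y->x} ~", transfer_entropy(x, y))  # coupled direction: larger
print("T_{x->y} ~", transfer_entropy(y, x))  # uncoupled direction: near 0
```

With the coupling directed $y \rightarrow x$, the first estimate should clearly exceed the second; histogram estimators are biased, so the values are indicative rather than exact.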
Keywords: Causal Inference; Transfer Entropy; Differential Entropy;
Correlation Dimension; Pinsker's Inequality; Frobenius-Perron operator.
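On the geometric side, the Correlation Dimension keyword refers to the Grassberger-Procaccia construction that $GeoC_{y\rightarrow x}$ builds on: the slope of $\log C(r)$ versus $\log r$, where $C(r)$ is the fraction of point pairs closer than $r$. The sketch below shows only this plain correlation-dimension ingredient, not the paper's conditional construction; the scaling range from r_min to r_max is a hand-tuned assumption.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(points, r_min=1e-3, r_max=1e-1, n_r=20):
    """Grassberger-Procaccia estimate: the correlation sum C(r) is the
    fraction of point pairs at distance < r; the dimension is the slope
    of log C(r) versus log r over a (hand-chosen) scaling range."""
    d = pdist(points)                      # all pairwise distances
    rs = np.logspace(np.log10(r_min), np.log10(r_max), n_r)
    C = np.array([np.mean(d < r) for r in rs])
    good = C > 0                           # avoid log(0) at very small r
    slope, _ = np.polyfit(np.log(rs[good]), np.log(C[good]), 1)
    return slope

# Sanity check on the Henon attractor (x' = 1 - 1.4 x^2 + y, y' = 0.3 x)
x, y, pts = 0.0, 0.0, []
for n in range(5000):
    x, y = 1.0 - 1.4 * x * x + y, 0.3 * x
    if n >= 1000:                          # discard the transient
        pts.append((x, y))
print("D2 ~", correlation_dimension(np.array(pts)))
```

The Hénon attractor's correlation dimension is known to be roughly 1.2, which gives a quick sanity check on the estimator before applying it to forecast residuals as the paper does.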
Related papers
- Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
arXiv Detail & Related papers (2024-03-02T00:56:05Z)
- Fundamental Properties of Causal Entropy and Information Gain
Recent developments enable the quantification of causal control given a structural causal model (SCM).
Measures, named causal entropy and causal information gain, aim to address limitations in existing information theoretical approaches for machine learning tasks where causality plays a crucial role.
arXiv Detail & Related papers (2024-02-02T11:55:57Z)
- Optimal Inflationary Potentials
Inflation is a favoured theory for the early Universe.
It is highly under-determined with a large number of candidate implementations.
We use a new method in symbolic regression to generate all possible simple scalar field potentials.
arXiv Detail & Related papers (2023-10-25T17:20:19Z)
- On the Joint Interaction of Models, Data, and Features
We introduce a new tool, the interaction tensor, for empirically analyzing the interaction between data and model through features.
Based on these observations, we propose a conceptual framework for feature learning.
Under this framework, the expected accuracy for a single hypothesis and agreement for a pair of hypotheses can both be derived in closed-form.
arXiv Detail & Related papers (2023-06-07T21:35:26Z)
- Uncertainty relations in terms of generalized entropies derived from information diagrams
Inequalities between entropies and the index of coincidence form a long-standing direction of research in classical information theory.
This paper is devoted to entropic uncertainty relations derived from information diagrams.
arXiv Detail & Related papers (2023-05-29T10:41:28Z)
- Causal Expectation-Maximisation
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
- Aspects of quantum information in finite density field theory
We study different aspects of quantum field theory at finite density using methods from quantum information theory.
For simplicity we focus on massive Dirac fermions with nonzero chemical potential, and work in $1+1$ space-time dimensions.
arXiv Detail & Related papers (2020-11-02T19:00:26Z)
- Hyperbolic Graph Embedding with Enhanced Semi-Implicit Variational Inference
We build on semi-implicit graph variational auto-encoders to capture higher-order statistics in a low-dimensional graph latent representation.
We incorporate hyperbolic geometry in the latent space through a Poincaré embedding to efficiently represent graphs exhibiting hierarchical structure.
arXiv Detail & Related papers (2020-10-31T05:48:34Z)
- Causal Discovery in Physical Systems from Videos
Causal discovery is at the core of human cognition.
We consider the task of causal discovery from videos in an end-to-end fashion without supervision on the ground-truth graph structure.
arXiv Detail & Related papers (2020-07-01T17:29:57Z)
- Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory
Temporal-difference and Q-learning play a key role in deep reinforcement learning, where they are empowered by expressive nonlinear function approximators such as neural networks.
In particular, temporal-difference learning converges when the function approximator is linear in a feature representation, which is fixed throughout learning, and possibly diverges otherwise.
arXiv Detail & Related papers (2020-06-08T17:25:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.