Cause-Effect Preservation and Classification using Neurochaos Learning
- URL: http://arxiv.org/abs/2201.12181v1
- Date: Fri, 28 Jan 2022 15:26:35 GMT
- Title: Cause-Effect Preservation and Classification using Neurochaos Learning
- Authors: Harikrishnan N B, Aditi Kathpalia, Nithin Nagaraj
- Abstract summary: A recently proposed brain-inspired learning algorithm, namely Neurochaos Learning (NL), is used for the classification of cause-effect from simulated data.
The data instances used are generated from coupled AR processes, coupled 1D chaotic skew tent maps, coupled 1D chaotic logistic maps and a real-world prey-predator system.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discovering cause-effect from observational data is an important but
challenging problem in science and engineering. In this work, a recently
proposed brain-inspired learning algorithm, namely \emph{Neurochaos Learning}
(NL), is used for the classification of cause-effect from simulated data. The
data instances used are generated from coupled AR processes, coupled 1D chaotic
skew tent maps, coupled 1D chaotic logistic maps and a real-world prey-predator
system. The proposed method consistently outperforms a five-layer Deep Neural
Network architecture for coupling coefficient values ranging from $0.1$ to
$0.7$. Further, we investigate the preservation of causality in the feature
extracted space of NL using Granger Causality (GC) for coupled AR processes
and Compression-Complexity Causality (CCC) for coupled chaotic systems and the
real-world prey-predator dataset. This ability of NL to preserve causality
under a chaotic transformation and successfully classify cause and effect time
series (including a transfer learning scenario) is highly desirable in causal
machine learning applications.
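
This listing carries no code, so a minimal Python sketch of the kind of data generation the abstract describes may help. It simulates unidirectionally coupled AR(1) processes and coupled 1D chaotic skew tent maps, with X driving Y; the coupling form, parameter values, and function names here are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def coupled_ar(n=2000, eps=0.4, a=0.9, b=0.8, noise=0.03, seed=0):
    """Unidirectionally coupled AR(1) processes: X drives Y.
    `eps` plays the role of the coupling coefficient (assumed form)."""
    rng = np.random.default_rng(seed)
    x, y = np.zeros(n), np.zeros(n)
    for t in range(n - 1):
        x[t + 1] = a * x[t] + noise * rng.standard_normal()
        y[t + 1] = (1 - eps) * b * y[t] + eps * x[t] + noise * rng.standard_normal()
    return x, y

def coupled_skew_tent(n=2000, eps=0.4, b=0.65, x0=0.1, y0=0.2):
    """Unidirectionally coupled 1D chaotic skew tent maps: X drives Y."""
    def tent(v):
        # Skew tent map with skew parameter b
        return v / b if v < b else (1 - v) / (1 - b)
    x, y = np.empty(n), np.empty(n)
    x[0], y[0] = x0, y0
    for t in range(n - 1):
        x[t + 1] = tent(x[t])
        y[t + 1] = (1 - eps) * tent(y[t]) + eps * tent(x[t])
    return x, y
```

Cause-effect classification then amounts to labelling each (X, Y) pair by which series is the driver, sweeping `eps` over a range such as 0.1 to 0.7 as in the abstract.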
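The NL feature extraction itself (ChaosFEX, described in the authors' earlier Neurochaos Learning papers) passes each normalized input value through a chaotic GLS neuron. A rough sketch of the firing-time/firing-rate idea follows; the parameter values q, b, and eps are illustrative, not taken from this paper.

```python
import numpy as np

def chaosfex_features(stimulus, q=0.34, b=0.499, eps=0.01, max_iter=10_000):
    """Sketch of GLS-neuron features for one normalized stimulus in [0, 1].

    The neuron iterates a skew tent map from initial activity `q` until the
    trajectory enters an eps-neighbourhood of the stimulus. Firing time is
    the number of iterations taken; firing rate is the fraction of those
    iterations spent above the discrimination threshold `b`.
    """
    v, above = q, 0
    for t in range(1, max_iter + 1):
        v = v / b if v < b else (1 - v) / (1 - b)  # skew tent (GLS) map step
        above += v > b
        if abs(v - stimulus) < eps:                # neuron "fires"
            return t, above / t
    return max_iter, above / max_iter              # budget exhausted
```

A classifier is then trained on these chaotic features rather than on the raw time series; this is the "chaotic transformation" whose causality preservation the abstract examines.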
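Finally, the causality-preservation check on the AR data can be mirrored with statsmodels' standard Granger causality test. The lag order below is an arbitrary choice, and the CCC measure used for the chaotic systems is not sketched here.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

x, y = coupled_ar(n=2000, eps=0.4)
# statsmodels tests whether the SECOND column Granger-causes the FIRST.
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=5, verbose=False)
for lag, (tests, _) in results.items():
    print(f"lag {lag}: ssr F-test p-value = {tests['ssr_ftest'][1]:.4g}")
```

Small p-values for X driving Y, together with large p-values in the reverse direction, are the signature one would expect both on the raw series and on the NL-extracted features if causality is preserved.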
Related papers
- CUTS+: High-dimensional Causal Discovery from Irregular Time-series [13.84185941100574]
We propose CUTS+, which is built on the Granger-causality-based causal discovery method CUTS.
We show that CUTS+ largely improves the causal discovery performance on high-dimensional data with different types of irregular sampling.
arXiv Detail & Related papers (2023-05-10T04:20:36Z)
- CUTS: Neural Causal Discovery from Irregular Time-Series Data [27.06531262632836]
Causal discovery from time-series data has been a central task in machine learning.
We present CUTS, a neural Granger causal discovery algorithm to jointly impute unobserved data points and build causal graphs.
Our approach constitutes a promising step towards applying causal discovery to real applications with non-ideal observations.
arXiv Detail & Related papers (2023-02-15T04:16:34Z)
- Learning Latent Structural Causal Models [31.686049664958457]
In machine learning tasks, one often operates on low-level data like image pixels or high-dimensional vectors.
We present a tractable approximate inference method which performs joint inference over the causal variables, structure and parameters of the latent Structural Causal Model.
arXiv Detail & Related papers (2022-10-24T20:09:44Z)
- Federated Causal Discovery [74.37739054932733]
This paper develops a gradient-based learning framework named DAG-Shared Federated Causal Discovery (DS-FCD).
It can learn the causal graph without directly touching local data and naturally handle the data heterogeneity.
Extensive experiments on both synthetic and real-world datasets verify the efficacy of the proposed method.
arXiv Detail & Related papers (2021-12-07T08:04:12Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAIN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of improving the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z)