Learning Generalized Causal Structure in Time-series
- URL: http://arxiv.org/abs/2112.03085v1
- Date: Mon, 6 Dec 2021 14:48:13 GMT
- Title: Learning Generalized Causal Structure in Time-series
- Authors: Aditi Kathpalia, Keerti P. Charantimath, Nithin Nagaraj
- Abstract summary: We develop a machine learning pipeline based on a recently proposed 'neurochaos' feature learning technique (ChaosFEX feature extractor)
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The science of causality explains/determines the 'cause-effect'
relationships between the entities of a system and provides mathematical tools
for this purpose. In spite of all the success and widespread applications of
machine-learning (ML) algorithms, these algorithms are based on statistical
learning alone. Currently, they are nowhere close to 'human-like' intelligence,
as they fail to answer and learn from the important "Why?" questions. Hence,
researchers are attempting to integrate ML with the science of causality. Among
the many causal learning issues encountered by ML, one is that these algorithms
are oblivious to the temporal order or structure in data. In this work, we
develop a machine learning pipeline based on a recently proposed 'neurochaos'
feature learning technique (ChaosFEX feature extractor) that helps us learn the
generalized causal structure in given time-series data.
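The abstract describes the pipeline only at a high level. As a purely illustrative, non-authoritative sketch of what such a pipeline could look like, the snippet below maps each time-series sample to a chaotic-neuron 'firing time' feature (a simplified skew-tent/GLS map in the spirit of ChaosFEX) and then compares Granger-style prediction errors on these features as a stand-in causality score. All function names and parameter values (q, b, eps, lag) are hypothetical choices for this sketch; the paper's actual ChaosFEX features and causality measure may differ.

```python
# Minimal sketch, NOT the paper's actual pipeline: simplified neurochaos
# (skew-tent / GLS map) firing-time features plus a Granger-style
# predictability comparison as a stand-in causality score.
import numpy as np

def skew_tent(x, b=0.499):
    # One iteration of the skew-tent (GLS) map on [0, 1).
    return x / b if x < b else (1.0 - x) / (1.0 - b)

def firing_time(stimulus, q=0.34, b=0.499, eps=0.05, max_iter=10000):
    # Iterate the chaotic neuron from initial activity q until the trajectory
    # enters an eps-neighbourhood of the stimulus; the iteration count is the feature.
    x, n = q, 0
    while abs(x - stimulus) > eps and n < max_iter:
        x = skew_tent(x, b)
        n += 1
    return n

def chaos_features(series):
    # Normalise the series to [0, 1) and replace each sample by its firing time.
    series = np.asarray(series, dtype=float)
    s = (series - series.min()) / (series.max() - series.min() + 1e-12)
    return np.array([firing_time(v) for v in np.clip(s, 0.0, 0.999)], dtype=float)

def lagged_mse(target, predictors, lag=2):
    # One-step least-squares prediction error of `target` from lagged predictors.
    n = len(target)
    cols = [p[lag - k - 1: n - k - 1] for p in predictors for k in range(lag)]
    X = np.column_stack(cols + [np.ones(n - lag)])
    y = target[lag:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.mean((y - X @ coef) ** 2))

def causal_score(x, y, lag=2):
    # Positive score: the chaos features of x improve prediction of y's chaos
    # features beyond y's own past (a Granger-style proxy for "x causes y").
    fx, fy = chaos_features(x), chaos_features(y)
    return lagged_mse(fy, [fy], lag) - lagged_mse(fy, [fy, fx], lag)

# Toy usage: x drives y with a one-step delay, so the first score is
# expected to be the larger of the two.
rng = np.random.default_rng(0)
x = rng.standard_normal(400)
y = np.roll(x, 1) + 0.05 * rng.standard_normal(400)
print(causal_score(x, y), causal_score(y, x))
```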
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time"
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; and (iii) open up novel perspectives.
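To make point (ii) concrete in a generic and deliberately simplified way that is not specific to this paper's formulation: writing learning as the gradient-flow ODE dw/dt = -grad L(w) and integrating it with explicit Euler steps recovers ordinary gradient descent, with no external ODE solver involved. The sketch below assumes a plain least-squares loss; it illustrates the general idea only, not the paper's Hamiltonian equations.

```python
# Generic illustration (not this paper's actual equations): learning framed as
# integrating the gradient-flow ODE dw/dt = -grad L(w) with explicit Euler
# steps, which recovers plain gradient descent without an external ODE solver.
import numpy as np

def loss_grad(w, X, y):
    # Gradient of the mean squared error for a linear model y ~ X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def integrate_learning(w, X, y, eta=0.1, steps=300):
    # Explicit Euler integration of the weight dynamics.
    for _ in range(steps):
        w = w - eta * loss_grad(w, X, y)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.05 * rng.standard_normal(100)
print(integrate_learning(np.zeros(3), X, y))  # expected to approach w_true
```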
arXiv Detail & Related papers (2024-09-18T14:57:13Z) - Introducing CausalBench: A Flexible Benchmark Framework for Causal Analysis and Machine Learning [10.686245134005047]
Causal learning aims to go far beyond conventional machine learning, yet several major challenges remain.
We introduce CausalBench, a transparent, fair, and easy-to-use evaluation platform.
arXiv Detail & Related papers (2024-09-12T22:45:10Z) - Coupling Machine Learning with Ontology for Robotics Applications [0.0]
The lack of availability of prior knowledge in dynamic scenarios is without doubt a major barrier for scalable machine intelligence.
My view of the interaction between the two tiers of intelligence is based on the idea that when knowledge is not readily available at the knowledge-base tier, more knowledge can be extracted from the other tier.
arXiv Detail & Related papers (2024-06-08T23:38:03Z) - Nature-Inspired Local Propagation [68.63385571967267]
Natural learning processes rely on mechanisms where data representation and learning are intertwined in such a way as to respect locality.
We show that the algorithmic interpretation of the derived "laws of learning", which takes the structure of Hamiltonian equations, reduces to Backpropagation when the speed of propagation goes to infinity.
This opens the door to machine learning based on fully on-line information processing, in which Backpropagation is replaced by the proposed local algorithm.
arXiv Detail & Related papers (2024-02-04T21:43:37Z) - Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
arXiv Detail & Related papers (2024-02-04T05:50:38Z) - The Clock and the Pizza: Two Stories in Mechanistic Explanation of
Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex than expected.
Even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z) - Open problems in causal structure learning: A case study of COVID-19 in
the UK [4.159754744541361]
Causal machine learning (ML) algorithms recover graphical structures that tell us something about cause-and-effect relationships.
This paper investigates the challenges of causal ML with application to COVID-19 UK pandemic data.
arXiv Detail & Related papers (2023-05-05T22:04:00Z) - Advancing Reacting Flow Simulations with Data-Driven Models [50.9598607067535]
Key to the effective use of machine learning tools in multi-physics problems is coupling them to physical and computer models.
The present chapter reviews some of the open opportunities for the application of data-driven reduced-order modeling of combustion systems.
arXiv Detail & Related papers (2022-09-05T16:48:34Z) - Continual Learning with Deep Learning Methods in an Application-Oriented
Context [0.0]
An important research area of Artificial Intelligence (AI) deals with the automatic derivation of knowledge from data.
One type of machine learning algorithm that can be categorized as a "deep learning" model is the Deep Neural Network (DNN).
DNNs are affected by a problem that prevents new knowledge from being added to an existing knowledge base.
arXiv Detail & Related papers (2022-07-12T10:13:33Z) - Systematic Evaluation of Causal Discovery in Visual Model Based
Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z) - Causal Learner: A Toolbox for Causal Structure and Markov Blanket
Learning [16.41685271795219]
Causal Learner is a toolbox for learning causal structure and Markov blanket (MB) from data.
It integrates functions for generating simulated network data, state-of-the-art global and local causal structure learning algorithms, and functions for evaluating these algorithms.
arXiv Detail & Related papers (2021-03-11T09:10:55Z)
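As a rough, self-contained illustration of the core primitive behind the constraint-based structure learning algorithms collected in the Causal Learner toolbox above (and not the toolbox's actual API): simulate a linear chain X -> Y -> Z and verify that X and Z are correlated marginally but approximately uncorrelated once Y is partialled out, which is what lets such algorithms prune the X-Z edge from the graph skeleton.

```python
# Hedged, generic sketch of the conditional-independence primitive used by
# constraint-based structure learning (not the Causal Learner toolbox API):
# for a simulated chain X -> Y -> Z, X and Z are dependent marginally but
# approximately independent given Y, so the X-Z edge is pruned.
import numpy as np

def partial_corr(a, b, c):
    # Correlation of a and b after linearly regressing out c from both.
    def residual(v):
        coef = np.polyfit(c, v, 1)
        return v - np.polyval(coef, c)
    ra, rb = residual(a), residual(b)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(1)
n = 5000
x = rng.standard_normal(n)
y = 0.8 * x + rng.standard_normal(n)
z = 0.8 * y + rng.standard_normal(n)

print("corr(X, Z)      =", round(float(np.corrcoef(x, z)[0, 1]), 3))  # clearly non-zero
print("pcorr(X, Z | Y) =", round(float(partial_corr(x, z, y)), 3))    # near zero
```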
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.