Causal Learner: A Toolbox for Causal Structure and Markov Blanket
Learning
- URL: http://arxiv.org/abs/2103.06544v1
- Date: Thu, 11 Mar 2021 09:10:55 GMT
- Title: Causal Learner: A Toolbox for Causal Structure and Markov Blanket
Learning
- Authors: Zhaolong Ling, Kui Yu, Yiwen Zhang, Lin Liu, and Jiuyong Li
- Abstract summary: Causal Learner is a toolbox for learning causal structure and Markov blanket (MB) from data.
It integrates functions for generating simulated network data, a set of state-of-the-art global causal structure learning algorithms, a set of state-of-the-art local causal structure learning algorithms, and functions for evaluating algorithms.
- Score: 16.41685271795219
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal Learner is a toolbox for learning causal structure and Markov blanket
(MB) from data. It integrates functions for generating simulated Bayesian
network data, a set of state-of-the-art global causal structure learning
algorithms, a set of state-of-the-art local causal structure learning
algorithms, a set of state-of-the-art MB learning algorithms, and functions for
evaluating algorithms. The data generation part of Causal Learner is written in
R, and the rest of Causal Learner is written in MATLAB. Causal Learner aims to
provide researchers and practitioners with an open-source platform for causal
learning from data and for the development and evaluation of new causal
learning algorithms. The Causal Learner project is available at
http://bigdata.ahu.edu.cn/causal-learner.
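The Markov blanket (MB) of a node in a Bayesian network consists of its parents, its children, and its children's other parents (spouses). As a minimal illustration of the concept (in Python rather than the toolbox's MATLAB, and reading the MB off a known DAG rather than learning it from data as Causal Learner does; the example graph is the classic sprinkler network, not part of the toolbox):

```python
# Compute the Markov blanket of a node from a known DAG:
# parents + children + spouses (other parents of its children).
# Illustrative only -- Causal Learner's MB algorithms learn this
# set from data without access to the true graph.

def markov_blanket(dag, node):
    """dag: dict mapping each node to a list of its children."""
    parents = {p for p, kids in dag.items() if node in kids}
    children = set(dag.get(node, []))
    spouses = {p for p, kids in dag.items()
               for c in children if c in kids and p != node}
    return parents | children | spouses

# Classic sprinkler network:
# Cloudy -> Sprinkler, Cloudy -> Rain,
# Sprinkler -> WetGrass, Rain -> WetGrass.
dag = {
    "Cloudy": ["Sprinkler", "Rain"],
    "Sprinkler": ["WetGrass"],
    "Rain": ["WetGrass"],
    "WetGrass": [],
}

print(sorted(markov_blanket(dag, "Sprinkler")))
# -> ['Cloudy', 'Rain', 'WetGrass']
```

Note that Rain enters the blanket of Sprinkler only as a spouse (a co-parent of WetGrass), which is exactly the part of the MB that naive neighbor-based heuristics miss and that dedicated MB learning algorithms must recover.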
Related papers
- Nature-Inspired Local Propagation [68.63385571967267]
Natural learning processes rely on mechanisms where data representation and learning are intertwined in such a way as to respect locality.
We show that the algorithmic interpretation of the derived "laws of learning", which takes the structure of Hamiltonian equations, reduces to Backpropagation when the speed of propagation goes to infinity.
This opens the door to machine learning over fully on-line information, based on replacing Backpropagation with the proposed local algorithm.
arXiv Detail & Related papers (2024-02-04T21:43:37Z) - Causal-learn: Causal Discovery in Python [53.17423883919072]
Causal discovery aims at revealing causal relations from observational data.
causal-learn is an open-source Python library for causal discovery.
arXiv Detail & Related papers (2023-07-31T05:00:35Z) - Language models are weak learners [71.33837923104808]
We show that prompt-based large language models can operate effectively as weak learners.
We incorporate these models into a boosting approach, which can leverage the knowledge within the model to outperform traditional tree-based boosting.
Results illustrate the potential for prompt-based LLMs to function not just as few-shot learners themselves, but as components of larger machine learning pipelines.
arXiv Detail & Related papers (2023-06-25T02:39:19Z) - Open problems in causal structure learning: A case study of COVID-19 in
the UK [4.159754744541361]
Causal machine learning (ML) algorithms recover graphical structures that tell us something about cause-and-effect relationships.
This paper investigates the challenges of causal ML with application to COVID-19 UK pandemic data.
arXiv Detail & Related papers (2023-05-05T22:04:00Z) - Navigating causal deep learning [78.572170629379]
Causal deep learning (CDL) is a new and important research area in the larger field of machine learning.
This paper categorises methods in causal deep learning beyond Pearl's ladder of causation.
Our paradigm is a tool that helps researchers find benchmarks, compare methods, and, most importantly, identify research gaps.
arXiv Detail & Related papers (2022-12-01T23:44:23Z) - Impact Learning: A Learning Method from Features Impact and Competition [1.3569491184708429]
This paper introduces a new machine learning algorithm called impact learning.
Impact learning is a supervised learning algorithm that can be applied to both classification and regression problems.
It is trained on the impacts of the features, derived from the intrinsic rate of natural increase.
arXiv Detail & Related papers (2022-11-04T04:56:35Z) - Learning Bayesian Networks in the Presence of Structural Side
Information [22.734574764075226]
We study the problem of learning a Bayesian network (BN) of a set of variables when structural side information about the system is available.
We develop an algorithm that efficiently incorporates such knowledge into the learning process.
As a consequence of our work, we show that bounded-treewidth BNs can be learned efficiently.
arXiv Detail & Related papers (2021-12-20T22:14:19Z) - Learning Generalized Causal Structure in Time-series [0.0]
We develop a machine learning pipeline based on a recently proposed 'neurochaos' feature learning technique (ChaosFEX feature extractor).
arXiv Detail & Related papers (2021-12-06T14:48:13Z) - Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z) - An Approach to Evaluating Learning Algorithms for Decision Trees [3.7798600249187295]
Algorithms with low or unknown learning ability do not permit us to trust the software models they produce.
We propose a novel oracle-centered approach to evaluate (the learning ability of) learning algorithms for decision trees.
arXiv Detail & Related papers (2020-10-26T15:36:59Z) - dMelodies: A Music Dataset for Disentanglement Learning [70.90415511736089]
We present a new symbolic music dataset that will help researchers demonstrate the efficacy of their algorithms on diverse domains.
This will also provide a means for evaluating algorithms specifically designed for music.
The dataset is large enough (approx. 1.3 million data points) to train and test deep networks for disentanglement learning.
arXiv Detail & Related papers (2020-07-29T19:20:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.