Introducing CausalBench: A Flexible Benchmark Framework for Causal Analysis and Machine Learning
- URL: http://arxiv.org/abs/2409.08419v2
- Date: Tue, 24 Sep 2024 23:16:02 GMT
- Title: Introducing CausalBench: A Flexible Benchmark Framework for Causal Analysis and Machine Learning
- Authors: Ahmet Kapkiç, Pratanu Mandal, Shu Wan, Paras Sheth, Abhinav Gorantla, Yoonhyuk Choi, Huan Liu, K. Selçuk Candan
- Abstract summary: Causal learning aims to go far beyond conventional machine learning, yet several major challenges remain.
We introduce CausalBench, a transparent, fair, and easy-to-use evaluation platform.
- Score: 10.686245134005047
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While witnessing the exceptional success of machine learning (ML) technologies in many applications, users are starting to notice a critical shortcoming of ML: correlation is a poor substitute for causation. The conventional way to discover causal relationships is to use randomized controlled experiments (RCT); in many situations, however, these are impractical or sometimes unethical. Causal learning from observational data offers a promising alternative. While being relatively recent, causal learning aims to go far beyond conventional machine learning, yet several major challenges remain. Unfortunately, advances are hampered due to the lack of unified benchmark datasets, algorithms, metrics, and evaluation service interfaces for causal learning. In this paper, we introduce CausalBench, a transparent, fair, and easy-to-use evaluation platform, aiming to (a) enable the advancement of research in causal learning by facilitating scientific collaboration in novel algorithms, datasets, and metrics and (b) promote scientific objectivity, reproducibility, fairness, and awareness of bias in causal learning research. CausalBench provides services for benchmarking data, algorithms, models, and metrics, serving the needs of a broad range of scientific and engineering disciplines.
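The abstract's central point, that correlation is a poor substitute for causation, can be illustrated with a small self-contained simulation (a hypothetical sketch, not part of CausalBench): a confounder Z drives both X and Y, so they correlate strongly even though neither causes the other, and the association vanishes once Z is adjusted for.

```python
import random
import statistics

random.seed(0)
n = 20_000
# Confounder Z causes both X and Y; there is NO direct X -> Y or Y -> X link.
z = [random.gauss(0, 1) for _ in range(n)]
x = [2.0 * zi + random.gauss(0, 1) for zi in z]
y = [-1.5 * zi + random.gauss(0, 1) for zi in z]

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    return cov / (statistics.pstdev(a) * statistics.pstdev(b))

def residualize(a, c):
    """Remove the least-squares linear effect of c from a."""
    mc, ma = statistics.fmean(c), statistics.fmean(a)
    beta = (sum((ci - mc) * (ai - ma) for ci, ai in zip(c, a))
            / sum((ci - mc) ** 2 for ci in c))
    return [ai - ma - beta * (ci - mc) for ai, ci in zip(a, c)]

raw = corr(x, y)   # strong spurious correlation induced by Z
adj = corr(residualize(x, z), residualize(y, z))  # near zero after adjusting for Z
print(raw, adj)
```

Observational data alone shows the strong raw correlation; only by controlling for the confounder (which an RCT does by randomization) does the absence of a causal link become visible.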
Related papers
- Accurate Forgetting for Heterogeneous Federated Continual Learning [89.08735771893608]
We propose a new concept, accurate forgetting (AF), and develop a novel generative-replay method which selectively utilizes previous knowledge in federated networks.
We employ a probabilistic framework based on a normalizing flow model to quantify the credibility of previous knowledge.
arXiv Detail & Related papers (2025-02-20T02:35:17Z)
- The Return of Pseudosciences in Artificial Intelligence: Have Machine Learning and Deep Learning Forgotten Lessons from Statistics and History? [0.304585143845864]
We argue that the designers and final users of these ML methods have forgotten a fundamental lesson from statistics.
We argue that current efforts to make AI models more ethical by merely reducing biases in the training data are insufficient.
arXiv Detail & Related papers (2024-11-27T08:23:23Z)
- Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
arXiv Detail & Related papers (2024-02-04T05:50:38Z)
- Continual Learning with Pre-Trained Models: A Survey [61.97613090666247]
Continual Learning aims to overcome catastrophic forgetting of former knowledge when learning new tasks.
This paper presents a comprehensive survey of the latest advancements in PTM-based CL.
arXiv Detail & Related papers (2024-01-29T18:27:52Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, such as weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
- Evaluation Methods and Measures for Causal Learning Algorithms [33.07234268724662]
We focus on two fundamental causal-inference tasks and on causality-aware machine learning tasks.
The survey seeks to bring to the forefront the urgency of developing publicly available benchmarks and consensus-building standards for causal learning evaluation with observational data.
arXiv Detail & Related papers (2022-02-07T00:24:34Z)
- Learning Generalized Causal Structure in Time-series [0.0]
In this work, we develop a machine learning pipeline based on a recently proposed 'neurochaos' feature learning technique (ChaosFEX feature extractor).
arXiv Detail & Related papers (2021-12-06T14:48:13Z)
- A Reflection on Learning from Data: Epistemology Issues and Limitations [1.8047694351309205]
This paper reflects on some issues and some limitations of the knowledge discovered in data.
The paper sheds some light on the shortcomings of using generic mathematical theories to describe the process.
It further highlights the need for theories specialized in learning from data.
arXiv Detail & Related papers (2021-07-28T11:05:34Z)
- A critical look at the current train/test split in machine learning [6.475859946760842]
We take a closer look at the split protocol itself and point out its weaknesses and limitations.
In many real-world problems, assumption (ii) does not hold.
We propose a new adaptive active learning architecture (AAL) which involves an adaptation policy.
arXiv Detail & Related papers (2021-06-08T17:07:20Z)
- Constrained Learning with Non-Convex Losses [119.8736858597118]
Though learning has become a core technology of modern information processing, there is now ample evidence that it can lead to biased, unsafe, and prejudiced solutions.
arXiv Detail & Related papers (2021-03-08T23:10:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.