Comprehensive Review and Empirical Evaluation of Causal Discovery Algorithms for Numerical Data
- URL: http://arxiv.org/abs/2407.13054v1
- Date: Wed, 17 Jul 2024 23:47:05 GMT
- Title: Comprehensive Review and Empirical Evaluation of Causal Discovery Algorithms for Numerical Data
- Authors: Wenjin Niu, Zijun Gao, Liyan Song, Lingbo Li
- Abstract summary: Causal analysis has become an essential component in understanding the underlying causes of phenomena across various fields.
The existing literature on causal discovery algorithms is fragmented, with inconsistent methodologies and a lack of comprehensive evaluations.
This study addresses these gaps by conducting an exhaustive review and empirical evaluation of causal discovery methods for numerical data.
- Score: 3.9523536371670045
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causal analysis has become an essential component in understanding the underlying causes of phenomena across various fields. Despite its significance, the existing literature on causal discovery algorithms is fragmented, with inconsistent methodologies and a lack of comprehensive evaluations. This study addresses these gaps by conducting an exhaustive review and empirical evaluation of causal discovery methods for numerical data, aiming to provide a clearer and more structured understanding of the field. Our research began with a comprehensive literature review spanning over a decade, revealing that existing surveys fall short in covering the vast array of causal discovery advancements. We meticulously analyzed over 200 scholarly articles to identify 24 distinct algorithms. This extensive analysis led to the development of a novel taxonomy tailored to the complexities of causal discovery, categorizing methods into six main types. Addressing the lack of comprehensive evaluations, our study conducts an extensive empirical assessment of more than 20 causal discovery algorithms on synthetic and real-world datasets. We categorize synthetic datasets based on size, linearity, and noise distribution, employ 5 evaluation metrics, and summarize the top-3 algorithm recommendations for different data scenarios. The recommendations have been validated on 2 real-world datasets. Our results highlight the significant impact of dataset characteristics on algorithm performance. Moreover, a metadata extraction strategy was developed to assist users in algorithm selection on unknown datasets. The accuracy of metadata estimation exceeds 80%. Based on these insights, we offer professional and practical recommendations to help users choose the most suitable causal discovery methods for their specific dataset needs.
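The abstract does not enumerate the 5 evaluation metrics used. As an illustration of how causal discovery output is typically scored against a ground-truth graph, below is a minimal sketch of the Structural Hamming Distance (SHD), a metric commonly used in this literature; the function name and the DAG assumption (at most one edge direction per variable pair) are choices made here, not details confirmed by the paper.

```python
import numpy as np

def shd(true_adj, est_adj):
    """Structural Hamming Distance between two directed graphs given as
    0/1 adjacency matrices (assumed to be DAGs with zero diagonals).
    Edge additions and deletions cost 1 each; a reversed edge also
    costs 1, even though it flips two matrix cells."""
    t = np.asarray(true_adj, dtype=int)
    e = np.asarray(est_adj, dtype=int)
    diff = (t != e).astype(int)
    # A reversal produces mismatched cells at both (i, j) and (j, i);
    # the elementwise product pairs them up so each counts once.
    reversals = np.sum(diff * diff.T) // 2
    # Total mismatched cells, minus one per reversal pair, so that a
    # reversed edge contributes 1 rather than 2 to the distance.
    return int(diff.sum() - reversals)
```

For example, if the true graph is 0 -> 1 -> 2 and the estimate contains only the reversed edge 1 -> 0, the distance is 2: one reversal plus one missing edge.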
Related papers
- A Review of Global Sensitivity Analysis Methods and a comparative case study on Digit Classification [5.458813674116228]
Global sensitivity analysis (GSA) aims to detect influential input factors that lead a model to arrive at a certain decision.
We provide a comprehensive review and a comparison of global sensitivity analysis methods.
arXiv Detail & Related papers (2024-06-23T00:38:19Z) - Diverse Community Data for Benchmarking Data Privacy Algorithms [0.2999888908665658]
The Collaborative Research Cycle (CRC) is a National Institute of Standards and Technology (NIST) benchmarking program.
Deidentification algorithms are vulnerable to the same bias and privacy issues that impact other data analytics and machine learning applications.
This paper summarizes four CRC contributions on the relationship between diverse populations and challenges for equitable deidentification.
arXiv Detail & Related papers (2023-06-20T17:18:51Z) - A Survey on Causal Discovery Methods for I.I.D. and Time Series Data [4.57769506869942]
Causal Discovery (CD) algorithms can identify the cause-effect relationships among the variables of a system from related observational data.
We present an extensive discussion on the methods designed to perform causal discovery from both independent and identically distributed (I.I.D.) data and time series data.
arXiv Detail & Related papers (2023-03-27T09:21:41Z) - A Gold Standard Dataset for the Reviewer Assignment Problem [117.59690218507565]
"Similarity score" is a numerical estimate of the expertise of a reviewer in reviewing a paper.
Our dataset consists of 477 self-reported expertise scores provided by 58 researchers.
For the task of ordering two papers in terms of their relevance for a reviewer, the error rates range from 12%-30% in easy cases to 36%-43% in hard cases.
arXiv Detail & Related papers (2023-03-23T16:15:03Z) - TensorAnalyzer: Identification of Urban Patterns in Big Cities using
Non-Negative Tensor Factorization [8.881421521529198]
This paper presents a new approach to detecting the most relevant urban patterns from multiple data sources based on tensor decomposition.
We developed a generic framework named TensorAnalyzer, where the effectiveness and usefulness of the proposed methodology are tested.
arXiv Detail & Related papers (2022-10-06T01:04:02Z) - Detection and Evaluation of Clusters within Sequential Data [58.720142291102135]
Clustering algorithms for Block Markov Chains possess theoretical optimality guarantees.
In particular, our sequential data is derived from human DNA, written text, animal movement data and financial markets.
It is found that the Block Markov Chain model assumption can indeed produce meaningful insights in exploratory data analyses.
arXiv Detail & Related papers (2022-10-04T15:22:39Z) - Research Trends and Applications of Data Augmentation Algorithms [77.34726150561087]
We identify the main areas of application of data augmentation algorithms, the types of algorithms used, significant research trends, their progression over time and research gaps in data augmentation literature.
We expect readers to understand the potential of data augmentation, as well as identify future research directions and open questions within data augmentation research.
arXiv Detail & Related papers (2022-07-18T11:38:32Z) - Selecting the suitable resampling strategy for imbalanced data
classification regarding dataset properties [62.997667081978825]
In many application domains such as medicine, information retrieval, cybersecurity, social media, etc., datasets used for inducing classification models often have an unequal distribution of the instances of each class.
This situation, known as imbalanced data classification, causes low predictive performance for the minority class examples.
Oversampling and undersampling techniques are well-known strategies to deal with this problem by balancing the number of examples of each class.
arXiv Detail & Related papers (2021-12-15T18:56:39Z) - A review of systematic selection of clustering algorithms and their
evaluation [0.0]
This paper aims to identify a systematic selection logic for clustering algorithms and corresponding validation concepts.
The goal is to enable potential users to choose an algorithm that fits best to their needs and the properties of their underlying data clustering problem.
arXiv Detail & Related papers (2021-06-24T07:01:46Z) - Through the Data Management Lens: Experimental Analysis and Evaluation
of Fair Classification [75.49600684537117]
Data management research is showing an increasing presence and interest in topics related to data and algorithmic fairness.
We contribute a broad analysis of 13 fair classification approaches and additional variants, over their correctness, fairness, efficiency, scalability, and stability.
Our analysis highlights novel insights on the impact of different metrics and high-level approach characteristics on different aspects of performance.
arXiv Detail & Related papers (2021-01-18T22:55:40Z) - CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural
Summarization Systems [121.78477833009671]
We investigate the performance of different summarization models under a cross-dataset setting.
A comprehensive study of 11 representative summarization systems on 5 datasets from different domains reveals the effect of model architectures and generation ways.
arXiv Detail & Related papers (2020-10-11T02:19:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.