A Causal Research Pipeline and Tutorial for Psychologists and Social
Scientists
- URL: http://arxiv.org/abs/2206.05175v1
- Date: Fri, 10 Jun 2022 15:11:57 GMT
- Title: A Causal Research Pipeline and Tutorial for Psychologists and Social
Scientists
- Authors: Matthew J. Vowels
- Abstract summary: Causality is a fundamental part of the scientific endeavour to understand the world.
Unfortunately, causality is still taboo in much of psychology and social science.
Motivated by a growing number of recommendations for the importance of adopting causal approaches to research, we reformulate the typical approach to research in psychology to harmonize inevitably causal theories with the rest of the research pipeline.
- Score: 7.106986689736828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causality is a fundamental part of the scientific endeavour to understand the
world. Unfortunately, causality is still taboo in much of psychology and social
science. Motivated by a growing number of recommendations for the importance of
adopting causal approaches to research, we reformulate the typical approach to
research in psychology to harmonize inevitably causal theories with the rest of
the research pipeline. We present a new process which begins with the
incorporation of techniques from the confluence of causal discovery and machine
learning for the development, validation, and transparent formal specification
of theories. We then present methods for reducing the complexity of the fully
specified theoretical model into the fundamental submodel relevant to a given
target hypothesis. From here, we establish whether or not the quantity of
interest is estimable from the data, and if so, propose the use of
semi-parametric machine learning methods for the estimation of causal effects.
The overall goal is the presentation of a new research pipeline which can (a)
facilitate scientific inquiry compatible with the desire to test causal
theories, (b) encourage transparent representation of our theories as
unambiguous mathematical objects, (c) tie our statistical models to specific
attributes of the theory, thus reducing under-specification problems frequently
resulting from the theory-to-model gap, and (d) yield results and estimates
which are causally meaningful and reproducible. The process is demonstrated
through didactic examples with real-world data, and we conclude with a summary
and discussion of limitations.
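To make the pipeline concrete, the minimal Python sketch below walks through the three moves the abstract describes: formally specifying a theory as a DAG, reducing it to the submodel (adjustment set) relevant to one target hypothesis, and estimating the effect with a doubly robust, semi-parametric estimator. The toy theory (stress, social support, coping, wellbeing), the simulated data, and the scikit-learn nuisance models are illustrative assumptions, not the paper's worked example.
```python
import numpy as np
import pandas as pd
import networkx as nx
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5_000

# Step 1: transparent formal specification of a hypothetical theory as a DAG.
G = nx.DiGraph([
    ("stress", "coping"), ("stress", "wellbeing"),
    ("support", "coping"), ("support", "wellbeing"),
    ("coping", "wellbeing"),
])
assert nx.is_directed_acyclic_graph(G)

# Step 2: reduce the full model to the submodel relevant to one hypothesis.
# Target: effect of `coping` on `wellbeing`. With every parent of the
# treatment observed, the parent set is a valid backdoor adjustment set.
treatment, outcome = "coping", "wellbeing"
adjustment = sorted(G.predecessors(treatment))  # ['stress', 'support']

# Simulated data consistent with the DAG (stand-in for a real dataset).
stress = rng.normal(size=n)
support = rng.normal(size=n)
coping = rng.binomial(1, 1 / (1 + np.exp(0.8 * stress - 0.8 * support)))
wellbeing = coping + 0.7 * support - 0.9 * stress + rng.normal(size=n)
df = pd.DataFrame({"stress": stress, "support": support,
                   "coping": coping, "wellbeing": wellbeing})

# Step 3: semi-parametric (doubly robust / AIPW) estimation of the ATE.
X = df[adjustment].to_numpy()
T = df[treatment].to_numpy()
Y = df[outcome].to_numpy()

e_hat = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]            # propensity
mu1 = GradientBoostingRegressor().fit(X[T == 1], Y[T == 1]).predict(X)   # E[Y|X,T=1]
mu0 = GradientBoostingRegressor().fit(X[T == 0], Y[T == 0]).predict(X)   # E[Y|X,T=0]

# AIPW score; full estimators (TMLE, double ML) add cross-fitting and
# influence-curve standard errors, omitted here for brevity.
psi = mu1 - mu0 + T * (Y - mu1) / e_hat - (1 - T) * (Y - mu0) / (1 - e_hat)
print(f"AIPW ATE estimate: {psi.mean():.2f} "
      f"(naive difference: {Y[T == 1].mean() - Y[T == 0].mean():.2f})")
```
In a real application the DAG would come from theory plus causal discovery rather than being asserted, identifiability would be checked formally, and a dedicated semi-parametric estimator would replace the hand-rolled AIPW score.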
Related papers
- Missed Causes and Ambiguous Effects: Counterfactuals Pose Challenges for Interpreting Neural Networks [14.407025310553225]
Interpretability research takes counterfactual theories of causality for granted.
Counterfactual theories have problems that bias our findings in specific and predictable ways.
We discuss the implications of these challenges for interpretability researchers.
arXiv Detail & Related papers (2024-07-05T17:53:03Z)
- Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds potential in building human-aligned and interpretable machine learning models.
We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z)
- Research on Personal Credit Risk Assessment Methods Based on Causal Inference [6.184711584674839]
This paper introduces a new definition of causality using category theory, which was proposed by Samuel Eilenberg and Saunders Mac Lane in 1945.
Because technical tools based on category theory remain underdeveloped, the paper adopts the widely used probabilistic causal graph framework proposed by Judea Pearl in 1995.
arXiv Detail & Related papers (2024-03-17T13:34:45Z)
- Deep Learning With DAGs [5.199807441687141]
We introduce causal-graphical normalizing flows (cGNFs) to empirically evaluate theories represented as directed acyclic graphs (DAGs).
Unlike conventional approaches, cGNFs model the full joint distribution of the data according to a DAG supplied by the analyst.
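As a rough illustration of the underlying idea (not the cGNF implementation itself), the sketch below fits one conditional model per node given its parents in an analyst-supplied DAG and then simulates an intervention; the toy DAG, variable names, and linear-Gaussian conditionals are assumptions made for brevity, where cGNFs would use normalizing flows for each conditional instead.
```python
import numpy as np
import networkx as nx
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
dag = nx.DiGraph([("X", "M"), ("M", "Y"), ("X", "Y")])   # analyst-supplied DAG

# Toy observational data consistent with the DAG.
X = rng.normal(size=2_000)
M = 0.5 * X + rng.normal(size=2_000)
Y = 0.8 * M + 0.3 * X + rng.normal(size=2_000)
data = {"X": X, "M": M, "Y": Y}

# Fit one conditional model per node given its parents (topological order).
models = {}
for node in nx.topological_sort(dag):
    parents = sorted(dag.predecessors(node))
    if parents:
        Z = np.column_stack([data[p] for p in parents])
        fit = LinearRegression().fit(Z, data[node])
        resid_sd = np.std(data[node] - fit.predict(Z))
        models[node] = (parents, fit, resid_sd)

def simulate(do, n=10_000):
    """Sample from the DAG-factorized model under an intervention `do`."""
    sample = {}
    for node in nx.topological_sort(dag):
        if node in do:
            sample[node] = np.full(n, do[node], dtype=float)   # clamped node
        elif node in models:
            parents, fit, sd = models[node]
            Z = np.column_stack([sample[p] for p in parents])
            sample[node] = fit.predict(Z) + rng.normal(scale=sd, size=n)
        else:
            sample[node] = rng.normal(size=n)   # exogenous root, assumed N(0,1)
    return sample

print("E[Y | do(X=1)] ≈", simulate({"X": 1.0})["Y"].mean())
```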
arXiv Detail & Related papers (2024-01-12T19:35:54Z)
- Targeted Reduction of Causal Models [55.11778726095353]
Causal Representation Learning offers a promising avenue to uncover interpretable causal patterns in simulations.
We introduce Targeted Causal Reduction (TCR), a method for condensing complex intervenable models into a concise set of causal factors.
Its ability to generate interpretable high-level explanations from complex models is demonstrated on toy and mechanical systems.
arXiv Detail & Related papers (2023-11-30T15:46:22Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error of overparameterized models that achieve effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Inferring physical laws by artificial intelligence based causal models [3.333770856102642]
We propose a causal learning model of physical principles which recognizes correlations and brings out causal relationships.
We show that this technique can not only uncover associations in the data but also correctly ascertain the cause-and-effect relations among the variables.
arXiv Detail & Related papers (2023-09-08T01:50:32Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Experiments conducted on multiple datasets offer compelling support for our theoretical assertions.
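One simplified way to picture quantile-based counterfactual inference (a stand-in for, not a reproduction of, the paper's neural approach) is sketched below: estimate the conditional quantile at which a unit's observed outcome sits given its covariates and treatment, then read off the same quantile under the opposite treatment. The simulated data, quantile grid, and gradient-boosting quantile regressors are illustrative choices, and the construction is only sensible under monotonicity-type assumptions on the outcome noise.
```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 4_000
x = rng.normal(size=n)
t = rng.binomial(1, 0.5, size=n)
u = rng.normal(size=n)                       # latent noise shared across worlds
y = x + 2.0 * t + u                          # true individual effect is 2.0

XT = np.column_stack([x, t])
taus = np.linspace(0.05, 0.95, 19)
quantile_models = [
    GradientBoostingRegressor(loss="quantile", alpha=tau).fit(XT, y)
    for tau in taus
]

def counterfactual(xi, ti, yi):
    """Predict the outcome unit i would have had under treatment 1 - ti."""
    obs = np.array([[xi, ti]])
    cf = np.array([[xi, 1 - ti]])
    q_obs = np.array([m.predict(obs)[0] for m in quantile_models])
    tau_idx = np.argmin(np.abs(q_obs - yi))   # estimated noise rank of the unit
    return quantile_models[tau_idx].predict(cf)[0]

i = 0
print("observed:", y[i],
      "counterfactual estimate:", counterfactual(x[i], t[i], y[i]),
      "oracle counterfactual:", x[i] + 2.0 * (1 - t[i]) + u[i])
```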
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- The worst of both worlds: A comparative analysis of errors in learning from data in psychology and machine learning [17.336655978572583]
Recent concerns that machine learning (ML) may be facing a misdiagnosis and replication crisis suggest that some published claims in ML research cannot be taken at face value.
A deeper understanding of what the concerns about supervised ML research have in common with the replication crisis in experimental science can put these new concerns in perspective.
arXiv Detail & Related papers (2022-03-12T18:26:24Z)
- ACRE: Abstract Causal REasoning Beyond Covariation [90.99059920286484]
We introduce the Abstract Causal REasoning dataset for systematic evaluation of current vision systems in causal induction.
Motivated by the stream of research on causal discovery in Blicket experiments, we query a visual reasoning system with four types of questions in either an independent or an interventional scenario.
We find that pure neural models tend towards an associative strategy, performing only at chance level, whereas neuro-symbolic combinations struggle with backward-blocking reasoning.
arXiv Detail & Related papers (2021-03-26T02:42:38Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows performance competitive with the state of the art on real-world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.