"Cause" is Mechanistic Narrative within Scientific Domains: An Ordinary Language Philosophical Critique of "Causal Machine Learning"
- URL: http://arxiv.org/abs/2501.05844v2
- Date: Sun, 02 Feb 2025 10:56:42 GMT
- Title: "Cause" is Mechanistic Narrative within Scientific Domains: An Ordinary Language Philosophical Critique of "Causal Machine Learning"
- Authors: Vyacheslav Kungurtsev, Leonardo Christov Moore, Gustav Sir, Martin Krutsky,
- Abstract summary: Causal Learning has emerged as a major theme of research in statistics and machine learning. In this paper we consider recognizing true cause and effect phenomena.
- Score: 2.5782973781085383
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causal Learning has emerged as a major theme of research in statistics and machine learning in recent years, promising specific computational techniques to apply to datasets that reveal the true nature of cause and effect in a number of important domains. In this paper we consider the epistemology of recognizing true cause and effect phenomena. We apply the Ordinary Language method of engaging with the customary use of the word 'cause' to investigate valid semantics of reasoning about cause and effect. We recognize that the grammars of cause and effect are fundamentally distinct in form across scientific domains, yet they maintain a consistent and central function. This function can best be described as the mechanism underlying fundamental forces of influence as considered prominent in the respective scientific domain. We demarcate 1) physics and engineering as domains wherein mathematical models are sufficient to comprehensively describe causality, 2) biology as introducing challenges of emergence while providing opportunities for showing consistent mechanisms across scale, and 3) the social sciences as introducing grander difficulties for establishing models of low prediction error but providing, through Hermeneutics, the potential for findings that are still instrumentally useful to individuals. We posit that definitive causal claims regarding a given phenomenon (writ large) can only come through an agglomeration of consistent evidence across multiple domains. This raises important methodological questions about how to harmonize between language games and across scales of emergence. Given the role of epistemic hubris in the contemporary crisis of credibility in the sciences, we argue for exercising greater caution in communicating the real degree of certainty that particular evidence provides, and we point to a rich collection of open problems in optimizing the integration of different findings.
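To make the abstract's methodological claim concrete, the toy Python sketch below illustrates one possible reading of "definitive causal claims ... can only come through an agglomeration of consistent evidence across multiple domains." The data structure, confidence threshold, and decision rule are illustrative assumptions, not constructs taken from the paper.

```python
# Toy illustration (not from the paper): a definitive causal claim is accepted
# only when findings from multiple scientific domains agree in direction and
# each clears a (hypothetical) confidence threshold.
from dataclasses import dataclass

@dataclass
class DomainFinding:
    domain: str             # e.g. "physics/engineering", "biology", "social science"
    effect_direction: int   # +1 positive effect, -1 negative effect, 0 no effect found
    confidence: float       # domain-internal confidence in [0, 1]

def agglomerate(findings: list[DomainFinding], threshold: float = 0.8) -> str:
    """Accept a causal claim only under cross-domain consistency and sufficient confidence."""
    if not findings:
        return "no evidence"
    if len({f.effect_direction for f in findings}) > 1:
        return "inconsistent evidence across domains: no definitive claim"
    if any(f.confidence < threshold for f in findings):
        return "consistent but weak evidence: claim remains tentative"
    return "definitive causal claim supported across domains"

if __name__ == "__main__":
    evidence = [
        DomainFinding("physics/engineering", +1, 0.95),
        DomainFinding("biology", +1, 0.85),
        DomainFinding("social science", +1, 0.70),
    ]
    print(agglomerate(evidence))  # consistent but weak evidence: claim remains tentative
```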
Related papers
- Causality can systematically address the monsters under the bench(marks) [64.36592889550431]
Benchmarks are plagued by various biases, artifacts, or leakage.
Models may behave unreliably due to poorly explored failure modes.
Causality offers an ideal framework to systematically address these challenges.
arXiv Detail & Related papers (2025-02-07T17:01:37Z) - LLM and Simulation as Bilevel Optimizers: A New Paradigm to Advance Physical Scientific Discovery [141.39722070734737]
We propose to enhance the knowledge-driven, abstract reasoning abilities of Large Language Models with the computational strength of simulations.
We introduce Scientific Generative Agent (SGA), a bilevel optimization framework.
We conduct experiments to demonstrate our framework's efficacy in law discovery and molecular design.
arXiv Detail & Related papers (2024-05-16T03:04:10Z) - Skews in the Phenomenon Space Hinder Generalization in Text-to-Image Generation [59.138470433237615]
We introduce statistical metrics that quantify both the linguistic and visual skew of a dataset for relational learning.
We show that systematically controlled metrics are strongly predictive of generalization performance.
This work points towards improving data diversity and balance, rather than simply scaling up absolute dataset size.
arXiv Detail & Related papers (2024-03-25T03:18:39Z) - A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z) - Emergence and Causality in Complex Systems: A Survey on Causal Emergence and Related Quantitative Studies [12.78006421209864]
Causal emergence theory employs measures of causality to quantify emergence.
Two key problems are addressed: quantifying causal emergence and identifying it in data.
We highlight that the architectures used for identifying causal emergence are shared by causal representation learning, causal model abstraction, and world model-based reinforcement learning.
arXiv Detail & Related papers (2023-12-28T04:20:46Z) - Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems [268.585904751315]
A new area of research known as AI for science (AI4Science) has emerged.
Its subfields aim at understanding the physical world from subatomic (wavefunctions and electron density), atomic (molecules, proteins, materials, and interactions), to macro (fluids, climate, and subsurface) scales.
Key common challenge is how to capture physics first principles, especially symmetries, in natural systems by deep learning methods.
arXiv Detail & Related papers (2023-07-17T12:14:14Z) - A Causal Framework for Decomposing Spurious Variations [68.12191782657437]
We develop tools for decomposing spurious variations in Markovian and Semi-Markovian models.
We prove the first results that allow a non-parametric decomposition of spurious effects.
The described approach has several applications, ranging from explainable and fair AI to questions in epidemiology and medicine.
arXiv Detail & Related papers (2023-06-08T09:40:28Z) - A Diachronic Analysis of Paradigm Shifts in NLP Research: When, How, and Why? [84.46288849132634]
We propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques.
We define three variables to encompass diverse facets of the evolution of research topics within NLP.
We utilize a causal discovery algorithm to unveil the causal connections among these variables using observational data.
arXiv Detail & Related papers (2023-05-22T11:08:00Z) - Discovering Causal Relations and Equations from Data [23.802778299505288]
This paper reviews the concepts, methods, and relevant works on causal and equation discovery in the broad field of Physics.
We provide a taxonomy for observational causal and equation discovery, point out connections, and showcase a complete set of case studies.
Exciting times are ahead with many challenges and opportunities to improve our understanding of complex systems.
arXiv Detail & Related papers (2023-05-21T19:22:50Z) - Causal Deep Learning [77.49632479298745]
Causality has the potential to transform the way we solve real-world problems.
But causality often requires crucial assumptions which cannot be tested in practice.
We propose a new way of thinking about causality -- we call this causal deep learning.
arXiv Detail & Related papers (2023-03-03T19:19:18Z) - Causality, Causal Discovery, and Causal Inference in Structural Engineering [1.827510863075184]
This paper builds a case for causal discovery and causal inference from a civil and structural engineering perspective.
More specifically, this paper outlines the key principles of causality and the most commonly used algorithms and packages for causal discovery and causal inference.
arXiv Detail & Related papers (2022-04-04T14:49:47Z) - AI Research Associate for Early-Stage Scientific Discovery [1.6861004263551447]
Artificial intelligence (AI) has been increasingly applied in scientific activities for decades.
We present an AI research associate for early-stage scientific discovery based on a novel minimally-biased physics-based modeling.
arXiv Detail & Related papers (2022-02-02T17:05:52Z) - Expressing High-Level Scientific Claims with Formal Semantics [0.8258451067861932]
We analyze the main claims from a sample of scientific articles from all disciplines.
We find that their semantics are more complex than a straightforward application of formalisms like RDF or OWL accounts for.
We show how instantiating the five slots of a proposed 'super-pattern' leads to a strictly defined statement in higher-order logic.
arXiv Detail & Related papers (2021-09-27T09:52:49Z) - An ontology for the formalization and visualization of scientific knowledge [0.0]
We present the first version of the ontology, built from ontological sources (ontologies of knowledge objects in certain fields, as well as lexical and higher-level ontologies), specialized knowledge bases, and interviews with scientists.
The validation consists in using it to formalize knowledge from various sources, which we have begun to do in the field of physics.
arXiv Detail & Related papers (2021-07-09T10:33:45Z) - The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z) - Towards Causal Representation Learning [96.110881654479]
The two fields of machine learning and graphical causality arose and developed separately.
There is now cross-pollination and increasing interest in both fields to benefit from the advances of the other.
arXiv Detail & Related papers (2021-02-22T15:26:57Z) - Scientific intuition inspired by machine learning generated hypotheses [2.294014185517203]
We shift the focus on the insights and the knowledge obtained by the machine learning models themselves.
We apply gradient-boosted decision trees to extract human-interpretable insights from big data sets in chemistry and physics (a minimal sketch follows after this list).
The ability to go beyond numerics opens the door to use machine learning to accelerate the discovery of conceptual understanding.
arXiv Detail & Related papers (2020-10-27T12:12:12Z)
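As a rough sketch of the approach in the last entry above (gradient-boosted decision trees used to surface human-interpretable insights), here is a minimal example assuming scikit-learn and a synthetic dataset; feature importances stand in, very crudely, for the paper's insight-extraction pipeline.

```python
# Minimal sketch (assumes scikit-learn; synthetic data stands in for the
# chemistry/physics data sets mentioned above).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                                   # four synthetic descriptors
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.normal(size=500)  # target driven by d0 and d2

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Feature importances hint at which descriptors drive the target; a domain
# scientist could then attempt a mechanistic rationalization of the top ones.
for name, importance in zip(["d0", "d1", "d2", "d3"], model.feature_importances_):
    print(f"{name}: {importance:.3f}")
```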