Inferring physical laws by artificial intelligence based causal models
- URL: http://arxiv.org/abs/2309.04069v2
- Date: Thu, 9 Nov 2023 12:05:53 GMT
- Title: Inferring physical laws by artificial intelligence based causal models
- Authors: Jorawar Singh and Kishor Bharti and Arvind
- Abstract summary: We propose a causal learning model of physical principles, which recognizes correlations and brings out causal relationships.
We show that this technique can not only figure out associations among data, but is also able to correctly ascertain the cause-and-effect relations amongst the variables.
- Score: 3.333770856102642
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advances in Artificial Intelligence (AI) and Machine Learning (ML) have
opened up many avenues for scientific research, and are adding new dimensions
to the process of knowledge creation. However, even the most powerful and
versatile ML applications to date operate primarily in the domain of analysis
of associations and boil down to complex data fitting. Judea Pearl has pointed
out that Artificial General Intelligence must involve interventions: the acts
of doing and imagining. Any machine-assisted scientific discovery must
therefore include causal analysis and interventions. In this context, we
propose a causal learning model of physical principles, which not only
recognizes correlations but also brings out causal relationships.
causal inference and interventions to study the cause-and-effect relationships
in the context of some well-known physical phenomena. We show that this
technique can not only figure out associations among data, but is also able to
correctly ascertain the cause-and-effect relations amongst the variables,
thereby strengthening (or weakening) our confidence in the proposed model of
the underlying physical process.
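The asymmetry the abstract relies on can be illustrated with a toy structural causal model. The sketch below is purely illustrative and not the authors' actual method: it assumes Ohm's law as the structural equation (current I causes voltage V via V := I * R) and shows that intervening on the cause shifts the effect, while intervening on the effect leaves the cause untouched, which is what identifies the direction I -> V.

```python
import random

# Toy structural causal model (illustrative assumption, not the paper's setup):
# current I causes voltage V through Ohm's law, V := I * R.
R = 5.0  # fixed resistance (ohms)

def sample(do_I=None, do_V=None, n=1000, seed=0):
    """Sample (I, V) pairs, optionally under an intervention do(I=x) or do(V=x)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        # do(I=x) overrides the exogenous draw of I
        I = do_I if do_I is not None else rng.uniform(0.5, 2.0)
        # do(V=x) severs the structural equation V := I * R
        V = do_V if do_V is not None else I * R
        data.append((I, V))
    return data

def mean(xs):
    return sum(xs) / len(xs)

obs = sample()                 # observational data
int_I = sample(do_I=2.0)       # intervene on the cause
int_V = sample(do_V=10.0)      # intervene on the effect

# V responds to do(I=2.0): every sample has V = 2.0 * 5.0 = 10.0
print(mean([v for _, v in int_I]))
# I is unchanged under do(V=10.0): same distribution as observation
print(mean([i for i, _ in obs]), mean([i for i, _ in int_V]))
```

A model that fits only associations would find I and V perfectly correlated in either direction; only the interventional comparison breaks the tie.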
Related papers
- Position: Stop Making Unscientific AGI Performance Claims [6.343515088115924]
Developments in the field of Artificial Intelligence (AI) have created a 'perfect storm' for observing 'sparks' of Artificial General Intelligence (AGI).
We argue and empirically demonstrate that the finding of meaningful patterns in latent spaces of models cannot be seen as evidence in favor of AGI.
We conclude that both the methodological setup and common public image of AI are ideal for the misinterpretation that correlations between model representations and some variables of interest are 'caused' by the model's understanding of underlying 'ground truth' relationships.
arXiv Detail & Related papers (2024-02-06T12:42:21Z) - Causal machine learning for single-cell genomics [94.28105176231739]
We discuss the application of machine learning techniques to single-cell genomics and their challenges.
We first present the model that underlies most of current causal approaches to single-cell biology.
We then identify open problems in the application of causal approaches to single-cell data.
arXiv Detail & Related papers (2023-10-23T13:35:24Z) - Causal reasoning in typical computer vision tasks [11.95181390654463]
Causal theory models the intrinsic causal structure unaffected by data bias and is effective in avoiding spurious correlations.
This paper aims to comprehensively review the existing causal methods in typical vision and vision-language tasks such as semantic segmentation, object detection, and image captioning.
Future roadmaps are also proposed, including facilitating the development of causal theory and its application in other complex scenes and systems.
arXiv Detail & Related papers (2023-07-26T07:01:57Z) - A Causal Framework for Decomposing Spurious Variations [68.12191782657437]
We develop tools for decomposing spurious variations in Markovian and Semi-Markovian models.
We prove the first results that allow a non-parametric decomposition of spurious effects.
The described approach has several applications, ranging from explainable and fair AI to questions in epidemiology and medicine.
arXiv Detail & Related papers (2023-06-08T09:40:28Z) - Balancing Explainability-Accuracy of Complex Models [8.402048778245165]
We introduce a new approach for complex models based on the co-relation impact.
We propose approaches for both scenarios of independent features and dependent features.
We provide an upper bound of the complexity of our proposed approach for the dependent features.
arXiv Detail & Related papers (2023-05-23T14:20:38Z) - A Causal Research Pipeline and Tutorial for Psychologists and Social Scientists [7.106986689736828]
Causality is a fundamental part of the scientific endeavour to understand the world.
Unfortunately, causality is still taboo in much of psychology and social science.
Motivated by a growing number of recommendations for the importance of adopting causal approaches to research, we reformulate the typical approach to research in psychology to harmonize inevitably causal theories with the rest of the research pipeline.
arXiv Detail & Related papers (2022-06-10T15:11:57Z) - AI Research Associate for Early-Stage Scientific Discovery [1.6861004263551447]
Artificial intelligence (AI) has been increasingly applied in scientific activities for decades.
We present an AI research associate for early-stage scientific discovery based on a novel minimally-biased physics-based modeling.
arXiv Detail & Related papers (2022-02-02T17:05:52Z) - Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - ACRE: Abstract Causal REasoning Beyond Covariation [90.99059920286484]
We introduce the Abstract Causal REasoning dataset for systematic evaluation of current vision systems in causal induction.
Motivated by the stream of research on causal discovery in Blicket experiments, we query a visual reasoning system with the following four types of questions in either an independent scenario or an interventional scenario.
We notice that pure neural models tend towards an associative strategy under their chance-level performance, whereas neuro-symbolic combinations struggle in backward-blocking reasoning.
arXiv Detail & Related papers (2021-03-26T02:42:38Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.