Causality in Neural Networks -- An Extended Abstract
- URL: http://arxiv.org/abs/2106.05842v1
- Date: Thu, 3 Jun 2021 09:52:36 GMT
- Title: Causality in Neural Networks -- An Extended Abstract
- Authors: Abbavaram Gowtham Reddy
- Abstract summary: Causal reasoning is the main learning and explanation tool used by humans.
Introducing ideas from causality into machine learning helps build better-performing and more explainable models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causal reasoning is the main learning and explanation tool used by humans. AI
systems should possess causal reasoning capabilities to be deployed in the real
world with trust and reliability. Introducing ideas from causality into machine
learning helps build better-performing and more explainable models.
Explainability and causal disentanglement are important aspects of any
machine learning model. Causal explanations are needed to trust a
model's decisions, and causal disentanglement learning is important for transfer
learning applications. We apply ideas from causality in deep
learning models to achieve better, causally explainable models that are
useful for fairness, disentangled representation learning, and related applications.
Related papers
- Conceptual and Unbiased Reasoning in Language Models [98.90677711523645]
We propose a novel conceptualization framework that forces models to perform conceptual reasoning on abstract questions.
We show that existing large language models fall short on conceptual reasoning, dropping 9% to 28% on various benchmarks.
We then discuss how models can improve since high-level abstract reasoning is key to unbiased and generalizable decision-making.
arXiv Detail & Related papers (2024-03-30T00:53:53Z) - Why Online Reinforcement Learning is Causal [31.59766909722592]
Reinforcement learning (RL) and causal modelling naturally complement each other.
This paper examines which reinforcement learning settings we can expect to benefit from causal modelling.
arXiv Detail & Related papers (2024-03-07T04:49:48Z) - Neural Causal Abstractions [63.21695740637627]
We develop a new family of causal abstractions by clustering variables and their domains.
We show that such abstractions are learnable in practical settings through Neural Causal Models.
Our experiments support the theory and illustrate how to scale causal inferences to high-dimensional settings involving image data.
arXiv Detail & Related papers (2024-01-05T02:00:27Z) - Instance-wise or Class-wise? A Tale of Neighbor Shapley for Concept-based Explanation [37.033629287045784]
Deep neural networks have demonstrated remarkable performance in many data-driven and prediction-oriented applications.
Their most significant drawback is the lack of interpretability, which makes them less attractive in many real-world applications.
arXiv Detail & Related papers (2021-09-03T08:34:37Z) - Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z) - Towards Causal Representation Learning [96.110881654479]
The two fields of machine learning and graphical causality arose and developed separately.
There is now cross-pollination and increasing interest in both fields to benefit from the advances of the other.
arXiv Detail & Related papers (2021-02-22T15:26:57Z) - Abduction and Argumentation for Explainable Machine Learning: A Position Survey [2.28438857884398]
This paper presents Abduction and Argumentation as two principled forms of reasoning.
It fleshes out the fundamental role they can play within Machine Learning.
arXiv Detail & Related papers (2020-10-24T13:23:44Z) - Social Commonsense Reasoning with Multi-Head Knowledge Attention [24.70946979449572]
Social Commonsense Reasoning requires understanding of text, knowledge about social events and their pragmatic implications, as well as commonsense reasoning skills.
We propose a novel multi-head knowledge attention model that encodes semi-structured commonsense inference rules and learns to incorporate them in a transformer-based reasoning cell.
arXiv Detail & Related papers (2020-10-12T10:24:40Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain an agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - CausaLM: Causal Model Explanation Through Counterfactual Language Models [33.29636213961804]
CausaLM is a framework for producing causal model explanations using counterfactual language representation models.
We show that language representation models such as BERT can effectively learn a counterfactual representation for a given concept of interest.
A byproduct of our method is a language representation model that is unaffected by the tested concept.
arXiv Detail & Related papers (2020-05-27T15:06:35Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.