Causal Feature Selection for Responsible Machine Learning
- URL: http://arxiv.org/abs/2402.02696v1
- Date: Mon, 5 Feb 2024 03:20:28 GMT
- Title: Causal Feature Selection for Responsible Machine Learning
- Authors: Raha Moraffah, Paras Sheth, Saketh Vishnubhatla, and Huan Liu
- Abstract summary: The need for responsible machine learning has emerged, focusing on aligning ML models to ethical and social values.
This survey addresses four main issues: interpretability, fairness, adversarial robustness, and domain generalization.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning (ML) has become an integral aspect of many real-world
applications. As a result, the need for responsible machine learning has
emerged, focusing on aligning ML models to ethical and social values, while
enhancing their reliability and trustworthiness. Responsible ML involves many
issues; this survey addresses four main ones: interpretability, fairness,
adversarial robustness, and domain generalization. Feature selection plays a
pivotal role in these responsible ML tasks. However, selection built on
statistical correlations between variables can pick up spurious patterns,
introducing biases and compromising performance. This survey examines the
current state of causal feature selection: what it is and how it can reinforce
the four aspects of responsible ML. By identifying features with causal impacts
on outcomes and distinguishing causality from correlation, causal feature
selection is posited as a unique approach to ensuring that ML models are
ethically and socially responsible in high-stakes applications.
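To make the contrast between correlation-based and causal feature selection concrete, the sketch below is a minimal illustration (not taken from the survey) under the assumption of a linear-Gaussian setting, where partial correlation can stand in for a conditional-independence test. It keeps a feature only if its association with the outcome survives conditioning on the remaining features, so a variable that is merely correlated with the outcome through another feature is screened out; the function names and the toy data are hypothetical.

```python
# Illustrative sketch only: a crude parents/children-style causal screen,
# assuming continuous variables and roughly linear relationships.
import numpy as np
from scipy import stats

def partial_corr(x, y, Z):
    """Correlation between x and y after regressing out the columns of Z."""
    Z = np.column_stack([np.ones(len(x)), Z])          # add intercept
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residual of x given Z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residual of y given Z
    return stats.pearsonr(rx, ry)                      # (r, p-value)

def causal_screen(X, y, alpha=0.05):
    """Keep feature j only if it remains associated with y after
    conditioning on all other features."""
    d = X.shape[1]
    selected = []
    for j in range(d):
        others = np.delete(X, j, axis=1)
        _, p = partial_corr(X[:, j], y, others)
        if p < alpha:
            selected.append(j)
    return selected

# Toy example: x0 causes y; x1 is a noisy copy of x0, so it is
# correlated with y but carries no causal signal of its own.
rng = np.random.default_rng(0)
x0 = rng.normal(size=2000)
x1 = x0 + rng.normal(scale=1.0, size=2000)
y = 2.0 * x0 + rng.normal(scale=0.5, size=2000)
X = np.column_stack([x0, x1])
print(causal_screen(X, y))   # typically [0]: x1 is screened off by x0
```

A plain correlation filter would keep both features here, since x1 is strongly correlated with y; conditioning on x0 exposes x1 as spurious, which is the behavior the survey attributes to causal feature selection.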
Related papers
- Evaluating Interventional Reasoning Capabilities of Large Language Models [58.52919374786108]
Large language models (LLMs) can estimate causal effects under interventions on different parts of a system.
We conduct empirical analyses to evaluate whether LLMs can accurately update their knowledge of a data-generating process in response to an intervention.
We create benchmarks that span diverse causal graphs (e.g., confounding, mediation) and variable types, and enable a study of intervention-based reasoning.
arXiv Detail & Related papers (2024-04-08T14:15:56Z)
- A Unified Causal View of Instruction Tuning [76.1000380429553]
We develop a meta Structural Causal Model (meta-SCM) to integrate different NLP tasks under a single causal structure of the data.
The key idea is to learn task-required causal factors and use only those to make predictions for a given task.
arXiv Detail & Related papers (2024-02-09T07:12:56Z)
- Advancing a Model of Students' Intentional Persistence in Machine Learning and Artificial Intelligence [0.9217021281095907]
The persistence of diverse populations has been studied in engineering.
Short-term intentional persistence is associated with academic enrollment factors such as major and level of study.
Long-term intentional persistence is correlated with measures of professional role confidence.
arXiv Detail & Related papers (2023-10-30T19:57:40Z)
- Detection and Evaluation of bias-inducing Features in Machine learning [14.045499740240823]
In the context of machine learning (ML), one can use cause-to-effect analysis to understand the reason for the biased behavior of the system.
We propose an approach for systematically identifying all bias-inducing features of a model to help support the decision-making of domain experts.
arXiv Detail & Related papers (2023-10-19T15:01:16Z)
- Cross Feature Selection to Eliminate Spurious Interactions and Single Feature Dominance Explainable Boosting Machines [0.0]
Interpretability is essential for legal, ethical, and practical reasons.
High-performance models can suffer from spurious interactions with redundant features and single-feature dominance.
In this paper, we explore novel approaches to address these issues by using alternate cross-feature selection, ensemble features, and model configuration alteration techniques.
arXiv Detail & Related papers (2023-07-17T13:47:41Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z)
- Understanding the Usability Challenges of Machine Learning In High-Stakes Decision Making [67.72855777115772]
Machine learning (ML) is being applied to a diverse and ever-growing set of domains.
In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions.
We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
arXiv Detail & Related papers (2021-03-02T22:50:45Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)