Prescriptive Machine Learning for Automated Decision Making: Challenges and Opportunities
- URL: http://arxiv.org/abs/2112.08268v1
- Date: Wed, 15 Dec 2021 17:02:07 GMT
- Title: Prescriptive Machine Learning for Automated Decision Making: Challenges and Opportunities
- Authors: Eyke Hüllermeier
- Abstract summary: Recent applications of machine learning (ML) reveal a noticeable shift from its use for predictive modeling to its use for prescriptive modeling.
Prescriptive modeling comes with new technical conditions for learning and new demands regarding reliability, responsibility, and the ethics of decision making.
To support the data-driven design of decision-making agents that act in a rational but at the same time responsible manner, a rigorous methodological foundation of prescriptive ML is needed.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent applications of machine learning (ML) reveal a noticeable shift from
its use for predictive modeling in the sense of a data-driven construction of
models mainly used for the purpose of prediction (of ground-truth facts) to its
use for prescriptive modeling. What is meant by this is the task of learning a
model that stipulates appropriate decisions about the right course of action in
real-world scenarios: Which medical therapy should be applied? Should this
person be hired for the job? As argued in this article, prescriptive modeling
comes with new technical conditions for learning and new demands regarding
reliability, responsibility, and the ethics of decision making. Therefore, to
support the data-driven design of decision-making agents that act in a rational
but at the same time responsible manner, a rigorous methodological foundation
of prescriptive ML is needed. The purpose of this short paper is to elaborate
on specific characteristics of prescriptive ML and to highlight some key
challenges it implies. Besides, drawing connections to other branches of
contemporary AI research, the grounding of prescriptive ML in a (generalized)
decision-theoretic framework is advocated.
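The decision-theoretic framing advocated in the abstract can be illustrated with a minimal sketch: a prescriptive agent selects the action that maximizes expected utility under a predictive model's probability estimates. All names, probabilities, and utilities below are hypothetical illustrations, not taken from the paper.

```python
# Minimal expected-utility decision rule (illustrative sketch only; the
# probabilities and utilities are hypothetical, not from the paper).

def best_action(actions, outcomes, prob, utility):
    """Return the action maximizing expected utility.

    prob(outcome, action)    -> P(outcome | action), e.g. from a predictive model
    utility(outcome, action) -> real-valued payoff of that outcome under that action
    """
    def expected_utility(a):
        return sum(prob(o, a) * utility(o, a) for o in outcomes)
    return max(actions, key=expected_utility)

# Toy medical-therapy example: two candidate therapies, two outcomes.
actions = ["therapy_A", "therapy_B"]
outcomes = ["recovery", "no_recovery"]
P = {("recovery", "therapy_A"): 0.7, ("no_recovery", "therapy_A"): 0.3,
     ("recovery", "therapy_B"): 0.5, ("no_recovery", "therapy_B"): 0.5}
U = {("recovery", "therapy_A"): 1.0, ("no_recovery", "therapy_A"): -0.5,
     ("recovery", "therapy_B"): 1.0, ("no_recovery", "therapy_B"): -0.1}

choice = best_action(actions, outcomes,
                     prob=lambda o, a: P[(o, a)],
                     utility=lambda o, a: U[(o, a)])
print(choice)  # therapy_A: EU = 0.7*1.0 + 0.3*(-0.5) = 0.55 vs 0.45 for therapy_B
```

The point of the sketch is the separation the paper's framing suggests: the predictive component supplies probabilities, while the prescriptive decision rests on an explicit utility model, which is where demands on reliability and responsibility attach.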
Related papers
- Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments [50.310636905746975]
Real-world machine learning systems often encounter model performance degradation due to distributional shifts in the underlying data generating process.
Existing approaches to addressing shifts, such as concept drift adaptation, are limited by their reason-agnostic nature.
We propose self-healing machine learning (SHML) to overcome these limitations.
arXiv Detail & Related papers (2024-10-31T20:05:51Z)
- Selecting Interpretability Techniques for Healthcare Machine Learning models [69.65384453064829]
In healthcare there is a pursuit for employing interpretable algorithms to assist healthcare professionals in several decision scenarios.
We overview a selection of eight algorithms, both post-hoc and model-based, that can be used for such purposes.
arXiv Detail & Related papers (2024-06-14T17:49:04Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- Fairness Implications of Heterogeneous Treatment Effect Estimation with Machine Learning Methods in Policy-making [0.0]
We argue that standard AI Fairness approaches for predictive machine learning are not suitable for all causal machine learning applications.
We argue that policy-making is best seen as a joint decision where the causal machine learning model usually only has indirect power.
arXiv Detail & Related papers (2023-09-02T03:06:14Z)
- Leaving the Nest: Going Beyond Local Loss Functions for Predict-Then-Optimize [57.22851616806617]
We show that our method achieves state-of-the-art results in four domains from the literature.
Our approach outperforms the best existing method by nearly 200% when the localness assumption is broken.
arXiv Detail & Related papers (2023-05-26T11:17:45Z)
- HEX: Human-in-the-loop Explainability via Deep Reinforcement Learning [2.322461721824713]
We propose HEX, a human-in-the-loop deep reinforcement learning approach to machine learning explainability (MLX).
Our formulation explicitly considers the decision boundary of the ML model in question, rather than the underlying training data.
Our proposed methods thus synthesize HITL MLX policies that explicitly capture the decision boundary of the model in question for use in limited data scenarios.
arXiv Detail & Related papers (2022-06-02T23:53:40Z)
- Using Shape Metrics to Describe 2D Data Points [0.0]
We propose to use shape metrics to describe 2D data to help make analyses more explainable and interpretable.
This is particularly important in applications in the medical community, where the 'right to explainability' is crucial.
arXiv Detail & Related papers (2022-01-27T23:28:42Z)
- Automated Machine Learning, Bounded Rationality, and Rational Metareasoning [62.997667081978825]
We will look at automated machine learning (AutoML) and related problems from the perspective of bounded rationality.
Taking actions under bounded resources requires an agent to reflect on how to use these resources in an optimal way.
arXiv Detail & Related papers (2021-09-10T09:10:20Z)
- Explainable AI Enabled Inspection of Business Process Prediction Models [2.5229940062544496]
We present an approach that allows us to use model explanations to investigate the reasoning behind machine-learned predictions.
A novel contribution of our approach is the proposal of model inspection that leverages both the explanations generated by interpretable machine learning mechanisms and the contextual or domain knowledge extracted from event logs that record historical process execution.
arXiv Detail & Related papers (2021-07-16T06:51:18Z)
- Towards Model-informed Precision Dosing with Expert-in-the-loop Machine Learning [0.0]
We consider a ML framework that may accelerate model learning and improve its interpretability by incorporating human experts into the model learning loop.
We propose a novel human-in-the-loop ML framework aimed at learning problems in which the cost of data annotation is high.
With an application to precision dosing, our experimental results show that the approach can learn interpretable rules from data and may potentially lower experts' workload.
arXiv Detail & Related papers (2021-06-28T03:45:09Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial amount of methods for providing interpretable explanations to machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.