Autoencoders for strategic decision support
- URL: http://arxiv.org/abs/2005.01075v1
- Date: Sun, 3 May 2020 12:54:06 GMT
- Title: Autoencoders for strategic decision support
- Authors: Sam Verboven, Jeroen Berrevoets, Chris Wuytens, Bart Baesens, Wouter
Verbeke
- Abstract summary: We introduce and extend the use of autoencoders to provide strategically relevant granular feedback.
A first experiment indicates that experts are inconsistent in their decision making, highlighting the need for strategic decision support.
Our study confirms several principal weaknesses of human decision-making and stresses the importance of synergy between a model and humans.
- Score: 5.922780668675565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the majority of executive domains, a notion of normality is involved in
most strategic decisions. However, few data-driven tools that support strategic
decision-making are available. We introduce and extend the use of autoencoders
to provide strategically relevant granular feedback. A first experiment
indicates that experts are inconsistent in their decision making, highlighting
the need for strategic decision support. Furthermore, using two large
industry-provided human resources datasets, the proposed solution is evaluated
in terms of ranking accuracy, synergy with human experts, and dimension-level
feedback. This three-point scheme is validated using (a) synthetic data, (b)
the perspective of data quality, (c) blind expert validation, and (d)
transparent expert evaluation. Our study confirms several principal weaknesses
of human decision-making and stresses the importance of synergy between a model
and humans. Moreover, unsupervised learning and in particular the autoencoder
are shown to be valuable tools for strategic decision-making.
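The abstract's core idea, that an autoencoder learns a notion of "normality" and that per-dimension reconstruction errors yield granular, strategically relevant feedback, can be sketched in code. The example below is a hypothetical illustration, not the authors' implementation: it trains a tiny linear autoencoder by gradient descent on synthetic data (standing in for the HR datasets), then ranks records by total reconstruction error and exposes dimension-level errors as feedback. All variable names and the data-generating process are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 "employee" records with 5 correlated
# performance dimensions, generated from a 2-dimensional latent factor.
latent = rng.normal(size=(200, 2))
W_true = rng.normal(size=(2, 5))
X = latent @ W_true + 0.05 * rng.normal(size=(200, 5))

# Linear autoencoder: encode 5 dims -> 2, decode 2 -> 5.
W_enc = rng.normal(scale=0.1, size=(5, 2))
W_dec = rng.normal(scale=0.1, size=(2, 5))
lr = 0.01
for _ in range(500):
    Z = X @ W_enc                 # encode
    X_hat = Z @ W_dec             # decode
    err = X_hat - X
    # Gradient descent on mean squared reconstruction error.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

recon = (X @ W_enc) @ W_dec
per_dim_error = (X - recon) ** 2   # dimension-level feedback per record
scores = per_dim_error.sum(axis=1) # one "abnormality" score per record
ranking = np.argsort(-scores)      # most anomalous records first
print(ranking[:5])
```

Records that the autoencoder reconstructs poorly deviate most from the learned normality, which supports the ranking evaluation described in the abstract; inspecting `per_dim_error` for a single record shows which dimensions drive its score, which is the granular feedback the paper evaluates with experts.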
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - PresAIse, A Prescriptive AI Solution for Enterprises [6.523929486550928]
This paper outlines an initiative from IBM Research, aiming to address some of these challenges by offering a suite of prescriptive AI solutions.
The solution suite includes scalable causal inference methods, interpretable decision-making approaches, and the integration of large language models.
A proof-of-concept, PresAIse, demonstrates the solutions' potential by enabling non-ML experts to interact with prescriptive AI models via a natural language interface.
arXiv Detail & Related papers (2024-02-03T03:23:08Z) - Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z) - Risk-reducing design and operations toolkit: 90 strategies for managing
risk and uncertainty in decision problems [65.268245109828]
This paper develops a catalog of such strategies and develops a framework for them.
It argues that they provide an efficient response to decision problems that are seemingly intractable due to high uncertainty.
It then proposes a framework to incorporate them into decision theory using multi-objective optimization.
arXiv Detail & Related papers (2023-09-06T16:14:32Z) - Learning Personalized Decision Support Policies [56.949897454209186]
$\texttt{Modiste}$ is an interactive tool to learn personalized decision support policies.
We find that personalized policies outperform offline policies, and, in the cost-aware setting, reduce the incurred cost with minimal degradation to performance.
arXiv Detail & Related papers (2023-04-13T17:53:34Z) - On solving decision and risk management problems subject to uncertainty [91.3755431537592]
Uncertainty is a pervasive challenge in decision and risk management.
This paper develops a systematic understanding of such strategies, determines their range of application, and develops a framework to better employ them.
arXiv Detail & Related papers (2023-01-18T19:16:23Z) - Algorithmic Decision-Making Safeguarded by Human Knowledge [8.482569811904028]
We study the augmentation of algorithmic decisions with human knowledge.
We show that when the algorithmic decision is optimal with large data, the non-data-driven human guardrail usually provides no benefit.
In other cases, however, the augmentation from human knowledge can still improve the performance of the algorithmic decision, even with sufficient data.
arXiv Detail & Related papers (2022-11-20T17:13:32Z) - A Human-Centric Perspective on Fairness and Transparency in Algorithmic
Decision-Making [0.0]
Automated decision systems (ADS) are increasingly used for consequential decision-making.
Non-transparent systems are prone to yield unfair outcomes because their sanity is challenging to assess and calibrate.
I aim to make the following three main contributions through my doctoral thesis.
arXiv Detail & Related papers (2022-04-29T18:31:04Z) - A Machine Learning Framework Towards Transparency in Experts' Decision
Quality [0.0]
In many important settings, transparency in experts' decision quality is rarely possible because ground truth data for evaluating the experts' decisions is costly and available only for a limited set of decisions.
We first formulate the problem of estimating experts' decision accuracy in this setting and then develop a machine-learning-based framework to address it.
Our method effectively leverages both abundant historical data on workers' past decisions, and scarce decision instances with ground truth information.
arXiv Detail & Related papers (2021-10-21T18:50:40Z) - Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.