Trustworthy and Explainable Decision-Making for Workforce Allocation
- URL: http://arxiv.org/abs/2412.10272v1
- Date: Fri, 13 Dec 2024 16:46:13 GMT
- Title: Trustworthy and Explainable Decision-Making for Workforce Allocation
- Authors: Guillaume Povéda, Ryma Boumazouza, Andreas Strahl, Mark Hall, Santiago Quintana-Amate, Nahum Alvarez, Ignace Bleukx, Dimos Tsouros, Hélène Verhaeghe, Tias Guns
- Abstract summary: This paper presents an ongoing project focused on developing a decision-making tool designed for workforce allocation.
Our objective is to create a system that not only optimises the allocation of teams to scheduled tasks but also provides clear, understandable explanations for its decisions.
By incorporating human-in-the-loop mechanisms, the tool aims to enhance user trust and facilitate interactive conflict resolution.
- Score: 5.329471304736775
- Abstract: In industrial contexts, effective workforce allocation is crucial for operational efficiency. This paper presents an ongoing project focused on developing a decision-making tool designed for workforce allocation, emphasising explainability to enhance its trustworthiness. Our objective is to create a system that not only optimises the allocation of teams to scheduled tasks but also provides clear, understandable explanations for its decisions, particularly in cases where the problem is infeasible. By incorporating human-in-the-loop mechanisms, the tool aims to enhance user trust and facilitate interactive conflict resolution. We implemented our approach in a prototype tool/digital demonstrator, intended to be evaluated on a real industrial scenario in terms of both performance and user acceptability.
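As a rough illustration of the kind of system the abstract describes (this is our own hedged sketch, not the authors' demonstrator), the snippet below models team-to-task allocation with the open-source CPMpy constraint-modelling library and, when the instance is infeasible, shrinks the constraint set to a minimal unsatisfiable subset that can serve as raw material for an explanation. The skill matrix, capacities, and the deletion-based loop are all illustrative assumptions.
```python
import cpmpy as cp

n_teams, n_tasks = 3, 5
skill_ok = [[1, 1, 0, 1, 0],
            [0, 1, 1, 0, 1],
            [1, 0, 1, 1, 0]]   # hypothetical team-task skill compatibility
capacity = [2, 1, 1]           # hypothetical max tasks per team (total 4 < 5 tasks)

assign = cp.boolvar(shape=(n_teams, n_tasks), name="assign")

constraints = []
for j in range(n_tasks):               # each task is covered by exactly one team
    constraints.append(cp.sum(assign[:, j]) == 1)
for i in range(n_teams):               # teams cannot exceed their capacity
    constraints.append(cp.sum(assign[i, :]) <= capacity[i])
for i in range(n_teams):
    for j in range(n_tasks):           # unskilled teams may not take a task
        if not skill_ok[i][j]:
            constraints.append(assign[i, j] == 0)

if cp.Model(constraints).solve():
    print(assign.value())
else:
    # Deletion-based MUS: shrink to a minimal subset of constraints that is
    # still infeasible; this subset is the raw material for an explanation.
    core = list(constraints)
    for c in list(core):
        rest = [d for d in core if d is not c]
        if not cp.Model(rest).solve():  # still UNSAT without c -> drop c
            core = rest
    print("Minimal conflicting constraints:")
    for c in core:
        print(" ", c)
```
Presenting such a minimal conflict to a planner, who can then relax one of the offending constraints and re-solve, is one concrete way the interactive conflict resolution mentioned above could be realised.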
Related papers
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
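The framework itself is not reproduced here, but the separation it describes can be illustrated with the law of total variance: total outcome variance splits into a perception term and a decision term. Everything below (the Gaussian stubs and sample sizes) is a hypothetical sketch of ours, not the paper's method.
```python
import random, statistics

def perceive(obs):   # hypothetical noisy perception: estimate state from obs
    return obs + random.gauss(0, 0.5)

def plan(state):     # hypothetical stochastic planner: scalar plan quality
    return 2.0 * state + random.gauss(0, 0.2)

obs = 1.0
states = [perceive(obs) for _ in range(200)]
means, variances = [], []
for s in states:
    outs = [plan(s) for _ in range(50)]
    means.append(statistics.mean(outs))
    variances.append(statistics.pvariance(outs))

decision_unc = statistics.mean(variances)     # E[ Var(plan | state) ]
perception_unc = statistics.pvariance(means)  # Var( E[plan | state] )
print(f"perception uncertainty={perception_unc:.3f}, "
      f"decision uncertainty={decision_unc:.3f}")
```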
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Decision-Aware Predictive Model Selection for Workforce Allocation [0.27309692684728615]
We introduce a novel framework that utilizes machine learning to predict worker behavior.
In our approach, the optimal predictive model used to represent a worker's behavior is determined by how that worker is allocated.
We present a decision-aware optimization framework that integrates predictive model selection with worker allocation.
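As a deliberately tiny sketch of the joint search structure such a framework implies (the workers, candidate models, predicted costs, and selection rule are all invented for illustration), one can enumerate allocations and let each allocation determine which candidate model represents each worker:
```python
from itertools import permutations

workers, tasks = ["w1", "w2"], ["t1", "t2"]
# Hypothetical candidate models per worker: predicted completion times.
models = {
    "w1": [{"t1": 3.0, "t2": 5.0}, {"t1": 3.5, "t2": 4.0}],
    "w2": [{"t1": 4.0, "t2": 2.5}, {"t1": 2.0, "t2": 3.0}],
}

best = None
for perm in permutations(tasks):            # every one-to-one allocation
    alloc = dict(zip(workers, perm))
    # The allocation determines which model represents each worker; here the
    # best-fitting model is crudely stood in for by the lowest prediction.
    cost = sum(min(m[alloc[w]] for m in models[w]) for w in workers)
    if best is None or cost < best[0]:
        best = (cost, alloc)
print(best)   # lowest predicted total completion time and its allocation
```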
arXiv Detail & Related papers (2024-10-10T13:59:43Z)
- Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents [110.25679611755962]
Current language model-driven agents often lack mechanisms for effective user participation, which is crucial given the vagueness commonly found in user instructions.
We introduce Intention-in-Interaction (IN3), a novel benchmark designed to inspect users' implicit intentions through explicit queries.
We empirically train Mistral-Interact, a powerful model that proactively assesses task vagueness, inquires user intentions, and refines them into actionable goals.
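The interaction pattern described above (assess vagueness, query the user explicitly, refine into an actionable goal) can be sketched schematically; the stubs below are hypothetical stand-ins for the language-model components, not Mistral-Interact itself.
```python
def vagueness(instruction):    # stub: a real agent would use an LM-based judge
    return 1.0 if "(" not in instruction else 0.0

def refine(instruction, answer):
    return f"{instruction} ({answer})"

instruction = "book a trip"
simulated_user_replies = iter(["Paris, 3-7 May, budget 800 EUR"])
while vagueness(instruction) > 0.5:
    reply = next(simulated_user_replies)   # in practice: query the real user
    instruction = refine(instruction, reply)
print("Actionable goal:", instruction)
```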
arXiv Detail & Related papers (2024-02-14T14:36:30Z)
- Personalized Decision Supports based on Theory of Mind Modeling and Explainable Reinforcement Learning [0.9071985476473737]
We propose a novel personalized decision support system that combines Theory of Mind (ToM) modeling and explainable Reinforcement Learning (XRL).
Our proposed system generates accurate and personalized interventions that are easily interpretable by end-users.
arXiv Detail & Related papers (2023-12-13T00:37:17Z)
- Rational Decision-Making Agent with Internalized Utility Judgment [91.80700126895927]
Large language models (LLMs) have advanced remarkably, prompting significant efforts to develop them into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications.
This paper proposes RadAgent, which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning.
Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks.
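The summary does not spell out how utility is learned; one plausible reading (an assumption on our part) is that candidate decisions are rated from pairwise comparisons of explored experiences, which can be sketched with a standard Elo update:
```python
def elo_update(r_a, r_b, a_wins, k=32.0):
    # Standard Elo: expected score of A, then a symmetric rating adjustment.
    expect_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    delta = k * ((1.0 if a_wins else 0.0) - expect_a)
    return r_a + delta, r_b - delta

ratings = {"plan_A": 1000.0, "plan_B": 1000.0, "plan_C": 1000.0}
# Hypothetical pairwise judgments, e.g. from self-evaluation of outcomes.
comparisons = [("plan_A", "plan_B", True), ("plan_C", "plan_A", True)]
for a, b, a_wins in comparisons:
    ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], a_wins)
print(max(ratings, key=ratings.get))   # current highest-utility candidate
```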
arXiv Detail & Related papers (2023-08-24T03:11:45Z)
- Assisting Human Decisions in Document Matching [52.79491990823573]
We devise a proxy matching task that allows us to evaluate which kinds of assistive information improve decision makers' performance.
We find that providing black-box model explanations reduces users' accuracy on the matching task.
On the other hand, custom methods that are designed to closely attend to some task-specific desiderata are found to be effective in improving user performance.
arXiv Detail & Related papers (2023-02-16T17:45:20Z)
- Explainability's Gain is Optimality's Loss? -- How Explanations Bias Decision-making [0.0]
Explanations help to facilitate communication between the algorithm and the human decision-maker.
Because feature-based explanations carry the semantics of causal models, they can induce leakage from the decision-maker's prior beliefs.
The resulting deviations can lead to sub-optimal and biased decision outcomes.
arXiv Detail & Related papers (2022-06-17T11:43:42Z)
- A Factor-Based Framework for Decision-Making Competency Self-Assessment [1.3670071336891754]
We develop a framework for generating succinct, human-understandable competency self-assessments in terms of machine self-confidence.
We combine several aspects of probabilistic meta-reasoning for algorithmic planning and decision-making under uncertainty to arrive at a novel set of generalizable self-confidence factors.
arXiv Detail & Related papers (2022-03-22T18:19:10Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
Treating the decision-making processes underlying a set of observed trajectories as an online learning problem, we cast policy inference as its inverse.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Outcome-Driven Reinforcement Learning via Variational Inference [95.82770132618862]
We discuss a new perspective on reinforcement learning, recasting it as the problem of inferring actions that achieve desired outcomes, rather than a problem of maximizing rewards.
To solve the resulting outcome-directed inference problem, we establish a novel variational inference formulation that allows us to derive a well-shaped reward function.
We empirically demonstrate that this method eliminates the need to design reward functions and leads to effective goal-directed behaviors.
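A minimal sketch of the outcome-as-inference idea, under our own simplifying assumptions (hypothetical Gaussian dynamics, scalar states, not the paper's exact derivation): the log-likelihood of the desired outcome acts as a shaped reward that ranks actions with no hand-designed reward function.
```python
import math

def log_p_outcome(state, action, goal, sigma=1.0):
    # Hypothetical Gaussian dynamics: next_state ~ N(state + action, sigma^2)
    pred = state + action
    return (-0.5 * ((goal - pred) / sigma) ** 2
            - math.log(sigma * math.sqrt(2 * math.pi)))

state, goal = 0.0, 3.0
actions = [-1.0, 0.0, 1.0, 2.0]
# Shaped reward r(s, a) = log p(goal | s, a): ranks actions by how likely
# they are to produce the desired outcome.
best = max(actions, key=lambda a: log_p_outcome(state, a, goal))
print(best)   # -> 2.0, the action whose predicted next state is nearest the goal
```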
arXiv Detail & Related papers (2021-04-20T18:16:21Z)
- Tradeoff-Focused Contrastive Explanation for MDP Planning [7.929642367937801]
In real-world applications of planning, planning agents' decisions can involve complex tradeoffs among competing objectives.
It can be difficult for the end-users to understand why an agent decides on a particular planning solution on the basis of its objective values.
We propose an approach, based on contrastive explanation, that enables a multi-objective MDP planning agent to explain its decisions in a way that communicates its tradeoff rationale.
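One simple way to realise a tradeoff-focused contrastive explanation (a hedged sketch with invented objective values, not the paper's algorithm) is to compare the chosen solution's objective vector against a user-proposed alternative and report which objectives improve and which worsen:
```python
# All objective values are invented; lower is better for every objective.
chosen = {"travel_time": 12.0, "energy": 5.0, "risk": 0.1}
alternative = {"travel_time": 9.0, "energy": 8.5, "risk": 0.1}

improves = {k: chosen[k] - alternative[k]        # alternative is better here
            for k in chosen if alternative[k] < chosen[k]}
worsens = {k: alternative[k] - chosen[k]         # alternative is worse here
           for k in chosen if alternative[k] > chosen[k]}
print(f"The alternative would improve {improves} but worsen {worsens}; "
      "under the agent's objective weighting the chosen plan still wins.")
```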
arXiv Detail & Related papers (2020-04-27T17:17:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.