Beyond the Single-Best Model: Rashomon Partial Dependence Profile for Trustworthy Explanations in AutoML
- URL: http://arxiv.org/abs/2507.14744v1
- Date: Sat, 19 Jul 2025 20:30:52 GMT
- Title: Beyond the Single-Best Model: Rashomon Partial Dependence Profile for Trustworthy Explanations in AutoML
- Authors: Mustafa Cavus, Jan N. van Rijn, Przemysław Biecek
- Abstract summary: We propose a framework that incorporates model multiplicity into explanation generation. The resulting Rashomon PDP captures interpretive variability and highlights areas of disagreement. Our findings suggest that Rashomon PDP improves the reliability and trustworthiness of model interpretations.
- Score: 4.14197005718384
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated machine learning systems efficiently streamline model selection but often focus on a single best-performing model, overlooking explanation uncertainty, an essential concern in human-centered explainable AI. To address this, we propose a novel framework that incorporates model multiplicity into explanation generation by aggregating partial dependence profiles (PDP) from a set of near-optimal models, known as the Rashomon set. The resulting Rashomon PDP captures interpretive variability and highlights areas of disagreement, providing users with a richer, uncertainty-aware view of feature effects. To evaluate its usefulness, we introduce two quantitative metrics, the coverage rate and the mean width of confidence intervals, which measure the consistency between the standard PDP and the proposed Rashomon PDP. Experiments on 35 regression datasets from the OpenML CTR23 benchmark suite show that in most cases, the Rashomon PDP covers less than 70% of the best model's PDP, underscoring the limitations of single-model explanations. Our findings suggest that the Rashomon PDP improves the reliability and trustworthiness of model interpretations by adding information that would otherwise be neglected. This is particularly useful in high-stakes domains where transparency and confidence are critical.
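A minimal sketch of the construction described in the abstract, assuming scikit-learn's `partial_dependence` and a hand-picked stand-in for the Rashomon set; the 5%/95% pointwise band and the metric formulas are illustrative readings of the abstract, not the authors' implementation:

```python
# Hypothetical sketch of the Rashomon PDP idea: aggregate PDPs over a set
# of near-optimal models and compare the band against the best model's PDP.
# A real AutoML run would collect the Rashomon set from its leaderboard.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.inspection import partial_dependence  # scikit-learn >= 1.3 key names
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)

rashomon_set = [
    RandomForestRegressor(random_state=0).fit(X, y),
    GradientBoostingRegressor(random_state=0).fit(X, y),
    Ridge().fit(X, y),
]
best_model = rashomon_set[0]  # stand-in for AutoML's single best model

# One PDP per model for the same feature, on a shared grid.
feature, profiles = 0, []
for model in rashomon_set:
    pd_res = partial_dependence(model, X, features=[feature], grid_resolution=20)
    profiles.append(pd_res["average"][0])
profiles = np.vstack(profiles)  # shape: (n_models, n_grid_points)

# Rashomon PDP: a pointwise band capturing disagreement across models.
lower, upper = np.quantile(profiles, [0.05, 0.95], axis=0)

# Illustrative versions of the two metrics named in the abstract.
best_pdp = partial_dependence(best_model, X, features=[feature],
                              grid_resolution=20)["average"][0]
coverage_rate = np.mean((best_pdp >= lower) & (best_pdp <= upper))
mean_width = np.mean(upper - lower)
print(f"coverage rate: {coverage_rate:.2f}, mean interval width: {mean_width:.2f}")
```

Under this reading, a coverage rate well below 1.0 flags grid regions where the best model's PDP falls outside the band, i.e. where near-optimal models disagree about the feature's effect.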
Related papers
- On Rollouts in Model-Based Reinforcement Learning [5.004576576202551]
Model-based reinforcement learning (MBRL) seeks to enhance data efficiency by learning a model of the environment and generating synthetic rollouts from it. Accumulated model errors during these rollouts can distort the data distribution, negatively impacting policy learning and hindering long-term planning. We propose Infoprop, a model-based rollout mechanism that separates aleatoric from model uncertainty and reduces the influence of the latter on the data distribution.
arXiv Detail & Related papers (2025-01-28T13:02:52Z) - Linguistic Fuzzy Information Evolution with Random Leader Election Mechanism for Decision-Making Systems [58.67035332062508]
Linguistic fuzzy information evolution is crucial in understanding information exchange among agents.
Different agent weights may lead to different convergence results in the classic DeGroot model.
This paper proposes three new models of linguistic fuzzy information dynamics.
arXiv Detail & Related papers (2024-10-19T18:15:24Z) - MITA: Bridging the Gap between Model and Data for Test-time Adaptation [68.62509948690698]
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models.
We propose Meet-In-The-Middle based MITA, which introduces energy-based optimization to encourage mutual adaptation of the model and data from opposing directions.
arXiv Detail & Related papers (2024-10-12T07:02:33Z) - PSLF: A PID Controller-incorporated Second-order Latent Factor Analysis Model for Recommender System [11.650076383080526]
A second-order latent factor analysis (SLF) model demonstrates superior performance in graph learning, particularly on high-dimensional and incomplete (HDI) data.
arXiv Detail & Related papers (2024-08-31T13:01:58Z) - Sample Complexity Characterization for Linear Contextual MDPs [67.79455646673762]
Contextual decision processes (CMDPs) describe a class of reinforcement learning problems in which the transition kernels and reward functions can change over time with different MDPs indexed by a context variable.
CMDPs serve as an important framework to model many real-world applications with time-varying environments.
We study CMDPs under two linear function approximation models: Model I with context-varying representations and common linear weights for all contexts; and Model II with common representations for all contexts and context-varying linear weights.
arXiv Detail & Related papers (2024-02-05T03:25:04Z) - Debiasing Multimodal Models via Causal Information Minimization [65.23982806840182]
We study bias arising from confounders in a causal graph for multimodal data.
Robust predictive features contain diverse information that helps a model generalize to out-of-distribution data.
We use these features as confounder representations and use them via methods motivated by causal theory to remove bias from models.
arXiv Detail & Related papers (2023-11-28T16:46:14Z) - FairDP: Certified Fairness with Differential Privacy [55.51579601325759]
This paper introduces FairDP, a novel training mechanism designed to provide group fairness certification for the trained model's decisions. The key idea of FairDP is to train models for distinct individual groups independently, add noise to each group's gradient for data privacy protection, and integrate knowledge from group models to formulate a model that balances privacy, utility, and fairness in downstream tasks.
arXiv Detail & Related papers (2023-05-25T21:07:20Z) - Predictable MDP Abstraction for Unsupervised Model-Based RL [93.91375268580806]
We propose predictable MDP abstraction (PMA)
Instead of training a predictive model on the original MDP, we train a model on a transformed MDP with a learned action space.
We theoretically analyze PMA and empirically demonstrate that PMA leads to significant improvements over prior unsupervised model-based RL approaches.
arXiv Detail & Related papers (2023-02-08T07:37:51Z) - ProtoVAE: A Trustworthy Self-Explainable Prototypical Variational Model [18.537838366377915]
ProtoVAE is a variational autoencoder-based framework that learns class-specific prototypes in an end-to-end manner.
It enforces trustworthiness and diversity by regularizing the representation space and introducing an orthonormality constraint.
arXiv Detail & Related papers (2022-10-15T00:42:13Z) - Deep Causal Reasoning for Recommendations [47.83224399498504]
A new trend in recommender system research is to negate the influence of confounders from a causal perspective.
We model the recommendation as a multi-cause multi-outcome (MCMO) inference problem.
We show that MCMO modeling may lead to high variance due to scarce observations associated with the high-dimensional causal space.
arXiv Detail & Related papers (2022-01-06T15:00:01Z) - Harnessing Perceptual Adversarial Patches for Crowd Counting [92.79051296850405]
Crowd counting is vulnerable to adversarial examples in the physical world.
This paper proposes the Perceptual Adversarial Patch (PAP) generation framework to learn the shared perceptual features between models.
arXiv Detail & Related papers (2021-09-16T13:51:39Z) - Explaining a Series of Models by Propagating Local Feature Attributions [9.66840768820136]
Pipelines involving several machine learning models improve performance in many domains but are difficult to understand.
We introduce a framework to propagate local feature attributions through complex pipelines of models based on a connection to the Shapley value.
Our framework enables us to draw higher-level conclusions based on groups of gene expression features for Alzheimer's and breast cancer histologic grade prediction.
arXiv Detail & Related papers (2021-04-30T22:20:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.