Circuit Representations of Random Forests with Applications to XAI
- URL: http://arxiv.org/abs/2602.08362v1
- Date: Mon, 09 Feb 2026 07:59:51 GMT
- Title: Circuit Representations of Random Forests with Applications to XAI
- Authors: Chunxi Ji, Adnan Darwiche
- Abstract summary: We present an approach for compiling a random forest classifier into a set of circuits. We show empirically that our proposed approach is significantly more efficient than existing similar approaches. We propose algorithms for computing the robustness of a decision and all shortest ways to flip it.
- Score: 8.207403859762042
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We make three contributions in this paper. First, we present an approach for compiling a random forest classifier into a set of circuits, where each circuit directly encodes the instances in some class of the classifier. We show empirically that our proposed approach is significantly more efficient than existing similar approaches. Next, we utilize this approach to further obtain circuits that are tractable for computing the complete and general reasons of a decision, which are instance abstractions that play a fundamental role in computing explanations. Finally, we propose algorithms for computing the robustness of a decision and all shortest ways to flip it. We illustrate the utility of our contributions by using them to enumerate all sufficient reasons, necessary reasons and contrastive explanations of decisions; to compute the robustness of decisions; and to identify all shortest ways to flip the decisions made by random forest classifiers learned from a wide range of datasets.
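The notion of a sufficient reason can be illustrated by brute force on a toy forest (a hypothetical three-stump majority vote over binary features; the paper's circuit-based algorithms are precisely what makes this tractable without the exponential enumeration below):

```python
from itertools import combinations, product

def forest(x):
    # Toy "random forest": majority vote of three stumps over binary
    # features (illustrative only, not the paper's compilation target).
    votes = x[0] + x[1] + (x[0] and x[2])
    return int(votes >= 2)

def is_sufficient(instance, subset):
    """A feature subset is a sufficient reason if fixing those values
    forces the forest's decision regardless of the remaining features."""
    decision = forest(instance)
    free = [i for i in range(len(instance)) if i not in subset]
    for values in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, values):
            x[i] = v
        if forest(x) != decision:
            return False
    return True

def minimal_sufficient_reasons(instance):
    """Enumerate all subset-minimal sufficient reasons, smallest first."""
    reasons = []
    for size in range(len(instance) + 1):
        for subset in combinations(range(len(instance)), size):
            if any(set(r) <= set(subset) for r in reasons):
                continue  # skip supersets of reasons already found
            if is_sufficient(instance, subset):
                reasons.append(subset)
    return reasons

print(minimal_sufficient_reasons((1, 1, 1)))  # → [(0, 1), (0, 2)]
```

For the instance (1, 1, 1), fixing features {0, 1} or {0, 2} already guarantees the positive decision, while no single feature does.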
Related papers
- Formally Explaining Decision Tree Models with Answer Set Programming [4.263043028086137]
We propose a method for generating various types of explanations, namely, sufficient, contrastive, majority, and tree-specific explanations. Compared to SAT-based approaches, our ASP-based method offers greater flexibility in encoding user preferences and supports enumeration of all possible explanations.
arXiv Detail & Related papers (2026-01-07T12:07:45Z) - RLAD: Training LLMs to Discover Abstractions for Solving Reasoning Problems [98.98963933669751]
We train models to be capable of proposing multiple abstractions given a problem, followed by RL that incentivizes building a solution. This results in a two-player RL training paradigm, abbreviated as RLAD, that jointly trains an abstraction generator and a solution generator. We show that allocating more test-time compute to generating abstractions is more beneficial for performance than generating more solutions at large test budgets.
arXiv Detail & Related papers (2025-10-02T17:44:23Z) - Efficient Fairness-Performance Pareto Front Computation [45.088727439928704]
We show that optimal fair representations possess several useful structural properties. We then show that these approximating problems can be solved efficiently via concave programming methods.
arXiv Detail & Related papers (2024-09-26T08:46:48Z) - Breiman meets Bellman: Non-Greedy Decision Trees with MDPs [8.530182510074983]
We present Dynamic Programming Decision Trees (DPDT), a framework that bridges the gap between greedy and optimal approaches. DPDT relies on a Markov Decision Process formulation combined with split generation to construct near-optimal decision trees. Our empirical study shows that DPDT achieves near-optimal loss with orders of magnitude fewer operations than existing optimal solvers.
arXiv Detail & Related papers (2023-09-22T08:18:08Z) - Domain Generalization via Rationale Invariance [70.32415695574555]
This paper offers a new perspective to ease the challenge of domain generalization, which involves maintaining robust results even in unseen environments.
We propose treating the element-wise contributions to the final results as the rationale for making a decision and representing the rationale for each sample as a matrix.
Our experiments demonstrate that the proposed approach achieves competitive results across various datasets, despite its simplicity.
arXiv Detail & Related papers (2023-08-22T03:31:40Z) - Finding Regions of Counterfactual Explanations via Robust Optimization [0.0]
A counterfactual explanation (CE) is a minimal perturbed data point for which the decision of the model changes.
Most of the existing methods can only provide one CE, which may not be achievable for the user.
We derive an iterative method to calculate robust CEs that remain valid even after the features are slightly perturbed.
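For binary features, the nearest counterfactuals under Hamming distance can be enumerated by brute force; a minimal sketch with a hypothetical threshold classifier (the methods surveyed here use optimization rather than exhaustive search):

```python
from itertools import combinations

def model(x):
    # Hypothetical black-box binary classifier over binary features.
    return int(x[0] + x[1] + x[2] >= 2)

def shortest_flips(instance):
    """Enumerate all minimum-size feature subsets whose flipping
    changes the model's decision (contrastive explanations)."""
    decision = model(instance)
    n = len(instance)
    for size in range(1, n + 1):
        found = []
        for subset in combinations(range(n), size):
            x = list(instance)
            for i in subset:
                x[i] = 1 - x[i]  # flip each feature in the subset
            if model(x) != decision:
                found.append(subset)
        if found:
            return found  # stop at the smallest size that works
    return []

print(shortest_flips((1, 1, 0)))  # → [(0,), (1,)]
```

Here the decision for (1, 1, 0) can be flipped by changing either feature 0 or feature 1 alone, so both single-feature flips are returned as the set of shortest counterfactuals.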
arXiv Detail & Related papers (2023-01-26T14:06:26Z) - Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z) - Simple is better: Making Decision Trees faster using random sampling [4.284674689172996]
In recent years, gradient boosted decision trees have become popular in building robust machine learning models on big data.
We show theoretically and empirically that choosing the split points uniformly at random provides the same or even better performance in terms of accuracy and computational efficiency.
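The contrast between exhaustive and random split selection can be sketched with Gini impurity (illustrative code under our own assumptions, not the paper's implementation):

```python
import random

def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def split_impurity(xs, ys, t):
    """Weighted impurity of the two children after splitting at threshold t."""
    left = [y for x, y in zip(xs, ys) if x <= t]
    right = [y for x, y in zip(xs, ys) if x > t]
    n = len(ys)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

def best_split(xs, ys):
    # Exhaustive: evaluate every observed feature value as a threshold.
    return min(sorted(set(xs)), key=lambda t: split_impurity(xs, ys, t))

def random_split(xs, rng=None):
    # Random: draw one threshold uniformly over the feature's range,
    # avoiding the scan over all candidate split points entirely.
    rng = rng or random.Random()
    return rng.uniform(min(xs), max(xs))

xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]
print(split_impurity(xs, ys, best_split(xs, ys)))  # → 0.0
```

On this separable toy data both strategies find a pure split with high probability, but the random variant costs a single draw per node rather than a full pass over candidate thresholds.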
arXiv Detail & Related papers (2021-08-19T17:00:21Z) - Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment [79.5678820246642]
We show that certain action-value methods are more sample efficient than policy-gradient methods on transfer problems that require only sparse changes to a sequence of previously optimal decisions.
We generalize the recently proposed societal decision-making framework as a more granular formalism than the Markov decision process.
arXiv Detail & Related papers (2021-06-28T21:29:13Z) - Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors.
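The pathwise view can be summarized by Matheron's update rule, written here in standard GP notation (our symbols, not necessarily the paper's):

```latex
(f \mid \mathbf{y})(\cdot) \;=\; f(\cdot) \;+\; K(\cdot, X)\, K(X, X)^{-1}\bigl(\mathbf{y} - f(X)\bigr)
```

where $f \sim \mathcal{GP}(0, K)$ is a prior draw, $X$ are the conditioning inputs, and $\mathbf{y}$ the observed values; for noisy observations $K(X,X)$ is replaced by $K(X,X) + \sigma^2 I$. A posterior sample is thus obtained by correcting an entire prior sample path with a data-dependent term, rather than by a cubic-cost draw from the marginal posterior at each set of test locations.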
arXiv Detail & Related papers (2020-11-08T17:09:37Z) - On The Reasons Behind Decisions [11.358487655918676]
We define notions such as sufficient, necessary and complete reasons behind decisions.
We show how these notions can be used to evaluate counterfactual statements.
We present efficient algorithms for computing these notions.
arXiv Detail & Related papers (2020-02-21T13:37:29Z) - Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastic perturbations, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
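The perturbation idea behind that last entry can be sketched as a Monte Carlo estimate of a smoothed argmax (a minimal illustration of the general technique, not the paper's estimator):

```python
import random

def perturbed_argmax(theta, n_samples=2000, sigma=1.0, seed=0):
    """Monte Carlo estimate of E[one_hot(argmax(theta + sigma * Z))],
    with Z ~ N(0, I). Unlike the hard argmax, this expectation varies
    smoothly with theta, which is the core idea behind perturbed
    optimizers."""
    rng = random.Random(seed)
    counts = [0] * len(theta)
    for _ in range(n_samples):
        perturbed = [t + sigma * rng.gauss(0.0, 1.0) for t in theta]
        counts[perturbed.index(max(perturbed))] += 1
    return [c / n_samples for c in counts]

print(perturbed_argmax([2.0, 1.0, 0.0]))
```

The result is a probability vector concentrated on the largest score; shrinking `sigma` recovers the hard argmax, while larger `sigma` trades bias for smoother gradients.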
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.