SCARI: Separate and Conquer Algorithm for Action Rules and
Recommendations Induction
- URL: http://arxiv.org/abs/2106.05348v1
- Date: Wed, 9 Jun 2021 19:27:30 GMT
- Title: SCARI: Separate and Conquer Algorithm for Action Rules and
Recommendations Induction
- Authors: Marek Sikora (1), Paweł Matyszok (1), Łukasz Wróbel (1) ((1)
Faculty of Automatic Control, Electronics and Computer Science, Silesian
University of Technology, Akademicka 16, 44-100 Gliwice, Poland)
- Abstract summary: This article describes an action rule induction algorithm based on a sequential covering approach.
The algorithm allows action rules to be induced from both the source and the target decision class point of view.
The application of rule quality measures enables the induction of action rules that meet various quality criteria.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article describes an action rule induction algorithm based on a
sequential covering approach. Two variants of the algorithm are presented. The
algorithm allows action rules to be induced from both the source and the target
decision class point of view. The application of rule quality measures enables
the induction of action rules that meet various quality criteria. The article
also presents a method for recommendation induction. The recommendations
indicate the actions to be taken to move a given test example, representing the
source class, to the target one. The recommendation method is based on a set of
induced action rules. The experimental part of the article presents the results
of running the algorithm on sixteen data sets. As a result of the conducted
research, the Ac-Rules package was made available.
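To make the abstract concrete, the sketch below illustrates a separate-and-conquer (sequential covering) loop for action rules, followed by the recommendation step. It is a minimal illustration, not the Ac-Rules implementation: categorical attributes are assumed, rule precision stands in for the configurable quality measures the paper mentions, and the heuristic that picks an action's target value (the most frequent value in the target class) is an assumption made only for this sketch.

```python
# Hedged sketch of separate-and-conquer action rule induction. An action rule
# maps attribute -> (source_value, target_value): "change attr from src to tgt".
from collections import Counter

def covers(rule, example):
    # The left (source) side of the rule matches the example's current values.
    return all(example[a] == src for a, (src, tgt) in rule.items())

def precision(rule, examples, source_class):
    # Stand-in quality measure: fraction of covered examples in the source class.
    covered = [e for e in examples if covers(rule, e)]
    return (sum(e["class"] == source_class for e in covered) / len(covered)
            if covered else 0.0)

def grow_rule(seed_examples, all_examples, attributes, source_class, target_class):
    # Greedily add conditions while they improve the quality measure.
    rule, best = {}, 0.0
    target_pool = [e for e in all_examples if e["class"] == target_class]
    while True:
        best_cond = None
        for a in attributes:
            if a in rule:
                continue
            for src in {e[a] for e in seed_examples}:
                # Assumed heuristic: the action's target value is the most
                # frequent value of this attribute in the target class.
                candidates = Counter(e[a] for e in target_pool)
                tgt = candidates.most_common(1)[0][0] if candidates else src
                trial = {**rule, a: (src, tgt)}
                q = precision(trial, all_examples, source_class)
                if q > best:
                    best, best_cond = q, (a, (src, tgt))
        if best_cond is None:
            return rule
        rule[best_cond[0]] = best_cond[1]

def induce_action_rules(examples, attributes, source_class, target_class):
    # Separate and conquer: grow a rule, remove the source-class examples it
    # covers, repeat until no uncovered source example remains.
    rules = []
    remaining = [e for e in examples if e["class"] == source_class]
    while remaining:
        rule = grow_rule(remaining, examples, attributes,
                         source_class, target_class)
        newly_covered = [e for e in remaining if covers(rule, e)]
        if not rule or not newly_covered:
            break
        rules.append(rule)
        remaining = [e for e in remaining if not covers(rule, e)]
    return rules

def recommend(test_example, rules):
    # A recommendation lists the attribute changes prescribed by the first
    # induced rule whose source side matches the test example.
    for rule in rules:
        if covers(rule, test_example):
            return {a: f"{src} -> {tgt}" for a, (src, tgt) in rule.items()}
    return None
```

In the paper the grow step is followed by pruning and the quality measure is configurable; this sketch keeps only the covering skeleton.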
Related papers
- Learning Decision Trees and Forests with Algorithmic Recourse [11.401006371457436]
Algorithmic Recourse (AR) aims to provide a recourse action for altering an undesired prediction made by a model.
We formulate the task of learning an accurate classification tree under the constraint of ensuring the existence of reasonable actions for as many instances as possible.
arXiv Detail & Related papers (2024-06-03T08:33:42Z)
- Code Models are Zero-shot Precondition Reasoners [83.8561159080672]
We use code representations to reason about action preconditions for sequential decision making tasks.
We propose a precondition-aware action sampling strategy that ensures actions predicted by a policy are consistent with preconditions.
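The sampling strategy described here can be pictured as a filter between the policy and the environment. The sketch below is an illustration under assumed interfaces: `policy_distribution` and `preconditions_hold` are hypothetical stand-ins, not the paper's API.

```python
# Hypothetical sketch of precondition-aware action sampling: actions proposed
# by a policy are rejected until one satisfies the predicted preconditions.
import random

def sample_consistent_action(state, policy_distribution, preconditions_hold,
                             max_tries=100):
    # policy_distribution(state) -> list of (action, probability) pairs;
    # preconditions_hold(state, action) -> bool (e.g., a code-based checker).
    actions, weights = zip(*policy_distribution(state))
    for _ in range(max_tries):
        action = random.choices(actions, weights=weights, k=1)[0]
        if preconditions_hold(state, action):
            return action
    # Fall back to the highest-probability valid action, if any.
    for action, _ in sorted(policy_distribution(state), key=lambda p: -p[1]):
        if preconditions_hold(state, action):
            return action
    return None
```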
arXiv Detail & Related papers (2023-11-16T06:19:27Z)
- A Voting Approach for Explainable Classification with Rule Learning [0.0]
We introduce a voting approach combining both worlds, aiming to achieve results comparable to (unexplainable) state-of-the-art methods.
We prove that our approach not only clearly outperforms ordinary rule learning methods, but also yields results on a par with state-of-the-art outcomes.
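As a rough illustration of rule-based voting (the paper's exact committee construction is not given here), several rule sets can be combined by majority vote; the rule representation below is an assumption made for the sketch.

```python
# Hedged sketch: majority voting over several rule-based classifiers. A rule
# is assumed to be (conditions, predicted_class), with conditions as a dict
# of attribute -> required value; this is illustrative, not the paper's format.
from collections import Counter

def vote_predict(example, rule_sets, default_class):
    votes = []
    for rules in rule_sets:
        for conditions, predicted_class in rules:
            if all(example.get(a) == v for a, v in conditions.items()):
                votes.append(predicted_class)  # first matching rule votes
                break
    return Counter(votes).most_common(1)[0][0] if votes else default_class
```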
arXiv Detail & Related papers (2023-11-13T13:22:21Z)
- Towards Target Sequential Rules [52.4562332499155]
We propose an efficient algorithm, called targeted sequential rule mining (TaSRM)
It is shown that the novel algorithm TaSRM and its variants outperform the existing baseline algorithm in experiments.
arXiv Detail & Related papers (2022-06-09T18:59:54Z)
- Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning [85.50033812217254]
Actor-critic methods are widely used in offline reinforcement learning practice, but are not so well-understood theoretically.
We propose a new offline actor-critic algorithm that naturally incorporates the pessimism principle.
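The pessimism principle mentioned here is commonly realized by penalizing value estimates by their uncertainty; the sketch below shows that generic idea only. The lower-confidence-bound form and the ensemble-based uncertainty estimate are assumptions for the sketch, not the paper's construction.

```python
# Generic sketch of a pessimistic critic target for offline RL: the value
# used for policy improvement is lowered by an uncertainty penalty, so the
# learned policy avoids actions the dataset cannot support.
import statistics

def pessimistic_value(state, action, q_ensemble, beta=1.0):
    # q_ensemble: a list of independently trained Q-functions q(state, action).
    estimates = [q(state, action) for q in q_ensemble]
    mean_q = statistics.mean(estimates)
    penalty = statistics.stdev(estimates) if len(estimates) > 1 else 0.0
    return mean_q - beta * penalty  # lower-confidence-bound style target
```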
arXiv Detail & Related papers (2021-08-19T17:27:29Z)
- Attribute reduction and rule acquisition of formal decision context based on two new kinds of decision rules [1.0914300987810128]
The premises of I-decision rules and II-decision rules are object-oriented concepts.
The attribute reduction approaches to preserve I-decision rules and II-decision rules are presented.
arXiv Detail & Related papers (2021-07-04T02:55:24Z)
- Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment [79.5678820246642]
We show that certain action-value methods are more sample efficient than policy-gradient methods on transfer problems that require only sparse changes to a sequence of previously optimal decisions.
We generalize the recently proposed societal decision-making framework as a more granular formalism than the Markov decision process.
arXiv Detail & Related papers (2021-06-28T21:29:13Z)
- Average-Reward Off-Policy Policy Evaluation with Function Approximation [66.67075551933438]
We consider off-policy policy evaluation with function approximation in average-reward MDPs.
Bootstrapping is necessary and, together with off-policy learning and function approximation (FA), results in the deadly triad.
We propose two novel algorithms, reproducing the celebrated success of Gradient TD algorithms in the average-reward setting.
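For intuition, average-reward evaluation replaces discounting with a differential value function plus an estimate of the reward rate. Below is the standard differential semi-gradient TD(0) update with linear features, shown only to illustrate the setting; it is not the paper's Gradient TD algorithms, whose point is to remain stable where this update can diverge.

```python
# Illustrative differential TD(0) update for average-reward evaluation with
# linear function approximation (textbook construction, not the paper's method).

def differential_td_step(w, r_bar, phi_s, phi_s_next, reward,
                         alpha=0.1, eta=0.1):
    # w: weight vector; r_bar: running reward-rate estimate;
    # phi_s, phi_s_next: feature vectors of states s and s'.
    v_s = sum(wi * xi for wi, xi in zip(w, phi_s))
    v_next = sum(wi * xi for wi, xi in zip(w, phi_s_next))
    delta = reward - r_bar + v_next - v_s        # differential TD error
    w = [wi + alpha * delta * xi for wi, xi in zip(w, phi_s)]
    r_bar = r_bar + eta * delta                  # update reward-rate estimate
    return w, r_bar
```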
arXiv Detail & Related papers (2021-01-08T00:43:04Z)
- Solving the scalarization issues of Advantage-based Reinforcement Learning Algorithms [2.400834442447969]
Some of the issues that arise from the scalarization of the multi-objective optimization problem in the Advantage Actor Critic (A2C) reinforcement learning algorithm are investigated.
The paper shows how a naive scalarization can lead to overlapping gradients.
The possibility that the entropy regularization term can be a source of uncontrolled noise is discussed.
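The scalarization this entry refers to is the conventional practice of collapsing A2C's three objectives into one weighted loss, so their gradients are summed with fixed coefficients; the coefficient values below are the usual defaults, not values from the paper.

```python
# Sketch of the naive scalarization discussed above: A2C's policy, value,
# and entropy objectives become one scalar loss, so their gradient
# directions are mixed ("overlap") according to a-priori coefficients.

def a2c_scalarized_loss(policy_loss, value_loss, entropy,
                        value_coef=0.5, entropy_coef=0.01):
    # Entropy is subtracted: maximizing it acts as the regularization term
    # the entry identifies as a possible source of uncontrolled noise.
    return policy_loss + value_coef * value_loss - entropy_coef * entropy
```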
arXiv Detail & Related papers (2020-04-08T17:03:21Z)
- Adaptive Stopping Rule for Kernel-based Gradient Descent Algorithms [27.002742106701863]
We propose an adaptive stopping rule for kernel-based gradient descent algorithms.
We analyze the performance of the adaptive stopping rule in the framework of learning theory.
arXiv Detail & Related papers (2020-01-09T08:12:38Z)
- Hierarchical Variational Imitation Learning of Control Programs [131.7671843857375]
We propose a variational inference method for imitation learning of a control policy represented by parametrized hierarchical procedures (PHP)
Our method discovers the hierarchical structure in a dataset of observation-action traces of teacher demonstrations, by learning an approximate posterior distribution over the latent sequence of procedure calls and terminations.
We demonstrate a novel benefit of variational inference in the context of hierarchical imitation learning: in decomposing the policy into simpler procedures, inference can leverage acausal information that is unused by other methods.
arXiv Detail & Related papers (2019-12-29T08:57:02Z)