PBRE: A Rule Extraction Method from Trained Neural Networks Designed for Smart Home Services
- URL: http://arxiv.org/abs/2207.08814v1
- Date: Mon, 18 Jul 2022 05:19:24 GMT
- Title: PBRE: A Rule Extraction Method from Trained Neural Networks Designed for Smart Home Services
- Authors: Mingming Qiu, Elie Najm, Remi Sharrock, Bruno Traverson
- Abstract summary: PBRE is proposed to extract rules from learning methods to realize dynamic rule generation for smart home systems. We also apply PBRE to extract rules from a smart home service represented by an NRL (Neural Network-based Reinforcement Learning).
- Score: 2.599882743586164
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Designing smart home services is a complex task when multiple services with a
large number of sensors and actuators are deployed simultaneously. It may rely
on knowledge-based or data-driven approaches. The former can use rule-based
methods to design services statically, and the latter can use learning methods
to discover inhabitants' preferences dynamically. However, neither of these
approaches is entirely satisfactory because rules cannot cover all possible
situations that may change, and learning methods may make decisions that are
sometimes incomprehensible to the inhabitant. In this paper, PBRE (Pedagogic
Based Rule Extractor) is proposed to extract rules from learning methods to
realize dynamic rule generation for smart home systems. The expected advantage
is that both the explainability of rule-based methods and the dynamicity of
learning methods are adopted. We compare PBRE with an existing rule extraction
method, and the results show that PBRE performs better. We also apply PBRE to
extract rules from a smart home service represented by an NRL (Neural
Network-based Reinforcement Learning). The results show that PBRE can help the
NRL-simulated service to make understandable suggestions to the inhabitant.
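To make the approach concrete: PBRE belongs to the pedagogical family of rule extractors, which treat the trained network as a black box and fit rules to its observed input/output behaviour. The Python sketch below illustrates that general recipe by distilling a black-box policy into decision-tree rules; the policy, sensor features, and helper names are illustrative assumptions, not the paper's actual algorithm or API.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def extract_rules(policy, state_samples, feature_names):
    """Pedagogical rule extraction: label sampled states with the
    black-box policy's decisions, then fit an interpretable surrogate
    whose root-to-leaf paths read as if-then rules."""
    actions = np.array([policy(s) for s in state_samples])  # query the black box
    surrogate = DecisionTreeClassifier(max_depth=4).fit(state_samples, actions)
    return export_text(surrogate, feature_names=feature_names)

# Hypothetical smart-home service: two sensors controlling one lamp.
rng = np.random.default_rng(0)
states = rng.uniform(0.0, 1.0, size=(500, 2))           # [luminosity, presence]
mock_policy = lambda s: int(s[1] > 0.5 and s[0] < 0.3)  # stand-in for the trained NN
print(extract_rules(mock_policy, states, ["luminosity", "presence"]))
```

Each root-to-leaf path of the printed tree corresponds to one human-readable rule, which is the kind of explanation that lets a service justify its suggestions to the inhabitant.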
Related papers
- Online inductive learning from answer sets for efficient reinforcement learning exploration [52.03682298194168]
We exploit inductive learning of answer set programs to learn a set of logical rules representing an explainable approximation of the agent policy.
We then perform answer set reasoning on the learned rules to guide the learning agent's exploration in the next batch (sketched below).
Our methodology produces a significant boost in the discounted return achieved by the agent, even in the first batches of training.
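The guidance step can be pictured as rule-biased exploration. The sketch below is a loose Python stand-in: the paper learns and reasons over answer set programs with an ASP solver, whereas here the "learned rules" are plain predicates, so this only illustrates the rule-guided epsilon-greedy idea, not the actual ASP machinery.

```python
import random

# Stand-in for rules induced from the agent's recent behaviour; the paper
# represents these as an answer set program, here they are plain predicates.
learned_rules = [
    (lambda s: s["distance_to_goal"] < 2.0, "move_forward"),
    (lambda s: s["obstacle_ahead"], "turn_left"),
]

def guided_action(state, q_values, actions, epsilon=0.2):
    """Epsilon-greedy action selection biased by extracted rules:
    when exploring, prefer an action that some learned rule recommends."""
    if random.random() < epsilon:
        suggested = [a for condition, a in learned_rules if condition(state)]
        return random.choice(suggested or actions)    # rule-guided exploration
    return max(actions, key=lambda a: q_values[a])    # greedy exploitation

state = {"distance_to_goal": 1.5, "obstacle_ahead": False}
q = {"move_forward": 0.3, "turn_left": 0.1, "turn_right": 0.2}
print(guided_action(state, q, list(q)))
```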
arXiv Detail & Related papers (2025-01-13T16:13:22Z)
- Upside-Down Reinforcement Learning for More Interpretable Optimal Control [2.06242362470764]
We investigate whether function approximation algorithms other than Neural Networks (NNs) can also be used within an Upside-Down Reinforcement Learning framework.
Our experiments, performed over several popular optimal control benchmarks, show that tree-based methods like Random Forests and Extremely Randomized Trees can perform just as well as NNs (see the sketch below).
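Upside-Down RL recasts control as supervised learning: a "behaviour function" maps (state, desired return, desired horizon) to an action, and the entry's point is that this function need not be a neural network. Below is a minimal sketch with a Random Forest standing in as the behaviour function; all data, shapes, and commands are made up for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
states = rng.normal(size=(1000, 4))             # states from replayed episodes
returns = rng.uniform(0, 100, size=(1000, 1))   # return actually achieved from each state
horizons = rng.integers(1, 50, size=(1000, 1))  # steps remaining in the episode
actions = rng.integers(0, 2, size=1000)         # action actually taken

# Behaviour function: predict the action given the state and a "command"
# (desired return over a desired horizon), learned by plain supervised fitting.
X = np.hstack([states, returns, horizons])
behaviour_fn = RandomForestClassifier(n_estimators=100).fit(X, actions)

# At evaluation time, ask for a high return over a short horizon.
query = np.hstack([states[:1], [[90.0]], [[10]]])
print(behaviour_fn.predict(query))
```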
arXiv Detail & Related papers (2024-11-18T10:44:20Z)
- Multi-Type Preference Learning: Empowering Preference-Based Reinforcement Learning with Equal Preferences [12.775486996512434]
Preference-based reinforcement learning (PBRL) learns directly from the preferences of human teachers regarding agent behaviors.
Existing PBRL methods often learn from explicit preferences, neglecting the possibility that teachers may choose equal preferences.
We propose a novel PBRL method, Multi-Type Preference Learning (MTPL), which allows simultaneous learning from equal preferences while leveraging existing methods for learning from explicit preferences (one common construction for the equal-preference case is sketched below).
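A common way to let a preference-based reward model absorb "equally preferred" labels is to keep the usual Bradley-Terry cross-entropy but use a soft target of 0.5 for ties, which pulls the two segments' predicted rewards together. The PyTorch sketch below shows that generic construction; it is an assumption for illustration, not necessarily MTPL's exact loss.

```python
import torch
import torch.nn as nn

# Toy reward model over 8-dimensional state features.
reward_model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))

def preference_loss(seg_a, seg_b, label):
    """Bradley-Terry loss over a pair of behaviour segments.
    label = 1.0 (a preferred), 0.0 (b preferred), 0.5 (equal preference)."""
    r_a = reward_model(seg_a).sum()        # summed reward of segment a
    r_b = reward_model(seg_b).sum()        # summed reward of segment b
    p_a = torch.sigmoid(r_a - r_b)         # P(a preferred) under Bradley-Terry
    # Soft-label cross-entropy: label 0.5 is minimized when r_a == r_b.
    return -(label * torch.log(p_a) + (1 - label) * torch.log(1 - p_a))

seg_a, seg_b = torch.randn(20, 8), torch.randn(20, 8)
loss = preference_loss(seg_a, seg_b, label=0.5)   # teacher says "equal"
loss.backward()
```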
arXiv Detail & Related papers (2024-09-11T13:43:49Z)
- Implicit Offline Reinforcement Learning via Supervised Learning [83.8241505499762]
Offline Reinforcement Learning (RL) via Supervised Learning is a simple and effective way to learn robotic skills from a dataset collected by policies of different expertise levels.
We show how implicit models can leverage return information and match or outperform explicit algorithms to acquire robotic skills from fixed datasets.
arXiv Detail & Related papers (2022-10-21T21:59:42Z)
- Efficient Dependency Analysis for Rule-Based Ontologies [0.2752817022620644]
Rule dependencies have been proposed for the static analysis of existential rule properties.
We focus on two kinds of rule dependencies -- positive reliances and restraints.
We implement optimised algorithms for their efficient computation.
arXiv Detail & Related papers (2022-07-20T05:53:36Z)
- IQ-Learn: Inverse soft-Q Learning for Imitation [95.06031307730245]
Imitation learning from a small amount of expert data can be challenging in high-dimensional environments with complex dynamics.
Behavioral cloning is a simple method that is widely used due to its ease of implementation and stable convergence.
We introduce a method for dynamics-aware IL which avoids adversarial training by learning a single Q-function.
arXiv Detail & Related papers (2021-06-23T03:43:10Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- Building Rule Hierarchies for Efficient Logical Rule Learning from Knowledge Graphs [20.251630903853016]
We propose new methods for pruning unpromising rules using rule hierarchies.
We show that the application of these hierarchical pruning methods (HPMs) is effective in removing unpromising rules.
arXiv Detail & Related papers (2020-06-29T16:33:30Z)
- Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning [75.56839075060819]
Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.
Reinforcement learning approaches can operate directly from raw sensory inputs with only a reward signal to describe the task, but are extremely sample-inefficient and brittle.
In this work, we combine the strengths of model-based methods with the flexibility of learning-based methods to obtain a general method that is able to overcome inaccuracies in the robotics perception/actuation pipeline.
arXiv Detail & Related papers (2020-05-21T19:47:05Z)
- Guided Dialog Policy Learning without Adversarial Learning in the Loop [103.20723982440788]
A number of adversarial learning methods have been proposed to learn the reward function together with the dialogue policy.
We propose to decompose the adversarial training into two steps.
First, we train the discriminator with an auxiliary dialogue generator and then incorporate a derived reward model into a common RL method to guide the dialogue policy learning.
arXiv Detail & Related papers (2020-04-07T11:03:17Z)
- Reward-Conditioned Policies [100.64167842905069]
Imitation learning requires near-optimal expert data.
Can we learn effective policies via supervised learning without demonstrations?
We show how such an approach can be derived as a principled method for policy search (the core construction is sketched below).
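The trick behind reward-conditioned policies is that every logged transition, good or bad, is a correct label for "the action to take if you want the return its trajectory achieved": the policy takes a target return as an extra input and is trained by ordinary supervised learning, then conditioned on an ambitious return at test time. Below is a minimal sketch under those assumptions; shapes and data are illustrative.

```python
import torch
import torch.nn as nn

# Reward-conditioned policy pi(a | state, target_return): the return is
# simply concatenated to the state as an extra input feature.
policy = nn.Sequential(nn.Linear(4 + 1, 64), nn.ReLU(), nn.Linear(64, 2))

states = torch.randn(256, 4)             # logged states (no expert needed)
returns = torch.rand(256, 1) * 100       # return-to-go actually achieved
actions = torch.randint(0, 2, (256,))    # actions actually taken

# Plain supervised learning: each transition labels its own achieved return.
logits = policy(torch.cat([states, returns], dim=1))
loss = nn.functional.cross_entropy(logits, actions)
loss.backward()

# At test time, condition on a high target return to elicit good behaviour.
ambitious = torch.cat([states[:1], torch.tensor([[95.0]])], dim=1)
print(policy(ambitious).argmax(dim=1))
```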
arXiv Detail & Related papers (2019-12-31T18:07:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.