CoreDiag: Eliminating Redundancy in Constraint Sets
- URL: http://arxiv.org/abs/2102.12151v1
- Date: Wed, 24 Feb 2021 09:16:10 GMT
- Title: CoreDiag: Eliminating Redundancy in Constraint Sets
- Authors: Alexander Felfernig and Christoph Zehentner and Paul Blazek
- Abstract summary: We present a new algorithm which can be exploited for the determination of minimal cores (minimal non-redundant constraint sets).
The algorithm is especially useful for distributed knowledge engineering scenarios where the degree of redundancy can become high.
In order to show the applicability of our approach, we present an empirical study conducted with commercial configuration knowledge bases.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Constraint-based environments such as configuration systems, recommender
systems, and scheduling systems support users in different decision making
scenarios. These environments exploit a knowledge base for determining
solutions of interest for the user. The development and maintenance of such
knowledge bases is an extremely time-consuming and error-prone task. Users
often specify constraints which do not reflect the real world. For example,
redundant constraints are specified which often increase both the effort of
calculating a solution and the effort of knowledge base development and
maintenance. In this paper we present a new algorithm (CoreDiag) which can be
exploited for the determination of minimal cores (minimal non-redundant
constraint sets). The algorithm is especially useful for distributed knowledge
engineering scenarios where the degree of redundancy can become high. In order
to show the applicability of our approach, we present an empirical study
conducted with commercial configuration knowledge bases.
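To make the notion of a minimal core concrete, the sketch below removes redundant constraints with the classic sequential scheme: a constraint c is redundant in C if C \ {c} already entails c, i.e., (C \ {c}) together with the negation of c is inconsistent. This is only an illustration of the concept, not the CoreDiag divide-and-conquer algorithm from the paper; `is_consistent` and `negate` are hypothetical placeholders for calls into a real constraint solver.

```python
from typing import Any, Callable, List

Constraint = Any  # placeholder for a solver-specific constraint object

def minimal_core(constraints: List[Constraint],
                 is_consistent: Callable[[List[Constraint]], bool],
                 negate: Callable[[Constraint], Constraint]) -> List[Constraint]:
    """Return a non-redundant subset of `constraints` with the same solution space."""
    core = list(constraints)
    for c in list(core):
        rest = [x for x in core if x is not c]
        # If `rest` plus the negation of c has no solution, then `rest`
        # entails c, so c carries no additional information and can be dropped.
        if not is_consistent(rest + [negate(c)]):
            core = rest
    return core
```

This sequential scan needs one consistency check per constraint; according to the abstract, CoreDiag's divide-and-conquer strategy is especially useful in distributed knowledge engineering scenarios where the degree of redundancy becomes high.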
Related papers
- Deep Neural Network for Constraint Acquisition through Tailored Loss Function [0.0]
The significance of learning constraints from data is underscored by its potential applications in real-world problem-solving.
This work introduces a novel approach grounded in Deep Neural Network (DNN) based on Symbolic Regression.
arXiv Detail & Related papers (2024-03-04T13:47:33Z)
- Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
arXiv Detail & Related papers (2023-06-15T08:33:52Z)
- Conjunctive Query Based Constraint Solving For Feature Model Configuration [79.14348940034351]
We show how to apply conjunctive queries to solve constraint satisfaction problems.
This approach allows the application of widespread database technology to solve configuration tasks.
arXiv Detail & Related papers (2023-04-26T10:08:07Z)
- Provable Reinforcement Learning with a Short-Term Memory [68.00677878812908]
We study a new subclass of POMDPs, whose latent states can be decoded by the most recent history of a short length $m$.
In particular, in the rich-observation setting, we develop new algorithms using a novel "moment matching" approach with a sample complexity that scales exponentially in the short length $m$ rather than in the full problem horizon.
Our results show that a short-term memory suffices for reinforcement learning in these environments.
arXiv Detail & Related papers (2022-02-08T16:39:57Z)
- MultiplexNet: Towards Fully Satisfied Logical Constraints in Neural Networks [21.150810790468608]
We propose a novel way to incorporate expert knowledge into the training of deep neural networks.
Many approaches encode domain constraints directly into the network architecture, requiring non-trivial or domain-specific engineering.
Our approach, called MultiplexNet, represents domain knowledge as a logical formula in disjunctive normal form (DNF) which is easy to encode and to elicit from human experts.
arXiv Detail & Related papers (2021-11-02T12:39:21Z)
- Adaptive Discretization in Online Reinforcement Learning [9.560980936110234]
Two major questions in designing discretization-based algorithms are how to create the discretization and when to refine it.
We provide a unified theoretical analysis of tree-based hierarchical partitioning methods for online reinforcement learning.
Our algorithms are easily adapted to operating constraints, and our theory provides explicit bounds across each of the three facets.
arXiv Detail & Related papers (2021-10-29T15:06:15Z)
- An Efficient Diagnosis Algorithm for Inconsistent Constraint Sets [68.8204255655161]
We introduce a divide-and-conquer based diagnosis algorithm (FastDiag) which identifies minimal sets of faulty constraints in an over-constrained problem.
We compare FastDiag with the conflict-directed calculation of hitting sets and present an in-depth performance analysis (a minimal code sketch of the divide-and-conquer scheme is given after this list).
arXiv Detail & Related papers (2021-02-17T19:55:42Z)
- An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
arXiv Detail & Related papers (2020-06-18T20:09:53Z)
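For the FastDiag entry above, the following is a minimal sketch of the divide-and-conquer diagnosis recursion described there, assuming the published FastDiag scheme; `is_consistent` is again a hypothetical placeholder for a real solver call, and constraints are assumed to compare by identity. It is an illustration, not the authors' reference implementation.

```python
from typing import Any, Callable, List

Constraint = Any  # placeholder for a solver-specific constraint object

def fast_diag(c: List[Constraint], ac: List[Constraint],
              is_consistent: Callable[[List[Constraint]], bool]) -> List[Constraint]:
    """Return a minimal subset of `c` whose removal from `ac` restores consistency."""
    if not c or not is_consistent([x for x in ac if x not in c]):
        return []  # nothing to diagnose, or the fault lies outside c
    return _fd(False, c, ac, is_consistent)

def _fd(just_removed: bool, c: List[Constraint], ac: List[Constraint],
        is_consistent: Callable[[List[Constraint]], bool]) -> List[Constraint]:
    if just_removed and is_consistent(ac):
        return []       # the constraints removed by the caller already restore consistency
    if len(c) == 1:
        return list(c)  # a single remaining candidate must belong to the diagnosis
    k = len(c) // 2
    c1, c2 = c[:k], c[k:]
    d1 = _fd(True, c2, [x for x in ac if x not in c1], is_consistent)
    d2 = _fd(bool(d1), c1, [x for x in ac if x not in d1], is_consistent)
    return d1 + d2
```

The recursion tentatively discards half of the candidate constraints per step and only descends into a half when consistency is not yet restored, which keeps the number of consistency checks low compared to first computing all minimal conflicts and their hitting sets.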
This list is automatically generated from the titles and abstracts of the papers on this site.