Partial Queries for Constraint Acquisition
- URL: http://arxiv.org/abs/2003.06649v2
- Date: Tue, 12 Oct 2021 09:41:15 GMT
- Title: Partial Queries for Constraint Acquisition
- Authors: Christian Bessiere, Clement Carbonnel, Anton Dries, Emmanuel Hebrard,
George Katsirelos, Nadjib Lazaar, Nina Narodytska, Claude-Guy Quimper, Kostas
Stergiou, Dimosthenis C. Tsouros, Toby Walsh
- Abstract summary: We learn constraint networks by asking the user partial queries.
That is, we ask the user to classify assignments to subsets of the variables as positive or negative.
We provide an algorithm, called QUACQ, that, given a negative example, focuses onto a constraint of the target network in a number of queries logarithmic in the size of the example.
- Score: 40.280429279882
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning constraint networks is known to require a number of membership
queries exponential in the number of variables. In this paper, we learn
constraint networks by asking the user partial queries. That is, we ask the
user to classify assignments to subsets of the variables as positive or
negative. We provide an algorithm, called QUACQ, that, given a negative
example, focuses onto a constraint of the target network in a number of queries
logarithmic in the size of the example. The whole constraint network can then
be learned with a polynomial number of partial queries. We give information
theoretic lower bounds for learning some simple classes of constraint networks
and show that our generic algorithm is optimal in some cases.
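The logarithmic focusing step described above can be pictured with a small sketch. The following is a toy illustration, not the paper's exact FindScope pseudocode: a hypothetical `ask` oracle stands in for the user (returning True when a partial assignment violates no target constraint), and a single violated constraint is assumed. Each level of the recursion halves the candidate set, so each scope variable is located with a number of partial queries logarithmic in the size of the example.

```python
def find_scope(e, background, candidates, ask):
    """Return the variables of `candidates` in the scope of a violated
    constraint, assuming e restricted to background+candidates is negative.
    Toy sketch: assumes exactly one violated constraint."""
    if background and not ask({v: e[v] for v in background}):
        return set()             # violated constraint lies inside background
    if len(candidates) == 1:
        return set(candidates)   # this variable must be in the scope
    half = len(candidates) // 2
    left, right = candidates[:half], candidates[half:]
    in_right = find_scope(e, background | set(left), right, ask)
    in_left = find_scope(e, background | in_right, left, ask)
    return in_right | in_left

# Toy target network over 8 variables with the single constraint x2 != x5.
e = {v: 1 for v in range(8)}     # complete assignment; violates x2 != x5

def ask(partial):
    # Hypothetical user oracle: positive iff no target constraint is violated.
    return not (2 in partial and 5 in partial and partial[2] == partial[5])

scope = find_scope(e, set(), list(range(8)), ask)   # scope == {2, 5}
```

Under these assumptions the recursion asks at most O(|scope| · log n) partial queries, matching the logarithmic focusing claimed in the abstract.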
Related papers
- Active Learning with Simple Questions [20.239213248652376]
We consider an active learning setting where a learner is presented with a pool S of n unlabeled examples belonging to a domain X.
We study more general region queries that allow the learner to pick a subset T ⊆ X of the domain and a target label y.
We show that given any hypothesis class H with VC dimension d, one can design a region query family Q with VC dimension O(d) such that for every set of n examples S ⊆ X and every h* in H, a learner can submit O(d log n) queries from Q to a
arXiv Detail & Related papers (2024-05-13T17:13:32Z)
- Learning to Learn in Interactive Constraint Acquisition [7.741303298648302]
In Constraint Acquisition (CA), the goal is to assist the user by automatically learning the model.
In (inter)active CA, this is done by interactively posting queries to the user.
We propose to use probabilistic classification models to guide interactive CA to generate more promising queries.
arXiv Detail & Related papers (2023-12-17T19:12:33Z)
- Guided Bottom-Up Interactive Constraint Acquisition [10.552990258277434]
Constraint Acquisition (CA) systems can be used to assist in the modeling of constraint satisfaction problems.
Current interactive CA algorithms suffer from at least two major bottlenecks.
We present two novel methods that improve the efficiency of CA.
arXiv Detail & Related papers (2023-07-12T12:25:37Z)
- Logical Message Passing Networks with One-hop Inference on Atomic Formulas [57.47174363091452]
We propose a framework for complex query answering that decouples the Knowledge Graph embeddings from the neural set operators.
On top of the query graph, we propose the Logical Message Passing Neural Network (LMPNN) that connects the local one-hop inferences on atomic formulas to the global logical reasoning.
Our approach yields the new state-of-the-art neural CQA model.
arXiv Detail & Related papers (2023-01-21T02:34:06Z)
- Knowledge Base Question Answering by Case-based Reasoning over Subgraphs [81.22050011503933]
We show that our model answers queries requiring complex reasoning patterns more effectively than existing KG completion algorithms.
The proposed model outperforms or performs competitively with state-of-the-art models on several KBQA benchmarks.
arXiv Detail & Related papers (2022-02-22T01:34:35Z)
- Efficient Multiple Constraint Acquisition [1.3706331473063877]
Constraint acquisition systems such as QuAcq and MultiAcq can assist non-expert users in modeling their problems as constraint networks.
We present a technique that boosts the performance of constraint acquisition by reducing the number of queries significantly.
We then turn our attention to query generation which is a significant but rather overlooked part of the acquisition process.
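The role of query generation can be sketched in a few lines. This is a deliberately naive brute-force toy, not the paper's method: it searches for an assignment accepted by the constraints learned so far but violating at least one candidate still in the bias, so that the user's yes/no answer is informative either way. The function and its arguments are illustrative names, not an API from the paper.

```python
from itertools import product

def generate_query(domains, learned, bias):
    """Toy query generation: find an assignment satisfying every learned
    constraint while violating some candidate in the bias."""
    for values in product(*domains):
        ok_learned = all(c(values) for c in learned)
        cuts_bias = any(not c(values) for c in bias)
        if ok_learned and cuts_bias:
            return values
    return None  # no such query exists: acquisition has converged

# Example: two 0/1 variables, nothing learned yet, one candidate x0 != x1.
q = generate_query([[0, 1], [0, 1]], learned=[], bias=[lambda v: v[0] != v[1]])
# q == (0, 0): accepted by the (empty) learned network, violates the candidate
```

If the user classifies the returned assignment as positive, the violated candidates are discarded from the bias; if negative, a violated constraint is known to exist among them.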
arXiv Detail & Related papers (2021-09-13T12:42:16Z)
- A simple geometric proof for the benefit of depth in ReLU networks [57.815699322370826]
We present a simple proof for the benefit of depth in multi-layer feedforward networks with rectified activation ("depth separation").
We present a concrete neural network with linear depth (in $m$) and small constant width ($\leq 4$) that classifies the problem with zero error.
arXiv Detail & Related papers (2021-01-18T15:40:27Z)
- An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
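The core idea of mining constraints from example solutions can be shown with a much simpler elimination-style toy, which is not the paper's ILP formulation: enumerate candidate pairwise disequality constraints and keep only those that no example solution violates (as in Sudoku, where solved grids never repeat a value within a row).

```python
from itertools import combinations

def mine_neq_constraints(solutions):
    """Toy constraint mining: from example solutions (equal-length lists),
    keep each candidate 'x_i != x_j' that no example violates."""
    n = len(solutions[0])
    return {(i, j) for i, j in combinations(range(n), 2)
            if all(s[i] != s[j] for s in solutions)}

# Rows of a tiny Latin square: every pair of cells differs in every example,
# so all three disequality constraints survive.
rows = [[1, 2, 3], [2, 3, 1], [3, 1, 2]]
mined = mine_neq_constraints(rows)   # == {(0, 1), (0, 2), (1, 2)}
```

The paper instead encodes such candidate constraints inside an integer linear program, but the bias-elimination intuition is the same: examples rule out constraints they violate.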
arXiv Detail & Related papers (2020-06-18T20:09:53Z)
- Optimal Clustering from Noisy Binary Feedback [75.17453757892152]
We study the problem of clustering a set of items from binary user feedback.
We devise an algorithm with a minimal cluster recovery error rate.
For adaptive selection, we develop an algorithm inspired by the derivation of the information-theoretical error lower bounds.
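A minimal sketch of the non-adaptive setting, with a hypothetical oracle and not the paper's algorithm: query each pair of items several times through a noisy "same cluster?" oracle, take a majority vote per pair, and merge agreeing pairs with union-find to read off the recovered clusters.

```python
from itertools import combinations

def cluster_from_votes(n_items, noisy_same, repeats=5):
    """Toy clustering from binary feedback: majority-vote each pair's
    repeated noisy answers, then link 'same' pairs via union-find."""
    parent = list(range(n_items))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, j in combinations(range(n_items), 2):
        votes = sum(noisy_same(i, j) for _ in range(repeats))
        if votes * 2 > repeats:            # majority says "same cluster"
            parent[find(i)] = find(j)
    return {i: find(i) for i in range(n_items)}

# Ground truth: items 0-2 in one cluster, 3-4 in another (noiseless oracle
# here for determinism; with noise, the repeated votes absorb errors).
labels = cluster_from_votes(5, lambda i, j: (i < 3) == (j < 3))
```

With noise rate below 1/2, increasing `repeats` drives the per-pair error down exponentially, which is the intuition behind the information-theoretic lower bounds mentioned above.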
arXiv Detail & Related papers (2019-10-14T09:18:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.