Learning to Learn in Interactive Constraint Acquisition
- URL: http://arxiv.org/abs/2312.10795v1
- Date: Sun, 17 Dec 2023 19:12:33 GMT
- Title: Learning to Learn in Interactive Constraint Acquisition
- Authors: Dimos Tsouros, Senne Berden, Tias Guns
- Abstract summary: In Constraint Acquisition (CA), the goal is to assist the user by automatically learning the model.
In (inter)active CA, this is done by interactively posting queries to the user.
We propose to use probabilistic classification models to guide interactive CA to generate more promising queries.
- Score: 7.741303298648302
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Constraint Programming (CP) has been successfully used to model and solve
complex combinatorial problems. However, modeling is often not trivial and
requires expertise, which is a bottleneck to wider adoption. In Constraint
Acquisition (CA), the goal is to assist the user by automatically learning the
model. In (inter)active CA, this is done by interactively posting queries to
the user, e.g., asking whether a partial solution satisfies their (unspecified)
constraints or not. While interactive CA methods learn the constraints, the
learning is related to symbolic concept learning, as the goal is to learn an
exact representation. However, a large number of queries is still required to
learn the model, which is a major limitation. In this paper, we aim to
alleviate this limitation by tightening the connection of CA and Machine
Learning (ML), by, for the first time in interactive CA, exploiting statistical
ML methods. We propose to use probabilistic classification models to guide
interactive CA to generate more promising queries. We discuss how to train
classifiers to predict whether a candidate expression from the bias is a
constraint of the problem or not, using both relation-based and scope-based
features. We then show how the predictions can be used in all layers of
interactive CA: the query generation, the scope finding, and the lowest-level
constraint finding. We experimentally evaluate our proposed methods using
different classifiers and show that our methods greatly outperform the state of
the art, decreasing the number of queries needed to converge by up to 72%.
Related papers
- Automatic Feature Learning for Essence: a Case Study on Car Sequencing [1.006631010704608]
We consider the task of building machine learning models to automatically select the best combination for a problem instance.
A critical part of the learning process is to define instance features, which serve as input to the selection model.
Our contribution is automatic learning of instance features directly from the high-level representation of a problem instance using a language model.
arXiv Detail & Related papers (2024-09-23T16:06:44Z)
- STAND: Data-Efficient and Self-Aware Precondition Induction for Interactive Task Learning [0.0]
STAND is a data-efficient and computationally efficient machine learning approach.
It produces better classification accuracy than popular approaches like XGBoost.
It produces a measure called instance certainty that can predict increases in holdout set performance.
arXiv Detail & Related papers (2024-09-11T22:49:38Z)
- SQLNet: Scale-Modulated Query and Localization Network for Few-Shot Class-Agnostic Counting [71.38754976584009]
The class-agnostic counting (CAC) task has recently been proposed to solve the problem of counting all objects of an arbitrary class with several exemplars given in the input image.
We propose a novel localization-based CAC approach, termed Scale-modulated Query and Localization Network (SQLNet).
It fully explores the scales of exemplars in both the query and localization stages and achieves effective counting by accurately locating each object and predicting its approximate size.
arXiv Detail & Related papers (2023-11-16T16:50:56Z)
- Cache & Distil: Optimising API Calls to Large Language Models [82.32065572907125]
Large-scale deployment of generative AI tools often depends on costly API calls to a Large Language Model (LLM) to fulfil user queries.
To curtail the frequency of these calls, one can employ a smaller language model -- a student.
This student gradually gains proficiency in independently handling an increasing number of user requests.
arXiv Detail & Related papers (2023-10-20T15:01:55Z)
- Guided Bottom-Up Interactive Constraint Acquisition [10.552990258277434]
Constraint Acquisition (CA) systems can be used to assist in the modeling of constraint satisfaction problems.
Current interactive CA algorithms suffer from at least two major bottlenecks.
We present two novel methods that improve the efficiency of CA.
arXiv Detail & Related papers (2023-07-12T12:25:37Z)
- Complementary Learning Subnetworks for Parameter-Efficient Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
- RetICL: Sequential Retrieval of In-Context Examples with Reinforcement Learning [53.52699766206808]
We propose Retrieval for In-Context Learning (RetICL), a learnable method for modeling and optimally selecting examples sequentially for in-context learning.
We evaluate RetICL on math word problem solving and scientific question answering tasks and show that it consistently outperforms or matches heuristic and learnable baselines.
arXiv Detail & Related papers (2023-05-23T20:15:56Z)
- Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion [50.03041373044267]
We propose a Supervised Contrastive learning framework with adaptive classification criterion for Continual Learning.
Experiments show that CFL achieves state-of-the-art performance and has a stronger ability to overcome catastrophic forgetting compared with the classification baselines.
arXiv Detail & Related papers (2023-05-20T19:22:40Z)
- On data-driven chance constraint learning for mixed-integer optimization problems [0.0]
We develop a Chance Constraint Learning (CCL) methodology with a focus on mixed-integer linear optimization problems.
CCL makes use of linearizable machine learning models to estimate conditional quantiles of the learned variables.
Open-access software has been developed for use by practitioners.
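A rough illustration of the conditional-quantile idea (generic notation, not necessarily the paper's exact formulation): a probabilistic requirement on a learned variable is replaced by a deterministic bound on its estimated conditional quantile, which a linearizable model allows to be written as linear constraints inside the mixed-integer program.

```latex
% A chance constraint on a learned quantity y, conditioned on features x,
% can be enforced through a bound on an estimated conditional quantile:
\Pr\big(y \le b \mid x\big) \ge 1 - \varepsilon
\quad \Longleftarrow \quad
\hat{Q}_{1-\varepsilon}(y \mid x) \le b
% If \hat{Q}_{1-\varepsilon} comes from a linearizable ML model, the
% right-hand condition can be embedded as linear constraints in the
% mixed-integer optimization problem.
```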
arXiv Detail & Related papers (2022-07-08T11:54:39Z)
- A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
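A generic sketch of such a constrained formulation and its Lagrangian (illustrative notation, not necessarily the paper's exact objective or constraints):

```latex
% Generic constrained formulation: minimize a training objective J(theta)
% subject to a per-sample loss bound on each labeled example.
\min_{\theta} \; J(\theta)
\quad \text{s.t.} \quad
\ell\big(f_\theta(x_i), y_i\big) \le \epsilon_i
\quad \forall (x_i, y_i) \in \mathcal{D}_{\mathrm{labeled}}

% Lagrangian: the optimal dual variables lambda_i indicate how binding each
% labeled-sample constraint is, which can guide which points to query next.
L(\theta, \lambda) = J(\theta)
  + \sum_{i} \lambda_i \Big( \ell\big(f_\theta(x_i), y_i\big) - \epsilon_i \Big),
\qquad \lambda_i \ge 0
```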
arXiv Detail & Related papers (2022-02-08T19:18:49Z)
- Probabilistic Case-based Reasoning for Open-World Knowledge Graph Completion [59.549664231655726]
A case-based reasoning (CBR) system solves a new problem by retrieving 'cases' that are similar to the given problem.
In this paper, we demonstrate that such a system is achievable for reasoning in knowledge-bases (KBs).
Our approach predicts attributes for an entity by gathering reasoning paths from similar entities in the KB.
arXiv Detail & Related papers (2020-10-07T17:48:12Z)