Towards Exploratory Reformulation of Constraint Models
- URL: http://arxiv.org/abs/2311.11868v1
- Date: Mon, 20 Nov 2023 16:04:56 GMT
- Title: Towards Exploratory Reformulation of Constraint Models
- Authors: Ian Miguel and András Z. Salamon and Christopher Stone
- Abstract summary: We propose a system that explores the space of models through a process of reformulation from an initial model.
We plan to situate this system in a refinement-based approach, where a user writes a constraint specification.
- Score: 0.44658835512168177
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is well established that formulating an effective constraint model of a
problem of interest is crucial to the efficiency with which it can subsequently
be solved. Following from the observation that it is difficult, if not
impossible, to know a priori which of a set of candidate models will perform
best in practice, we envisage a system that explores the space of models
through a process of reformulation from an initial model, guided by performance
on a set of training instances from the problem class under consideration. We
plan to situate this system in a refinement-based approach, where a user writes
a constraint specification describing a problem above the level of abstraction
at which many modelling decisions are made. In this position paper we set out
our plan for an exploratory reformulation system, and discuss progress made so
far.
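
The abstract envisages a search over candidate models produced by reformulation, guided by performance on training instances from the problem class. As a rough illustration only (this is a position paper and does not fix an algorithm), the following minimal greedy sketch shows the guidance-by-training-instances idea; `reformulations` and `solve_time` are hypothetical callables, not functions from the authors' system.

```python
def explore_reformulations(initial_model, training_instances,
                           reformulations, solve_time, budget=50):
    """Greedy exploration of a space of constraint models.

    `reformulations(model)` is assumed to yield candidate rewrites of a
    model, and `solve_time(model, instance)` to return a solver runtime in
    seconds; both are hypothetical stand-ins for the reformulation rules
    and the refinement/solving back end sketched in the abstract.
    """
    def score(model):
        # Guide the search by mean solve time over the training instances.
        times = [solve_time(model, inst) for inst in training_instances]
        return sum(times) / len(times)

    best, best_score = initial_model, score(initial_model)
    frontier = [initial_model]
    while frontier and budget > 0:
        current = frontier.pop()
        for candidate in reformulations(current):
            budget -= 1
            s = score(candidate)
            if s < best_score:              # lower mean runtime is better
                best, best_score = candidate, s
                frontier.append(candidate)  # keep exploring from improvements
            if budget <= 0:
                break
    return best, best_score
```

A real system would also need per-instance time limits and caching of scores, but the loop captures the idea of steering reformulation by measured performance.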
Related papers
- FIARSE: Model-Heterogeneous Federated Learning via Importance-Aware Submodel Extraction [26.26211464623954]
Federated Importance-Aware Submodel Extraction (FIARSE) is a novel approach that dynamically adjusts submodels based on the importance of model parameters.
Compared to existing works, the proposed method offers a theoretical foundation for submodel extraction.
Extensive experiments are conducted on various datasets to showcase the superior performance of the proposed FIARSE.
arXiv Detail & Related papers (2024-07-28T04:10:11Z)
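
The FIARSE summary above centres on extracting submodels according to the importance of model parameters. Below is a minimal NumPy sketch of that idea only, assuming (as an illustrative proxy, not the authors' definition) that importance is absolute weight magnitude; all names are hypothetical.

```python
import numpy as np

def extract_submodel(weights, keep_ratio):
    """Keep the most 'important' fraction of parameters per layer.

    Importance is approximated here by absolute weight magnitude; this is
    only an illustrative proxy for the importance measure used in FIARSE.
    """
    masked = {}
    for name, w in weights.items():
        k = max(1, int(round(keep_ratio * w.size)))
        threshold = np.partition(np.abs(w).ravel(), -k)[-k]
        mask = (np.abs(w) >= threshold).astype(w.dtype)
        masked[name] = w * mask          # zero out the discarded parameters
    return masked

# Toy two-layer model; a resource-limited client keeps 30% of the parameters.
weights = {"layer1": np.random.randn(64, 32), "layer2": np.random.randn(32, 10)}
submodel = extract_submodel(weights, keep_ratio=0.3)
```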
- Deep Generative Models for Decision-Making and Control [4.238809918521607]
The dual purpose of this thesis is to study the reasons for these shortcomings and to propose solutions for the uncovered problems.
We highlight how inference techniques from the contemporary generative modeling toolbox, including beam search, can be reinterpreted as viable planning strategies for reinforcement learning problems.
arXiv Detail & Related papers (2023-06-15T01:54:30Z)
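
The thesis summary above notes that beam search can be reinterpreted as a planning strategy for reinforcement learning. A minimal sketch of that reading, assuming a hypothetical learned `model(state, action)` that returns a predicted next state and reward:

```python
def beam_search_plan(model, start_state, actions, horizon=5, beam_width=3):
    """Plan by beam search over action sequences under a learned model.

    `model(state, action)` is assumed to return (next_state, predicted_reward);
    it stands in for whatever dynamics/reward model the agent has learned.
    """
    beam = [(0.0, start_state, [])]   # (cumulative reward, state, actions so far)
    for _ in range(horizon):
        candidates = []
        for total, state, seq in beam:
            for a in actions:
                next_state, reward = model(state, a)
                candidates.append((total + reward, next_state, seq + [a]))
        # Keep only the most promising partial plans.
        beam = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_width]
    best_total, _, best_seq = max(beam, key=lambda c: c[0])
    return best_seq, best_total
```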
- Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
arXiv Detail & Related papers (2022-12-23T22:29:08Z)
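
The summary above describes training a scorer by optimizing AUC subject to AUC-based fairness constraints. The sketch below illustrates the general idea only: a smooth pairwise AUC surrogate plus a quadratic penalty on the gap between group-wise AUCs, optimized with SciPy on toy data. It is a plain penalty method, not the authors' stochastic constrained algorithm, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def smooth_auc(scores, labels):
    """Smooth AUC surrogate: mean sigmoid of positive-minus-negative score gaps."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    gaps = pos[:, None] - neg[None, :]
    return (1.0 / (1.0 + np.exp(-gaps))).mean()

def penalised_objective(w, X, y, group, lam=10.0):
    scores = X @ w
    auc = smooth_auc(scores, y)
    # AUC-based fairness: penalise the gap between the two groups' AUCs.
    gap = (smooth_auc(scores[group == 0], y[group == 0])
           - smooth_auc(scores[group == 1], y[group == 1]))
    return -auc + lam * gap ** 2      # maximise AUC, keep group AUCs close

# Toy data: 200 samples, 5 features, binary label and binary group attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
group = rng.integers(0, 2, size=200)
result = minimize(penalised_objective, x0=np.zeros(5), args=(X, y, group))
```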
- Planning with Diffusion for Flexible Behavior Synthesis [125.24438991142573]
We consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem.
The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories.
arXiv Detail & Related papers (2022-05-20T07:02:03Z)
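
The core mechanism above is a diffusion model that plans by iteratively denoising trajectories. Below is a minimal sketch of the reverse (denoising) loop only, assuming a hypothetical `denoiser(traj, t)` that predicts the noise component and a simple linear beta schedule chosen for brevity; it is not the authors' implementation.

```python
import numpy as np

def denoise_plan(denoiser, horizon, dim, n_steps=50, rng=None):
    """Sample a trajectory by iteratively denoising Gaussian noise.

    `denoiser(traj, t)` is assumed to predict the noise at step t; the
    schedule and step rule follow a standard DDPM-style reverse process.
    """
    rng = np.random.default_rng() if rng is None else rng
    betas = np.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    traj = rng.normal(size=(horizon, dim))          # start from pure noise
    for t in reversed(range(n_steps)):
        eps = denoiser(traj, t)                     # predicted noise
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        traj = (traj - coef * eps) / np.sqrt(alphas[t])
        if t > 0:                                   # add noise except at the final step
            traj += np.sqrt(betas[t]) * rng.normal(size=traj.shape)
    return traj
```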
- Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning [65.268245109828]
In data-rich domains such as vision, language, and speech, deep learning prevails to deliver high-performance task-specific models.
Deep learning in resource-limited domains still faces multiple challenges including (i) limited data, (ii) constrained model development cost, and (iii) lack of adequate pre-trained models for effective finetuning.
Model reprogramming enables resource-efficient cross-domain machine learning by repurposing a well-developed pre-trained model from a source domain to solve tasks in a target domain without model finetuning.
arXiv Detail & Related papers (2022-02-22T02:33:54Z)
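
Model reprogramming, as summarised above, repurposes a frozen pre-trained source model by pairing an input transformation with a source-to-target label mapping, without finetuning the source weights. A minimal sketch of that idea, where `source_model`, `delta`, and `label_map` are all hypothetical illustrations rather than anything from the paper:

```python
import numpy as np

def reprogram_predict(source_model, x, delta, label_map):
    """Predict a target-domain label with a frozen source-domain model.

    `source_model` stays frozen; only the additive input perturbation `delta`
    and the source-to-target `label_map` would be trained in practice.
    """
    logits = source_model(x + delta)                 # frozen source model
    # Aggregate source-class logits into target-class scores.
    target_scores = {t: logits[src_ids].mean() for t, src_ids in label_map.items()}
    return max(target_scores, key=target_scores.get)

# Toy example: a 'source model' with 10 classes reused for a 3-class target task.
rng = np.random.default_rng(1)
W = rng.normal(size=(10, 16))
source_model = lambda x: W @ x                       # stands in for a frozen network
delta = rng.normal(scale=0.1, size=16)               # learned perturbation (random here)
label_map = {0: [0, 1, 2], 1: [3, 4, 5], 2: [6, 7, 8, 9]}
print(reprogram_predict(source_model, rng.normal(size=16), delta, label_map))
```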
- Sufficiently Accurate Model Learning for Planning [119.80502738709937]
This paper introduces the constrained Sufficiently Accurate model learning approach.
It provides examples of such problems, and presents a theorem characterising how close approximate solutions can come to the optimum.
The approximate solution quality will depend on the function parameterization, loss and constraint function smoothness, and the number of samples in model learning.
arXiv Detail & Related papers (2021-02-11T16:27:31Z)
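
The summary above concerns learning a model subject to accuracy constraints, e.g. requiring sufficient accuracy on samples that matter for planning. The NumPy sketch below uses a quadratic penalty on the constraint; the linear model, the constraint form, and all constants are assumptions made for illustration, not the paper's formulation.

```python
import numpy as np

def constrained_model_fit(X, Y, important, eps=0.05, lam=100.0,
                          lr=1e-2, steps=2000):
    """Fit a linear model Y ~ X @ W subject to an accuracy constraint.

    The constraint 'mean squared error on the `important` samples <= eps'
    is handled with a quadratic penalty; an illustrative stand-in for the
    constrained learning problem the abstract refers to.
    """
    W = np.zeros((X.shape[1], Y.shape[1]))
    Xi, Yi = X[important], Y[important]
    for _ in range(steps):
        err_all = X @ W - Y
        err_imp = Xi @ W - Yi
        violation = max(0.0, (err_imp ** 2).mean() - eps)
        grad = 2 * X.T @ err_all / err_all.size            # gradient of overall MSE
        if violation > 0:                                  # penalty active only when violated
            grad += lam * 2 * violation * (2 * Xi.T @ err_imp / err_imp.size)
        W -= lr * grad
    return W
```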
- Multiple Plans are Better than One: Diverse Stochastic Planning [26.887796946596243]
In planning problems, it is often challenging to fully model the desired specifications.
In particular, in human-robot interaction, such difficulty may arise due to humans' preferences being either private or complex to model.
We formulate a problem, called diverse planning, that aims to generate a set of representative behaviors that are near-optimal.
arXiv Detail & Related papers (2020-12-31T07:29:11Z)
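
Diverse planning, as summarised above, asks for a set of representative near-optimal behaviours rather than a single plan. A minimal greedy sketch of that goal; `sample_plan`, `cost`, and `distance` are hypothetical stand-ins for a stochastic planner, its objective (assumed non-negative), and a behaviour metric, and the farthest-point selection rule is generic rather than the authors' method.

```python
def diverse_plans(sample_plan, cost, distance, k=5, n_samples=200, tol=0.1):
    """Select k diverse, near-optimal plans from sampled candidates."""
    candidates = [sample_plan() for _ in range(n_samples)]
    best = min(cost(p) for p in candidates)
    # Near-optimal: within (1 + tol) of the best cost found (costs assumed >= 0).
    pool = [p for p in candidates if cost(p) <= (1 + tol) * best]

    chosen = [min(pool, key=cost)]                 # start from the best plan
    while len(chosen) < k and len(chosen) < len(pool):
        # Greedily add the plan farthest from everything already chosen.
        nxt = max((p for p in pool if p not in chosen),
                  key=lambda p: min(distance(p, q) for q in chosen))
        chosen.append(nxt)
    return chosen
```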
- Stein Variational Model Predictive Control [130.60527864489168]
Decision making under uncertainty is critical to real-world, autonomous systems.
Model Predictive Control (MPC) methods have demonstrated favorable performance in practice, but remain limited when dealing with complex distributions.
We show that this framework leads to successful planning in challenging, non-convex optimal control problems.
arXiv Detail & Related papers (2020-11-15T22:36:59Z)
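
Stein Variational MPC maintains a set of control-sequence particles updated by Stein variational gradient descent, which is what lets it capture complex, multimodal distributions. Below is a minimal NumPy sketch of one generic SVGD update with an RBF kernel; `grad_log_p` is a hypothetical callable for the gradient of the log target density (e.g. over flattened control sequences), and this is the textbook update rather than the authors' full MPC loop.

```python
import numpy as np

def svgd_step(particles, grad_log_p, step_size=0.1):
    """One Stein variational gradient descent update with an RBF kernel.

    `particles` has shape (n, d); `grad_log_p(x)` is assumed to return the
    gradient of the log target density at x.
    """
    n, d = particles.shape
    diffs = particles[:, None, :] - particles[None, :, :]      # (n, n, d)
    sq_dists = (diffs ** 2).sum(-1)                            # (n, n)
    h = np.median(sq_dists) / np.log(n + 1) + 1e-8             # median-heuristic bandwidth
    K = np.exp(-sq_dists / h)                                  # RBF kernel matrix

    grads = np.stack([grad_log_p(x) for x in particles])       # (n, d)
    # phi(x_i) = (1/n) * sum_j [ K_ij * grad_log_p(x_j) + grad_{x_j} K_ij ]
    grad_K = (2.0 / h) * K[:, :, None] * diffs                 # (n, n, d)
    phi = (K @ grads + grad_K.sum(axis=1)) / n
    return particles + step_size * phi
```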
- Towards Portfolios of Streamlined Constraint Models: A Case Study with the Balanced Academic Curriculum Problem [1.8466814193413488]
We focus on the automatic addition of streamliner constraints, derived from the types present in an abstract Essence specification of a problem class of interest.
The refinement of streamlined Essence specifications into constraint models gives rise to a large number of modelling choices.
Various forms of racing are utilised to constrain the computational cost of training.
arXiv Detail & Related papers (2020-09-21T19:48:02Z)
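
The streamliner portfolio work above mentions racing to constrain training cost: candidate streamlined models are evaluated on training instances and weak candidates are dropped before the full training set has been run. A minimal sketch of one simple racing scheme; `solve`, the elimination rule, and the candidate identifiers (assumed hashable) are illustrative assumptions, not the procedure used in the paper.

```python
def race_streamliners(candidates, instances, solve, time_limit=60.0, survivors=3):
    """Race candidate streamlined models across training instances.

    After each instance, the worst-performing half of the remaining
    candidates is eliminated until only `survivors` remain. `solve` is a
    hypothetical stand-in for refining and running a model; it returns a
    runtime in seconds, or None on timeout or unsatisfiability.
    """
    alive = {c: 0.0 for c in candidates}
    for inst in instances:
        for c in list(alive):
            runtime = solve(c, inst, time_limit)
            # Timeouts (or streamliners that remove all solutions) are
            # charged the full time limit.
            alive[c] += runtime if runtime is not None else time_limit
        if len(alive) > survivors:
            ranked = sorted(alive, key=alive.get)        # smallest total time first
            keep = max(survivors, len(alive) // 2)
            alive = {c: alive[c] for c in ranked[:keep]}
    return sorted(alive, key=alive.get)
```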
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
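
Control as Hybrid Inference (CHI) frames control as inference that can be amortised (a learned network proposes actions cheaply) or iterative (the proposal is refined against the model at decision time), with the paper's contribution being an implementation that mediates between the two. The sketch below illustrates only that trade-off, with hypothetical `policy` and `value_grad` callables; it is not the authors' algorithm.

```python
import numpy as np

def hybrid_inference_action(policy, value_grad, state, steps=10, lr=0.05):
    """Amortised proposal refined by iterative optimisation.

    `policy(state)` is assumed to return an amortised action proposal and
    `value_grad(state, action)` the gradient of an action-value estimate;
    both are hypothetical stand-ins. With steps=0 this reduces to purely
    amortised control, while many steps approach purely iterative inference.
    """
    action = np.asarray(policy(state), dtype=float)         # cheap amortised guess
    for _ in range(steps):
        action = action + lr * value_grad(state, action)    # iterative refinement
    return action
```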
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.