Automating Reformulation of Essence Specifications via Graph Rewriting
- URL: http://arxiv.org/abs/2411.09576v1
- Date: Thu, 14 Nov 2024 16:35:15 GMT
- Title: Automating Reformulation of Essence Specifications via Graph Rewriting
- Authors: Ian Miguel, András Z. Salamon, Christopher Stone
- Abstract summary: This paper presents a system that employs graph rewriting to automatically reformulate an input model for improved performance.
We implement our system via rewrite rules expressed in the Graph Programs 2 language.
We show how to automatically translate the solution of the reformulated problem into a solution of the original problem for verification and presentation.
- Abstract: Formulating an effective constraint model of a parameterised problem class is crucial to the efficiency with which instances of the class can subsequently be solved. It is difficult to know beforehand which of a set of candidate models will perform best in practice. This paper presents a system that employs graph rewriting to automatically reformulate an input model for improved performance. By situating our work in the Essence abstract constraint specification language, we can use the structure in its high-level variable types to trigger rewrites directly. We implement our system via rewrite rules expressed in the Graph Programs 2 language, applied to the abstract syntax tree of an input specification. We show how to automatically translate the solution of the reformulated problem into a solution of the original problem for verification and presentation. We demonstrate the efficacy of our system with a detailed case study.
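The core idea of the abstract, applying rewrite rules to a specification's abstract syntax tree, can be sketched in a few lines. This is a minimal illustration only: the paper's system expresses its rules in the Graph Programs 2 language over Essence syntax trees, whereas the node shape, the `rewrite` driver, and the toy flattening rule below are all illustrative assumptions, not the paper's actual rules.

```python
# Minimal sketch of AST rewriting (illustrative; not the paper's GP2 rules).
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

def rewrite(node, rule):
    """Apply `rule` bottom-up wherever it matches; return the new tree."""
    node = Node(node.label, [rewrite(c, rule) for c in node.children])
    return rule(node) or node

def flatten_nested_and(node):
    """Toy reformulation rule: and(and(x, y), z) -> and(x, y, z)."""
    if node.label == "and" and any(c.label == "and" for c in node.children):
        flat = []
        for c in node.children:
            flat.extend(c.children if c.label == "and" else [c])
        return Node("and", flat)
    return None  # rule does not match; leave the node unchanged

tree = Node("and", [Node("and", [Node("a"), Node("b")]), Node("c")])
result = rewrite(tree, flatten_nested_and)
print([c.label for c in result.children])  # -> ['a', 'b', 'c']
```

A real rule set would also record enough information to map a solution of the rewritten model back to the original specification, as the abstract describes.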
Related papers
- Likelihood as a Performance Gauge for Retrieval-Augmented Generation [78.28197013467157]
We show that likelihoods serve as an effective gauge for language model performance.
We propose two methods that use question likelihood as a gauge for selecting and constructing prompts that lead to better performance.
arXiv Detail & Related papers (2024-11-12T13:14:09Z) - Automatic Feature Learning for Essence: a Case Study on Car Sequencing [1.006631010704608]
We consider the task of building machine learning models to automatically select the best combination for a problem instance.
A critical part of the learning process is to define instance features, which serve as input to the selection model.
Our contribution is automatic learning of instance features directly from the high-level representation of a problem instance using a language model.
arXiv Detail & Related papers (2024-09-23T16:06:44Z) - LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Feedback [71.95402654982095]
We propose Math-Minos, a natural language feedback-enhanced verifier.
Our experiments reveal that a small set of natural language feedback can significantly boost the performance of the verifier.
arXiv Detail & Related papers (2024-06-20T06:42:27Z) - Improving Subject-Driven Image Synthesis with Subject-Agnostic Guidance [62.15866177242207]
We show that through constructing a subject-agnostic condition, one could obtain outputs consistent with both the given subject and input text prompts.
Our approach is conceptually simple and requires only minimal code modifications, but leads to substantial quality improvements.
arXiv Detail & Related papers (2024-05-02T15:03:41Z) - Reinforcement Learning for Graph Coloring: Understanding the Power and Limits of Non-Label Invariant Representations [0.0]
We will show that a Proximal Policy Optimization model can learn to solve the graph coloring problem.
We will also show that the labeling of a graph is critical to the performance of the model by taking the matrix representation of a graph and permuting it.
arXiv Detail & Related papers (2024-01-23T03:43:34Z) - Towards Exploratory Reformulation of Constraint Models [0.44658835512168177]
We propose a system that explores the space of models through a process of reformulation from an initial model.
We plan to situate this system in a refinement-based approach, where a user writes a constraint specification.
arXiv Detail & Related papers (2023-11-20T16:04:56Z) - Answer Candidate Type Selection: Text-to-Text Language Model for Closed Book Question Answering Meets Knowledge Graphs [62.20354845651949]
We present a novel approach which works on top of the pre-trained Text-to-Text QA system to address this issue.
Our simple yet effective method performs filtering and re-ranking of generated candidates based on their types derived from Wikidata "instance_of" property.
arXiv Detail & Related papers (2023-10-10T20:49:43Z) - GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to a $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z) - Heterogeneous Line Graph Transformer for Math Word Problems [21.4761673982334]
This paper describes the design and implementation of a new machine learning model for online learning systems.
We aim at improving the intelligent level of the systems by enabling an automated math word problem solver.
arXiv Detail & Related papers (2022-08-11T05:27:05Z) - Towards Reformulating Essence Specifications for Robustness [6.497578221372429]
Essence is a rich language in which there are many equivalent ways to specify a given problem.
A user may omit the use of domain attributes or abstract types, resulting in fewer refinement rules being applicable.
This paper addresses the problem of recovering this information automatically to increase the robustness of the output constraint models.
arXiv Detail & Related papers (2021-11-01T10:51:47Z) - Structural Information Preserving for Graph-to-Text Generation [59.00642847499138]
The task of graph-to-text generation aims at producing sentences that preserve the meaning of input graphs.
We propose to tackle this problem by leveraging richer training signals that can guide our model for preserving input information.
Experiments on two benchmarks for graph-to-text generation show the effectiveness of our approach over a state-of-the-art baseline.
arXiv Detail & Related papers (2021-02-12T20:09:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.