The Integer Linear Programming Inference Cookbook
- URL: http://arxiv.org/abs/2307.00171v1
- Date: Fri, 30 Jun 2023 23:33:11 GMT
- Title: The Integer Linear Programming Inference Cookbook
- Authors: Vivek Srikumar, Dan Roth
- Abstract summary: This survey is meant to guide the reader through the process of framing a new inference problem as an instance of an integer linear program.
At the end, we will see two worked examples to illustrate the use of these recipes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the years, integer linear programs have been employed to model inference
in many natural language processing problems. This survey is meant to guide the
reader through the process of framing a new inference problem as an instance of
an integer linear program and is structured as a collection of recipes. At the
end, we will see two worked examples to illustrate the use of these recipes.
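To make the framing concrete, here is a minimal sketch (not taken from the paper) of the kind of formulation the survey describes: a toy sequence-labeling inference problem written in ILP style, with 0/1 indicator variables, a linear objective summing per-token label scores, and a linear constraint forbidding an I (inside) label from directly following an O (outside) label. The scores and labels are invented for illustration, and a brute-force search over assignments stands in for a real ILP solver.

```python
from itertools import product

# Hypothetical per-token label scores (invented for illustration).
labels = ["B", "I", "O"]
scores = [
    {"B": 0.6, "I": 0.1, "O": 0.3},  # token 0
    {"B": 0.2, "I": 0.5, "O": 0.3},  # token 1
    {"B": 0.1, "I": 0.6, "O": 0.3},  # token 2
]

def objective(assignment):
    # Linear objective: sum of scores of the chosen indicator variables,
    # i.e. sum_{i,l} score[i][l] * z[i,l] with exactly one z[i,l] = 1 per token.
    return sum(scores[i][lab] for i, lab in enumerate(assignment))

def feasible(assignment):
    # BIO constraint as a linear inequality: z[i,"I"] + z[i-1,"O"] <= 1,
    # i.e. an I label may not immediately follow an O label.
    return all(not (assignment[i] == "I" and assignment[i - 1] == "O")
               for i in range(1, len(assignment)))

# Brute force over all 0/1 assignments (a stand-in for an ILP solver).
best = max((a for a in product(labels, repeat=len(scores)) if feasible(a)),
           key=objective)
print(best)  # ('B', 'I', 'I')
```

In a real application the enumeration would be replaced by an off-the-shelf ILP solver; the point of the recipe style is that the objective and constraints above are already in the linear form such a solver expects.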
Related papers
- Human-AI Co-Creation of Worked Examples for Programming Classes (arXiv, 2024-02-26)
  We introduce an authoring system for creating Java worked examples that generates a starting version of code explanations.
  We also present a study that assesses the quality of explanations created with this approach.
- One-for-many Counterfactual Explanations by Column Generation (arXiv, 2024-02-12)
  We consider the problem of generating a set of counterfactual explanations for a group of instances.
  For the first time, we solve the problem of minimizing the number of explanations needed to explain all the instances.
  A novel column generation framework is developed to efficiently search for the explanations.
- Authoring Worked Examples for Java Programming with Human-AI Collaboration (arXiv, 2023-12-04)
  We introduce an authoring system for creating Java worked examples that generates a starting version of code explanations.
  We also present a study that assesses the quality of explanations created with this approach.
- Leveraging Causal Inference for Explainable Automatic Program Repair (arXiv, 2022-05-26)
  This paper presents an interpretable approach to program repair based on sequence-to-sequence models with causal inference.
  Our method is called CPR, short for causal program repair.
  Experiments on four programming languages show that CPR can generate causal graphs for reasonable interpretations.
- Structural Analysis of Branch-and-Cut and the Learnability of Gomory Mixed Integer Cuts (arXiv, 2022-04-15)
  The incorporation of cutting planes within the branch-and-bound algorithm, known as branch-and-cut, forms the backbone of modern integer programming solvers.
  We conduct a novel structural analysis of branch-and-cut that pins down how every step of the algorithm is affected by changes in the parameters defining the cutting planes added to the input integer program.
  Our main application of this analysis is to derive sample complexity guarantees for using machine learning to determine which cutting planes to apply during branch-and-cut.
- Polyjuice: Automated, General-purpose Counterfactual Generation (arXiv, 2021-01-01)
  We propose to disentangle counterfactual generation from its use cases, i.e., gather general-purpose counterfactuals first, and then select them for specific applications.
  We frame automated counterfactual generation as text generation, and fine-tune GPT-2 into a generator, Polyjuice, which produces fluent and diverse counterfactuals.
- When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning? (arXiv, 2020-12-11)
  We describe natural prediction problems in which every sufficiently accurate training algorithm must encode, in the prediction model, essentially all the information about a large subset of its training examples.
  Our results do not depend on the training algorithm or the class of models used for learning.
- The Extraordinary Failure of Complement Coercion Crowdsourcing (arXiv, 2020-10-12)
  Crowdsourcing has eased and scaled up the collection of linguistic annotation in recent years.
  We aim to collect annotated data for this phenomenon by reducing it to either of two known tasks: Explicit Completion and Natural Language Inference.
  In both cases, crowdsourcing resulted in low agreement scores, even though we followed the same methodologies as in previous work.
- Incomplete Utterance Rewriting as Semantic Segmentation (arXiv, 2020-09-28)
  We present a novel and extensive approach that formulates incomplete utterance rewriting as a semantic segmentation task.
  Instead of generating from scratch, this formulation introduces edit operations and shapes the problem as the prediction of a word-level edit matrix.
  Our approach is four times faster than the standard approach at inference.
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision (arXiv, 2020-04-20)
  We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
  We use pairs of minimally different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
  Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
This list is automatically generated from the titles and abstracts of the papers in this site.