Learning Backdoors for Mixed Integer Linear Programs with Contrastive Learning
- URL: http://arxiv.org/abs/2401.10467v2
- Date: Thu, 1 Aug 2024 01:07:35 GMT
- Title: Learning Backdoors for Mixed Integer Linear Programs with Contrastive Learning
- Authors: Junyang Cai, Taoan Huang, Bistra Dilkina
- Abstract summary: Many real-world problems can be efficiently modeled as Mixed Integer Linear Programs (MILPs) and solved with the Branch-and-Bound method.
Prior work has shown the existence of MILP backdoors, small sets of variables such that prioritizing branching on them when possible leads to faster running times.
Previous work learns to estimate the relative solver speed of randomly sampled backdoors through ranking and then decide whether to use the highest-ranked backdoor candidate.
In this paper, we utilize the Monte-Carlo tree search method to collect backdoors for training, rather than relying on random sampling, and adapt a contrastive learning framework to train a Graph Attention Network model to predict backdoors.
- Score: 13.106799330951842
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Many real-world problems can be efficiently modeled as Mixed Integer Linear Programs (MILPs) and solved with the Branch-and-Bound method. Prior work has shown the existence of MILP backdoors, small sets of variables such that prioritizing branching on them when possible leads to faster running times. However, finding high-quality backdoors that improve running times remains an open question. Previous work learns to estimate the relative solver speed of randomly sampled backdoors through ranking and then decide whether to use the highest-ranked backdoor candidate. In this paper, we utilize the Monte-Carlo tree search method to collect backdoors for training, rather than relying on random sampling, and adapt a contrastive learning framework to train a Graph Attention Network model to predict backdoors. Our method, evaluated on several common MILP problem domains, demonstrates performance improvements over both Gurobi and previous models.
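For intuition, here is a minimal sketch of the kind of contrastive objective the abstract describes: MCTS-collected "fast" backdoors act as positives and "slow" backdoors as negatives for an instance embedding (assumed to come from a Graph Attention Network encoder). The InfoNCE-style loss form, temperature, and dimensions are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of an InfoNCE-style contrastive loss over backdoor candidates,
# assuming per-set embeddings are already produced by some encoder
# (e.g. a Graph Attention Network over the MILP). Temperature, dimensions,
# and sampling scheme are illustrative, not the paper's exact setup.
import torch
import torch.nn.functional as F

def contrastive_backdoor_loss(anchor, positives, negatives, temperature=0.07):
    """anchor:    (d,)  embedding of the MILP instance
    positives: (P, d) embeddings of fast backdoors (e.g. collected by MCTS)
    negatives: (N, d) embeddings of slow backdoors
    Pulls the instance embedding toward fast backdoors, away from slow ones."""
    anchor = F.normalize(anchor, dim=-1)
    pos = F.normalize(positives, dim=-1)
    neg = F.normalize(negatives, dim=-1)
    pos_sim = pos @ anchor / temperature                      # (P,)
    neg_sim = neg @ anchor / temperature                      # (N,)
    # Each positive is contrasted against all negatives.
    logits = torch.cat([pos_sim.unsqueeze(1),
                        neg_sim.expand(pos_sim.size(0), -1)], dim=1)
    labels = torch.zeros(pos_sim.size(0), dtype=torch.long)   # positive is class 0
    return F.cross_entropy(logits, labels)

# Example with random embeddings: 4 fast and 16 slow backdoors, 64-dimensional.
loss = contrastive_backdoor_loss(torch.randn(64), torch.randn(4, 64), torch.randn(16, 64))
```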
Related papers
- BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning [41.66647711306716]
We build a comprehensive benchmark of backdoor learning called BackdoorBench.
We provide an integrated implementation of state-of-the-art (SOTA) backdoor learning algorithms.
We conduct comprehensive evaluations with 5 poisoning ratios, based on 4 models and 4 datasets, leading to 11,492 pairs of attack-against-defense evaluations.
arXiv Detail & Related papers (2024-07-29T09:57:03Z) - ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search [50.45155830888697]
We develop a reinforced self-training approach, called ReST-MCTS*, based on integrating process reward guidance with tree search MCTS* for collecting higher-quality reasoning traces as well as per-step value to train policy and reward models.
We first show that the tree-search policy in ReST-MCTS* achieves higher accuracy compared with prior LLM reasoning baselines such as Best-of-N and Tree-of-Thought, within the same search budget.
arXiv Detail & Related papers (2024-06-06T07:40:00Z) - Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space [17.98191594223406]
We investigate the learning mechanisms of backdoor LMs in the frequency space by Fourier analysis.
We propose Multi-Scale Low-Rank Adaptation (MuScleLoRA), which deploys multiple radial scalings in the frequency space with low-rank adaptation to the target model.
MuScleLoRA reduces the average success rate of diverse backdoor attacks to below 15% across multiple datasets.
arXiv Detail & Related papers (2024-02-19T10:34:48Z) - Monte-Carlo Tree Search for Multi-Agent Pathfinding: Preliminary Results [60.4817465598352]
We introduce an original variant of Monte-Carlo Tree Search (MCTS) tailored to multi-agent pathfinding.
Specifically, we use individual paths to assist the agents with the goal-reaching behavior.
We also use a dedicated decomposition technique to reduce the branching factor of the tree search procedure.
arXiv Detail & Related papers (2023-07-25T12:33:53Z) - Single Image Backdoor Inversion via Robust Smoothed Classifiers [76.66635991456336]
We present a new approach for backdoor inversion, which is able to recover the hidden backdoor with as few as a single image.
arXiv Detail & Related papers (2023-03-01T03:37:42Z) - Reinforcement Learning for Branch-and-Bound Optimisation using Retrospective Trajectories [72.15369769265398]
Machine learning has emerged as a promising paradigm for branching.
We propose retro branching; a simple yet effective approach to RL for branching.
We outperform the current state-of-the-art RL branching algorithm by 3-5x and come within 20% of the best IL method's performance on MILPs with 500 constraints and 1000 variables.
arXiv Detail & Related papers (2022-05-28T06:08:07Z) - Improving Passage Retrieval with Zero-Shot Question Generation [109.11542468380331]
We propose a simple and effective re-ranking method for improving passage retrieval in open question answering.
The re-ranker re-scores retrieved passages with a zero-shot question generation model, which uses a pre-trained language model to compute the probability of the input question conditioned on a retrieved passage.
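The central scoring step of such a re-ranker can be sketched as the average log-likelihood of the question tokens conditioned on the passage under a causal language model. In the sketch below, the model name ("gpt2") and the prompt template are placeholders, not the paper's exact choices.

```python
# Hedged sketch of query-likelihood re-ranking: score a passage by the average
# log-probability of the question given the passage under a causal LM.
# The model ("gpt2") and prompt template are placeholders, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def rerank_score(passage: str, question: str) -> float:
    prompt_ids = tok(f"Passage: {passage}\nQuestion:", return_tensors="pt").input_ids
    question_ids = tok(" " + question, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, question_ids], dim=1)
    with torch.no_grad():
        logits = lm(input_ids).logits                        # (1, L, vocab)
    # Logits at position i predict token i+1, so the question tokens are
    # predicted by the slice starting one position before they begin.
    q_logits = logits[0, prompt_ids.size(1) - 1 : -1]
    log_probs = torch.log_softmax(q_logits, dim=-1)
    token_lp = log_probs.gather(1, question_ids[0].unsqueeze(1)).squeeze(1)
    return token_lp.mean().item()                            # higher is better

# Re-rank by sorting retrieved passages on rerank_score(passage, question), descending.
```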
arXiv Detail & Related papers (2022-04-15T14:51:41Z) - Finding Backdoors to Integer Programs: A Monte Carlo Tree Search Framework [22.824450460839245]
A backdoor is a small subset of an instance's integer variables such that the instance can be solved to global optimality by branching only on those variables.
We propose BaMCTS, a Monte Carlo Tree Search framework for finding backdoors to MIPs.
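An illustrative sketch of UCT-style tree search over growing backdoor sets is shown below. This is not the BaMCTS algorithm itself: the reward signal (here a generic `evaluate` callback, e.g. negative solver running time), the expansion rule, and the random rollout are simplifying assumptions.

```python
# Illustrative sketch of UCT-style tree search over growing backdoor sets.
# NOT the BaMCTS algorithm: reward, expansion rule, and rollout are assumptions.
import math
import random

class Node:
    def __init__(self, backdoor, remaining, parent=None):
        self.backdoor = frozenset(backdoor)   # variables chosen so far
        self.remaining = list(remaining)      # variables still available to add
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

    def uct(self, c=1.4):
        if self.visits == 0:
            return float("inf")
        return (self.total_reward / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts_backdoor(num_vars, backdoor_size, evaluate, iterations=200):
    """evaluate(frozenset_of_vars) -> reward, e.g. negative solver running time."""
    root = Node(frozenset(), range(num_vars))
    for _ in range(iterations):
        node = root
        # Selection: descend by UCT while the current node is fully expanded.
        while node.children and not node.remaining:
            node = max(node.children, key=Node.uct)
        # Expansion: add one unused variable to the partial backdoor.
        if node.remaining and len(node.backdoor) < backdoor_size:
            var = node.remaining.pop()
            chosen = node.backdoor | {var}
            child = Node(chosen, [v for v in range(num_vars) if v not in chosen], node)
            node.children.append(child)
            node = child
        # Rollout: complete the backdoor with random variables, then evaluate it.
        rollout = set(node.backdoor)
        while len(rollout) < backdoor_size:
            rollout.add(random.randrange(num_vars))
        reward = evaluate(frozenset(rollout))
        # Backpropagation.
        while node is not None:
            node.visits += 1
            node.total_reward += reward
            node = node.parent
    # Return the most-visited path (possibly partial if the budget was small).
    node = root
    while node.children:
        node = max(node.children, key=lambda n: n.visits)
    return node.backdoor
```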
arXiv Detail & Related papers (2021-10-16T00:36:53Z) - Learning Pseudo-Backdoors for Mixed Integer Programs [48.36587539004464]
We propose a machine learning approach for solving Mixed Integer Programs (MIPs) by learning to prioritize a set of decision variables, which we call pseudo-backdoors, for branching that results in faster solution times.
Our approach takes inspiration from the concept of strong backdoors, which correspond to small sets of variables such that only branching on these variables yields an optimal integral solution and a proof of optimality.
A key advantage of pseudo-backdoors over strong backdoors is that they are much more amenable to data-driven identification or prediction (see the sketch after this entry).
arXiv Detail & Related papers (2021-06-09T13:59:53Z)
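Both the main paper above and this pseudo-backdoor line of work ultimately exploit a predicted backdoor in the same way: by raising the branching priority of its variables in the solver. A minimal sketch with gurobipy follows; the model file, backdoor variable names, and priority value are placeholders.

```python
# Minimal sketch: prioritize branching on a (predicted) backdoor in Gurobi.
# The model path, backdoor variable names, and priority value are placeholders.
import gurobipy as gp

model = gp.read("instance.mps")                      # any MILP instance
predicted_backdoor = {"x_12", "x_47", "x_301"}       # e.g. output of a learned model

for var in model.getVars():
    if var.VarName in predicted_backdoor:
        # Gurobi branches on higher-priority variables first (default is 0).
        var.BranchPriority = 10

model.optimize()
```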