Faster Discrete Convex Function Minimization with Predictions: The
M-Convex Case
- URL: http://arxiv.org/abs/2306.05865v1
- Date: Fri, 9 Jun 2023 12:58:47 GMT
- Title: Faster Discrete Convex Function Minimization with Predictions: The
M-Convex Case
- Authors: Taihei Oki, Shinsaku Sakaue
- Abstract summary: Our methods can improve time complexity bounds upon the best worst-case results by using predictions and even have potential to go beyond a lower-bound result.
Our framework is particularly effective for an important subclass called laminar convex minimization, which appears in many operations research applications.
- Score: 15.191184049312467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have seen a growing interest in accelerating optimization
algorithms with machine-learned predictions. Sakaue and Oki (NeurIPS 2022) have
developed a general framework that warm-starts the L-convex function
minimization method with predictions, revealing the idea's usefulness for
various discrete optimization problems. In this paper, we present a framework
for using predictions to accelerate M-convex function minimization, thus
complementing previous research and extending the range of discrete
optimization algorithms that can benefit from predictions. Our framework is
particularly effective for an important subclass called laminar convex
minimization, which appears in many operations research applications. Our
methods can improve time complexity bounds upon the best worst-case results by
using predictions and even have potential to go beyond a lower-bound result.
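To make the warm-starting idea concrete, below is a minimal sketch (not the authors' algorithm) for the simplest M-convex setting: separable convex resource allocation, i.e., minimizing f(x) = f_1(x_1) + ... + f_n(x_n) over nonnegative integer vectors whose entries sum to a budget B. For M-convex functions, exchange-based steepest descent reaches a global optimum, and each iteration moves a single unit, so the iteration count tracks the l1-distance from the starting point to an optimum; this is why starting from a good prediction helps. The projection heuristic and all names here are ours.
```python
import numpy as np

def project_prediction(x_hat, B):
    """Round a real-valued prediction to a nonnegative integer vector
    summing to B (a simple heuristic of ours, not the paper's)."""
    x = np.maximum(np.rint(x_hat).astype(int), 0)
    while x.sum() > B:
        x[np.argmax(x)] -= 1
    while x.sum() < B:
        x[np.argmin(x)] += 1
    return x

def steepest_descent(fs, x):
    """Exchange-based steepest descent for f(x) = sum_i fs[i](x[i]) over
    {x >= 0 integer : sum(x) fixed}, an M-convex domain. For M-convex f,
    having no improving exchange implies global optimality."""
    n = len(x)
    while True:
        best, bi, bj = 0.0, -1, -1
        for i in range(n):
            if x[i] == 0:              # keep x nonnegative
                continue
            for j in range(n):
                if i == j:
                    continue
                delta = (fs[i](x[i] - 1) - fs[i](x[i])
                         + fs[j](x[j] + 1) - fs[j](x[j]))
                if delta < best:
                    best, bi, bj = delta, i, j
        if bi < 0:                     # no improving exchange: optimal
            return x
        x[bi] -= 1                     # one unit moves per iteration, so a
        x[bj] += 1                     # good warm start means few iterations

# Usage: quadratic costs, budget 10, warm start from a prediction.
fs = [lambda t, a=a: a * t * t for a in (1.0, 2.0, 3.0)]
x0 = project_prediction(np.array([5.2, 2.9, 1.7]), B=10)
print(steepest_descent(fs, x0))        # -> [5 3 2]
```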
Related papers
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
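As an illustration of "adaptively blending logistic and exponential losses", here is a hedged sketch: a sigmoid gate on a DPO-style preference margin interpolates between the two losses. The gate direction and the temperature `tau` are assumptions of ours; the loss actually discovered by DiscoPOP may differ in both.
```python
import numpy as np

def blended_preference_loss(rho, tau=0.05):
    """Blend logistic and exponential preference losses via a sigmoid
    gate on the margin rho (e.g., beta times the difference of policy
    log-ratios for chosen vs. rejected responses in DPO-style training).
    The gating rule and tau are illustrative assumptions."""
    logistic = np.logaddexp(0.0, -rho)        # -log sigmoid(rho)
    exponential = np.exp(-rho)
    gate = 1.0 / (1.0 + np.exp(-rho / tau))   # ~0 for rho << 0, ~1 for rho >> 0
    return gate * logistic + (1.0 - gate) * exponential

print(blended_preference_loss(np.array([-2.0, 0.0, 2.0])))
```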
arXiv Detail & Related papers (2024-06-12T16:58:41Z) - Fast Pareto Optimization Using Sliding Window Selection [13.264683014487376]
We introduce a sliding-window speed-up technique for recently introduced algorithms.
We prove that our technique eliminates the population size as a crucial factor negatively impacting the runtime.
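A rough sketch of the sliding-window idea, under our own simplifying assumptions: instead of selecting a parent uniformly from the whole archive (whose size enters the runtime), selection is restricted to individuals whose objective value lies in a window that slides with the iteration counter. The window schedule and width below are illustrative, not the paper's exact rule.
```python
import random

def select_parent_sliding(pop, t, t_max, obj, lo, hi, width=1.0):
    """Sliding-window parent selection (illustrative). `pop` is the
    current Pareto archive, `obj` maps an individual to the objective
    the window slides over, and the target moves linearly from lo to hi.
    The schedule and `width` are assumptions of ours."""
    target = lo + (hi - lo) * t / t_max
    window = [x for x in pop if abs(obj(x) - target) <= width]
    # fall back to uniform selection if the window is empty
    return random.choice(window) if window else random.choice(pop)
```
Because the window holds only a bounded slice of the archive, selection no longer depends on the full population size, which is the factor the technique removes from the runtime.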
arXiv Detail & Related papers (2023-05-11T23:23:49Z) - Smoothed Analysis of Sequential Probability Assignment [16.090378928208885]
We study information-theoretically optimal minimax rates as well as a framework for algorithmic reduction involving the maximum likelihood estimator oracle.
Our approach establishes a general-purpose reduction from minimax rates for sequential probability assignment for smoothed adversaries to minimax rates for transductive learning.
arXiv Detail & Related papers (2023-03-08T19:25:57Z) - Efficient and Differentiable Conformal Prediction with General Function
Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximately valid population coverage and near-optimal efficiency within the given function class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
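For context, here is the standard split conformal baseline that such learnable generalizations build on (this sketch is the textbook procedure, not the paper's multi-parameter method):
```python
import numpy as np

def split_conformal_interval(residuals_cal, y_pred, alpha=0.1):
    """Split conformal prediction for regression: the vanilla baseline
    that the paper generalizes to multiple learnable parameters.
    residuals_cal: |y - y_hat| on a held-out calibration set.
    Returns intervals with ~(1 - alpha) marginal coverage."""
    n = len(residuals_cal)
    # finite-sample quantile index: ceil((n + 1) * (1 - alpha))
    k = int(np.ceil((n + 1) * (1 - alpha)))
    q = np.sort(residuals_cal)[min(k, n) - 1]
    return y_pred - q, y_pred + q

lo, hi = split_conformal_interval(np.abs(np.random.randn(100)),
                                  np.array([0.5, 1.5]))
print(lo, hi)
```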
arXiv Detail & Related papers (2022-02-22T18:37:23Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (having no limit cycles) and to converge within an appropriate number of gradient ascent steps.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data analysis, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z) - Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piece-wise linear function.
We propose a novel divide-and-conquer algorithm to tackle optimization problems without this restriction and predict their coefficients using the optimization loss.
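To make the setup concrete, here is a toy predict+optimize instance of ours (not from the paper): a top-k selection problem where the decision is taken under predicted coefficients and the loss is the post-hoc regret under the true ones. In this discrete toy the loss is piecewise constant in the predictions; in the LP settings the paper targets it is piecewise linear, which is the structure their method exploits.
```python
import numpy as np

def topk_decision(c, k):
    """Optimal decision for: maximize c @ x, sum(x) = k, x binary."""
    x = np.zeros_like(c)
    x[np.argsort(-c)[:k]] = 1.0
    return x

def optimization_loss(c_pred, c_true, k):
    """Post-hoc regret of deciding with predicted coefficients and
    paying under the true ones (toy example of ours)."""
    x_pred = topk_decision(c_pred, k)
    x_best = topk_decision(c_true, k)
    return c_true @ x_best - c_true @ x_pred

# Predicted ranking picks items 0 and 2; the true best pick is 1 and 2.
print(optimization_loss(np.array([3., 1., 2.]), np.array([1., 3., 2.]), k=2))
```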
arXiv Detail & Related papers (2020-12-04T00:26:56Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
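A minimal sketch of the object being optimized, with a brute-force stand-in for the paper's deterministic global solver: the posterior mean of a trained GP, written in reduced space so only the GP's inputs are decision variables (not the training data). The RBF kernel, hyperparameters, and grid search here are our assumptions.
```python
import numpy as np

def gp_posterior_mean(X_train, y_train, lengthscale=1.0, noise=1e-6):
    """Posterior mean of a zero-mean GP with an RBF kernel, trained on
    few data points (kernel choice and hyperparameters are assumptions)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return lambda X: k(X, X_train) @ alpha

# Stand-in for the paper's branch-and-bound: dense grid search over the
# low-dimensional input box of the trained GP.
X = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.8, -0.3, 0.5])
mu = gp_posterior_mean(X, y, lengthscale=0.3)
grid = np.linspace(0.0, 1.0, 1001)[:, None]
vals = mu(grid)
print(grid[np.argmin(vals)], vals.min())   # approximate global minimizer
```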
arXiv Detail & Related papers (2020-05-21T20:59:11Z)