Differentiable Greedy Submodular Maximization: Guarantees, Gradient
  Estimators, and Applications
        - URL: http://arxiv.org/abs/2005.02578v4
 - Date: Fri, 12 Jun 2020 01:07:08 GMT
 - Title: Differentiable Greedy Submodular Maximization: Guarantees, Gradient
  Estimators, and Applications
 - Authors: Shinsaku Sakaue
 - Abstract summary: We establish a theoretically guaranteed versatile framework that makes the greedy algorithm for monotone submodular function maximization differentiable.
We smooth the greedy algorithm via randomization, and prove that it almost recovers original approximation guarantees in expectation for the cases of cardinality and $\kappa$-extensible system constraints.
 - Score: 15.191184049312467
 - License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Abstract:   Motivated by, e.g., sensitivity analysis and end-to-end learning, the demand
for differentiable optimization algorithms has been significantly increasing.
In this paper, we establish a theoretically guaranteed versatile framework that
makes the greedy algorithm for monotone submodular function maximization
differentiable. We smooth the greedy algorithm via randomization, and prove
that it almost recovers original approximation guarantees in expectation for
the cases of cardinality and $\kappa$-extensible system constraints. We also
show how to efficiently compute unbiased gradient estimators of any expected
output-dependent quantities. We demonstrate the usefulness of our framework by
instantiating it for various applications.
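To make the randomized smoothing concrete, here is a minimal sketch of one natural instantiation for the cardinality-constrained case: each greedy step samples an item through a softmax over marginal gains instead of taking a hard argmax. The temperature `eps`, the toy objective, and the score-function (REINFORCE) estimator mentioned in the comments are illustrative assumptions, not necessarily the paper's exact smoothing distribution or gradient estimator.

```python
import numpy as np

def randomized_greedy(f, ground_set, k, theta, eps, rng):
    """Softmax-smoothed greedy under a cardinality constraint |S| <= k.

    At each step, instead of deterministically picking the argmax marginal
    gain, sample an item with probability proportional to
    exp(marginal gain / eps); eps -> 0 recovers the plain greedy algorithm.
    Returns the selected set and the log-probability of the sampled run.
    """
    S, logp = [], 0.0
    for _ in range(k):
        candidates = [v for v in ground_set if v not in S]
        gains = np.array([f(S + [v], theta) - f(S, theta) for v in candidates])
        probs = np.exp((gains - gains.max()) / eps)
        probs /= probs.sum()
        i = rng.choice(len(candidates), p=probs)
        S.append(candidates[i])
        logp += np.log(probs[i])
    # With an autodiff library, loss(S) * grad_theta(logp), averaged over
    # independent runs, is a standard score-function (REINFORCE) estimate
    # of the gradient of E[loss(S)] with respect to theta.
    return S, logp

# Toy monotone submodular objective (concave of a modular function),
# parameterized by hypothetical item weights theta.
def f(S, theta):
    return float(np.sqrt(sum(theta[v] for v in S))) if S else 0.0

rng = np.random.default_rng(0)
theta = rng.random(10)
S, logp = randomized_greedy(f, range(10), k=3, theta=theta, eps=0.1, rng=rng)
print("selected:", S, "log-probability:", logp)
```

Because every remaining item is picked with positive probability, expected output-dependent quantities become smooth functions of `theta`, which is what enables the sensitivity-analysis and end-to-end-learning use cases mentioned above.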
 
       
      
        Related papers
        - An Enhanced Zeroth-Order Stochastic Frank-Wolfe Framework for   Constrained Finite-Sum Optimization [15.652261277429968]
We propose an enhanced zeroth-order stochastic Frank-Wolfe framework to address constrained finite-sum optimization problems.
Our method introduces a novel double variance reduction framework that effectively reduces the approximation error induced by zeroth-order oracles.
arXiv  Detail & Related papers  (2025-01-13T10:53:19Z) - An Accelerated Block Proximal Framework with Adaptive Momentum for
  Nonconvex and Nonsmooth Optimization [2.323238724742687]
We propose an accelerated block proximal linear framework with adaptive momentum (ABPL$+$) for nonconvex and nonsmooth optimization.
We analyze the potential causes of the extrapolation step failing in some algorithms, and resolve this issue by enhancing the comparison process.
We extend our algorithm to any scenario involving updating the gradient step and the linear extrapolation step.
arXiv  Detail & Related papers  (2023-08-23T13:32:31Z) - One-step differentiation of iterative algorithms [7.9495796547433395]
We study one-step differentiation, also known as Jacobian-free backpropagation, a method as easy as automatic differentiation.
We provide a complete theoretical approximation analysis with specific examples along with its consequences in bilevel optimization; a toy numeric sketch of the one-step idea appears after this list.
arXiv  Detail & Related papers  (2023-05-23T07:32:37Z) - Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time
  Guarantees [56.848265937921354]
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy.
Many algorithms for IRL have an inherently nested structure.
We develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy.
arXiv  Detail & Related papers  (2022-10-04T17:13:45Z) - A Unified Convergence Theorem for Stochastic Optimization Methods [4.94128206910124]
We provide a fundamental unified convergence theorem used for deriving convergence results for a series of stochastic optimization methods.
As a direct application, we recover almost sure convergence results under general settings.
arXiv  Detail & Related papers  (2022-06-08T14:01:42Z) - Robust Subset Selection by Greedy and Evolutionary Pareto Optimization [23.0838604893412]
Subset selection aims to select a subset from a ground set to maximize some objective function.
We show that a greedy algorithm can obtain an approximation ratio of $1-e^{-\beta\gamma}$, where $\beta$ and $\gamma$ are the correlation and submodularity ratios of the objective functions.
arXiv  Detail & Related papers  (2022-05-03T11:00:54Z) - Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv  Detail & Related papers  (2021-11-29T15:10:09Z) - A theoretical and empirical study of new adaptive algorithms with
  additional momentum steps and shifted updates for stochastic non-convex
  optimization [0.0]
It is thought that adaptive optimization algorithms represent the key pillar behind the Deep Learning field.
In this paper we introduce adaptive momentum techniques for different non-smooth objective problems.
arXiv  Detail & Related papers  (2021-10-16T09:47:57Z) - SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients [99.13839450032408]
It is desired to design a universal framework for adaptive algorithms to solve general problems.
In particular, our novel framework provides convergence guarantees for adaptive methods in the non-convex setting.
arXiv  Detail & Related papers  (2021-06-15T15:16:28Z) - High Probability Complexity Bounds for Non-Smooth Stochastic   Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv  Detail & Related papers  (2021-06-10T17:54:21Z) - Efficient and Modular Implicit Differentiation [68.74748174316989]
We propose a unified, efficient and modular approach for implicit differentiation of optimization problems.
We show that seemingly simple principles allow one to recover many recently proposed implicit differentiation methods and create new ones easily.
arXiv  Detail & Related papers  (2021-05-31T17:45:58Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
  Optimization Framework [100.36569795440889]
This work focuses on zeroth-order (ZO) optimization, which does not require first-order information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
arXiv  Detail & Related papers  (2020-12-21T17:29:58Z) 
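For the one-step differentiation entry above, the following toy numeric sketch (an illustration under assumed names, not code from that paper) compares the one-step, Jacobian-free gradient of a contractive fixed-point iteration $x \leftarrow T(x, \theta)$ with the exact implicit-function-theorem gradient; in the scalar case they differ by the factor $(1 - \partial T/\partial x)^{-1}$, so the approximation is accurate when the iteration is strongly contractive.

```python
import math

A = 0.1  # contraction factor of the toy map (assumed)

def T(x, theta):
    """Toy contractive fixed-point map; its fixed point is tanh(theta)/(1-A)."""
    return A * x + math.tanh(theta)

def dT_dtheta(x, theta):
    """Partial derivative of T in theta, with x held fixed."""
    return 1.0 - math.tanh(theta) ** 2

def dT_dx(x, theta):
    """Partial derivative of T in x."""
    return A

theta, x = 0.3, 0.0
for _ in range(200):                 # run the iteration to (near) convergence
    x = T(x, theta)

# One-step (Jacobian-free) gradient: differentiate only the last application
# of T, treating the incoming x as a constant.
one_step = dT_dtheta(x, theta)
# Exact gradient of the fixed point via the implicit function theorem.
exact = dT_dtheta(x, theta) / (1.0 - dT_dx(x, theta))
print(f"one-step: {one_step:.4f}   exact: {exact:.4f}")
```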
        This list is automatically generated from the titles and abstracts of the papers on this site.
       
     