Safe Learning under Uncertain Objectives and Constraints
- URL: http://arxiv.org/abs/2006.13326v1
- Date: Tue, 23 Jun 2020 20:51:00 GMT
- Title: Safe Learning under Uncertain Objectives and Constraints
- Authors: Mohammad Fereydounian, Zebang Shen, Aryan Mokhtari, Amin Karbasi,
Hamed Hassani
- Abstract summary: We consider non-convex optimization problems under unknown yet safety-critical constraints.
Such problems naturally arise in a variety of domains including robotics, manufacturing, and medical procedures.
A crucial component of our analysis is to introduce and apply a technique called geometric shrinkage in the context of safe optimization.
- Score: 66.05180398174286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider non-convex optimization problems under
\textit{unknown} yet safety-critical constraints. Such problems naturally arise
in a variety of domains including robotics, manufacturing, and medical
procedures, where it is infeasible to know or identify all the constraints.
Therefore, the parameter space should be explored in a conservative way to
ensure that none of the constraints are violated during the optimization
process once we start from a safe initialization point. To this end, we develop
an algorithm called Reliable Frank-Wolfe (Reliable-FW). Given a general
non-convex function and an unknown polytope constraint, Reliable-FW
simultaneously learns the landscape of the objective function and the boundary
of the safety polytope. More precisely, by assuming that Reliable-FW has access
to a (stochastic) gradient oracle of the objective function and a noisy
feasibility oracle of the safety polytope, it finds an $\epsilon$-approximate
first-order stationary point with the optimal ${\mathcal{O}}({1}/{\epsilon^2})$
gradient oracle complexity (resp. $\tilde{\mathcal{O}}({1}/{\epsilon^3})$ (also
optimal) in the stochastic gradient setting), while ensuring the safety of all
the iterates. Rather surprisingly, Reliable-FW only makes
$\tilde{\mathcal{O}}(({d^2}/{\epsilon^2})\log 1/\delta)$ queries to the noisy
feasibility oracle (resp. $\tilde{\mathcal{O}}(({d^2}/{\epsilon^4})\log
1/\delta)$ in the stochastic gradient setting) where $d$ is the dimension and
$\delta$ is the reliability parameter, tightening the existing bounds even for
safe minimization of convex functions. We further specialize our results to the
case that the objective function is convex. A crucial component of our analysis
is to introduce and apply a technique called geometric shrinkage in the context
of safe optimization.
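- Illustrative sketch (not from the paper): The abstract names the ingredients of Reliable-FW, namely a (stochastic) gradient oracle, a noisy feasibility oracle for the safety polytope, a safe starting point, and a geometric shrinkage step, but gives no pseudocode. The Python sketch below is only a rough illustration under those assumptions, not the authors' algorithm: the linear minimization over the unknown polytope is replaced here by a bisection along the negative gradient, certified by majority-voted oracle calls, and all names (`reliable_fw_sketch`, `shrink`, `probes`) are hypothetical.

```python
import numpy as np

def reliable_fw_sketch(grad, feasible, x0, steps=200, shrink=0.9, probes=40):
    """Hedged Frank-Wolfe-style sketch under a noisy feasibility oracle.

    grad(x)     -- (stochastic) gradient oracle for the objective
    feasible(x) -- noisy boolean membership oracle for the safety polytope
    x0          -- a strictly feasible (safe) starting point
    shrink      -- geometric-shrinkage factor: only a fraction of the
                   certified step radius is used, leaving a safety margin
    """
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        direction = -g / (np.linalg.norm(g) + 1e-12)
        # Bisect on the step radius along -g, certifying each candidate
        # point by a majority vote over repeated noisy oracle calls.
        lo, hi = 0.0, 1.0
        for _ in range(probes):
            mid = 0.5 * (lo + hi)
            votes = sum(feasible(x + mid * direction) for _ in range(5))
            if votes >= 3:
                lo = mid
            else:
                hi = mid
        v = x + shrink * lo * direction     # shrunken certified target
        gamma = 2.0 / (t + 2.0)             # classic Frank-Wolfe step size
        x = (1.0 - gamma) * x + gamma * v   # convex combination stays safe
    return x
```

A toy check could pair a quadratic objective with a halfspace membership test whose answer is flipped with small probability; the convex-combination update keeps every iterate between previously certified safe points.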