Abstract: In view of training increasingly complex learning architectures, we establish
a nonsmooth implicit function theorem with an operational calculus. Our result
applies to most practical problems (i.e., definable problems) provided that a
nonsmooth form of the classical invertibility condition is fulfilled. This
approach allows for formal subdifferentiation: for instance, replacing
derivatives by Clarke Jacobians in the usual differentiation formulas is fully
justified for a wide class of nonsmooth problems. Moreover, this calculus is
entirely compatible with algorithmic differentiation (e.g., backpropagation).
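As an illustration of this formal substitution (the notation F, x, y, A, B below is introduced here for exposition and does not appear in the abstract): for an equation F(x, y(x)) = 0, the smooth implicit differentiation rule \operatorname{Jac} y(x) = -\bigl(\partial_y F\bigr)^{-1} \partial_x F is formally replaced by the set of matrices
\[
  \Bigl\{\, -B^{-1} A \;:\; [A \;\; B] \in \operatorname{Jac}^c F\bigl(x, y(x)\bigr) \,\Bigr\},
\]
where \operatorname{Jac}^c denotes the Clarke Jacobian and every such B is required to be invertible (the nonsmooth form of the classical invertibility condition).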
We provide several applications such as training deep equilibrium networks,
training neural nets with conic optimization layers, and hyperparameter tuning
for nonsmooth Lasso-type models. To show the sharpness of our assumptions, we
present numerical experiments showcasing the extremely pathological gradient
dynamics one can encounter when applying implicit algorithmic differentiation
without any such hypothesis.
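As a concrete complement to the deep-equilibrium-network application mentioned above, the following is a minimal numerical sketch (ours, not code from the paper) of the nonsmooth implicit differentiation formula for a toy fixed-point layer y = relu(W y + U x); the dimensions, random data, and the particular selection of an element of the Clarke Jacobian of relu are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3
W = 0.1 * rng.standard_normal((n, n))   # small norm so the fixed-point map is a contraction
U = rng.standard_normal((n, m))
x = rng.standard_normal(m)

def solve_fixed_point(x, iters=500):
    # Solve y = relu(W y + U x) by plain fixed-point iteration.
    y = np.zeros(n)
    for _ in range(iters):
        y = np.maximum(W @ y + U @ x, 0.0)
    return y

y = solve_fixed_point(x)

# One element of the Clarke Jacobian of relu at the fixed point
# (the same selection an autodiff tool typically makes for relu).
d = (W @ y + U @ x > 0).astype(float)
D = np.diag(d)

# Implicit differentiation with derivatives replaced by a Clarke Jacobian element:
# Jac y(x) = (I - D W)^{-1} D U, valid when I - D W is invertible.
J = np.linalg.solve(np.eye(n) - D @ W, D @ U)

# Informal finite-difference sanity check of the first column.
eps = 1e-6
x_pert = x.copy()
x_pert[0] += eps
print(np.allclose(J[:, 0], (solve_fixed_point(x_pert) - y) / eps, atol=1e-4))

Here the invertibility of I - D W holds because the spectral norm of W is kept small; the experiments mentioned above concern what can happen when no such condition is imposed.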