Gradient clipping is a popular modification of standard (stochastic) gradient descent that, at every iteration, caps the norm of the gradient at a threshold $c > 0$.
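Concretely, one common formalization of the clipped update is the following sketch; the iterate $x_t$, gradient $g_t$, and step size $\eta > 0$ are our notation for illustration, with only the threshold $c$ coming from the text:
\[
x_{t+1} = x_t - \eta \, \min\Bigl\{1, \frac{c}{\lVert g_t \rVert}\Bigr\} g_t ,
% x_t, g_t, and eta are assumed notation; only c appears in the original text
\]
so the step direction is preserved while its length never exceeds $\eta c$.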
It is widely used, for example, for stabilizing the training of deep learning models (Goodfellow et al., 2016), or for enforcing