A recent line of work has established intriguing connections between the
generalization/compression properties of a deep neural network (DNN) model and
the so-called layer weights' stable ranks. Intuitively, the latter are
indicators of the effective number of parameters in the net. In