We study the phenomenon that some modules of deep neural networks (DNNs) are more \emph{critical} than others: rewinding their parameter values back to initialization, while keeping the other modules fixed at their trained values, results in a large drop in the network's performance.
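The rewinding probe described above can be illustrated with a minimal sketch, assuming a PyTorch model and that a snapshot of the parameters at initialization was saved before training; the module names and the toy network below are illustrative assumptions, not the authors' code.

\begin{verbatim}
# Minimal sketch (assumption: PyTorch; init snapshot saved before training).
import copy
import torch

def rewind_module(trained_model, init_state_dict, module_prefix):
    """Copy the trained model, then reset the parameters of one module
    (selected by name prefix) to their values at initialization."""
    probe = copy.deepcopy(trained_model)
    probe_state = probe.state_dict()
    for name, init_value in init_state_dict.items():
        if name.startswith(module_prefix):
            probe_state[name] = init_value.clone()
    probe.load_state_dict(probe_state)
    return probe

# Toy example: a two-layer network.
torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2)
)
init_state = copy.deepcopy(model.state_dict())  # snapshot at initialization
# ... train `model` here ...
# Rewind only the first linear layer (parameter names starting with "0.")
# and compare held-out performance of `model` against
# `rewind_module(model, init_state, "0.")`; a large drop in performance
# marks that module as critical.
\end{verbatim}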