Overparameterized neural networks (NNs) are observed to generalize well even
when trained to perfectly fit noisy data. This phenomenon motivated a large
body of work on "benign overfitting", where interpolating predictors achieve
near-optimal performance. Recently, it was conjectured and empirically observed
that the behavior of NNs is often better described as "tempered overfitting",
where the performance is non-optimal yet also non-trivial, and degrades as a
function of the noise level.