TL;DR: ReLU shallow neural networks can uniformly approximate functions from Hölder spaces at rates close to the optimal one in high dimensions.
Abstract
Neural networks activated by the rectified linear unit (ReLU) play a central role in the recent development of deep learning. The topic of approximating functions from Hölder spaces by these networks is crucial.
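For concreteness, a minimal sketch of the two objects the abstract refers to, written in standard notation rather than the paper's own (the symbols $m$, $c_k$, $w_k$, $b_k$, and the normalization of the Hölder seminorm are assumptions here): a shallow (one-hidden-layer) ReLU network with $m$ hidden neurons takes the form
\[
  f_m(x) \;=\; \sum_{k=1}^{m} c_k\, \sigma\!\left(w_k \cdot x + b_k\right),
  \qquad \sigma(t) \;=\; \max\{0, t\},
\]
and a function $f$ lies in the Hölder space of order $\alpha \in (0,1]$ when
\[
  |f(x) - f(y)| \;\le\; |f|_{C^{\alpha}}\, \|x - y\|^{\alpha}
  \quad \text{for all } x, y,
\]
so the approximation question is how fast $\inf_{f_m} \|f - f_m\|_{\infty}$ can decay as $m$ grows.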