Many large-scale machine learning (ML) applications need to perform
decentralized learning over datasets generated at different devices and
locations. Such datasets pose a significant challenge to decentralized
learning because they are typically non-IID and cannot easily be moved to a
central site.
Federated Learning has been proposed as an alternative to logging data and training in a data center: mobile devices compute updates to a shared model locally on their own data, and a server aggregates these updates, preserving user data locality and improving the user experience. The approach has been shown to be robust to non-IID data distributions and to reduce the number of required communication rounds by 10-100x compared to synchronized stochastic gradient descent.
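The aggregation scheme described above can be sketched as federated averaging: each client runs a few local gradient steps from the current global model, and the server averages the resulting models weighted by local dataset size. The following is a minimal illustrative sketch, not the reference implementation; the linear least-squares objective, the simulated non-IID client split, and all hyperparameters (learning rate, local epochs, rounds) are assumptions chosen for clarity.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    # A few epochs of gradient descent on one client's data
    # (least-squares objective), starting from the global model w.
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(clients, rounds=20, lr=0.1):
    # clients: list of (X, y) local datasets, possibly non-IID.
    dim = clients[0][0].shape[1]
    w = np.zeros(dim)
    total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        # Each client starts from the current global model; the server
        # averages the returned models weighted by local dataset size.
        updates = [local_update(w, X, y, lr) for X, y in clients]
        w = sum(len(y) / total * u for (_, y), u in zip(clients, updates))
    return w

# Simulated non-IID split: each client draws inputs around a different mean,
# so no single client's data is representative of the whole population.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for shift in (-2.0, 0.0, 2.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ true_w + rng.normal(0.0, 0.01, size=50)
    clients.append((X, y))

w = federated_averaging(clients)
```

Only the averaged model ever leaves a device in this scheme; the raw `(X, y)` pairs stay local, which is the property that makes the approach an alternative to centralized logging.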