In this note we give a simple proof for the convergence of stochastic gradient descent (SGD) methods on $\mu$-strongly convex functions under a (milder than standard) $L$-smoothness assumption. We show that SGD converges after $T$ iterations as $O\left( L \|x_0-x^\star\|^2 \exp \bigl[-\frac{\mu}{4L}T \bigr] + \frac{\sigma^2}{\mu T} \right)$, where $\sigma^2$ measures the variance of the stochastic gradients at the optimum $x^\star$.
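
As an illustrative corollary (not part of the note's argument, just a direct reading of the displayed rate), bounding each of the two terms by $\epsilon/2$ gives the number of iterations needed to drive the error below $\epsilon$:
\[
T \;=\; O\!\left( \frac{L}{\mu}\,\log\frac{L\|x_0-x^\star\|^2}{\epsilon} \;+\; \frac{\sigma^2}{\mu\,\epsilon} \right),
\]
that is, a linear-rate phase governed by the condition number $L/\mu$ followed by a noise-dominated $\sigma^2/(\mu\epsilon)$ phase, which recovers the deterministic gradient-descent complexity when $\sigma^2 = 0$.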