We reconsider the stochastic (sub)gradient approach to the unconstrained primal L1-SVM optimization. We observe that if the learning rate is inversely proportional to the number of steps, i.e., the number of times any training pattern is presented to the algorithm, the update rule may
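
To make the setting concrete, the following is a minimal sketch of the plain stochastic subgradient update on the unconstrained primal L1-SVM objective, with a learning rate inversely proportional to the step count as described above; the function and parameter names (sgd_primal_l1_svm, lam, eta0) are illustrative and not taken from the paper.

```python
import numpy as np

def sgd_primal_l1_svm(X, y, lam=0.1, epochs=10, eta0=1.0, seed=None):
    """Stochastic subgradient descent on the unconstrained primal
    L1-SVM objective  (lam/2)*||w||^2 + mean_i max(0, 1 - y_i <w, x_i>),
    with learning rate eta_t = eta0 / t, where t counts every
    presentation of a training pattern to the algorithm.
    (A sketch under assumed conventions, not the paper's exact method.)
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):  # cycle through a permuted training set
            t += 1
            eta = eta0 / t  # learning rate inversely proportional to the number of steps
            margin = y[i] * (w @ X[i])
            if margin < 1:
                # subgradient of the regularized hinge loss at a margin violation
                w += eta * (y[i] * X[i] - lam * w)
            else:
                # only the regularization term contributes a (sub)gradient
                w -= eta * lam * w
    return w
```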