We study generalized Bayesian inference under misspecification, i.e., when the
model is 'wrong but useful'. Generalized Bayes tempers the likelihood with a
learning rate $\eta$. For generalized linear models with a large number of
covariates, we propose LR-GLM, a method that combines a low-rank approximation
with Bayesian inference to improve computational efficiency; experimental
results validate the effectiveness of LR-GLM on large-scale datasets.
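For concreteness, the $\eta$-generalized posterior referred to above takes the standard tempered form $\pi_\eta(\theta \mid x) \propto \pi(\theta)\, p(x \mid \theta)^{\eta}$, with $\eta = 1$ recovering ordinary Bayes. The following is a minimal sketch of this idea in a conjugate Normal location model, where the tempered posterior is available in closed form; the function name and the specific model are illustrative assumptions, not the paper's method.

```python
import numpy as np

def generalized_posterior_normal(x, sigma2, mu0, tau02, eta=1.0):
    """Tempered (generalized-Bayes) posterior for a Normal location model.

    Prior: theta ~ N(mu0, tau02).  Likelihood: x_i ~ N(theta, sigma2),
    raised to the power eta, so the data precision is scaled by eta.
    eta = 1 recovers the standard Bayesian posterior.
    """
    n = len(x)
    prec = 1.0 / tau02 + eta * n / sigma2                    # posterior precision
    mean = (mu0 / tau02 + eta * np.sum(x) / sigma2) / prec   # posterior mean
    return mean, 1.0 / prec                                  # mean, variance

# Shrinking eta pulls the posterior back toward the prior, the usual
# safeguard when the model is misspecified.
x = np.array([1.0, 2.0, 3.0])
m1, v1 = generalized_posterior_normal(x, sigma2=1.0, mu0=0.0, tau02=1.0, eta=1.0)
m2, v2 = generalized_posterior_normal(x, sigma2=1.0, mu0=0.0, tau02=1.0, eta=0.1)
```

With $\eta = 1$ the posterior mean sits close to the sample mean; with $\eta = 0.1$ it is pulled strongly toward the prior mean of zero and the posterior variance widens.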