This paper studies how to improve the performance of Low-Rank Adaptation (LoRA) as guided by our theoretical analysis. Our first set of theoretical results shows that, under random initialization and for linear models, \textit{i)} LoRA will align with a certain singular subspace of the one-step gradient of full fine-tuning.
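To make the alignment claim concrete, the following is a minimal formalization under assumptions of our own choosing: a linear model with weight matrix $W$, the standard LoRA parameterization $W = W_0 + BA$ with rank $r$, a loss $\mathcal{L}$, and notation $G_1$, $U_r$, $V_r$ introduced here purely for illustration. Let the one-step full fine-tuning gradient at initialization have the singular value decomposition
\begin{equation*}
    G_1 = \nabla_W \mathcal{L}(W_0) = U \Sigma V^\top,
\end{equation*}
and let $U_r$ and $V_r$ collect the top-$r$ left and right singular vectors of $G_1$. The alignment statement then reads: starting from random initialization, gradient descent drives the column space of $B$ toward $\operatorname{span}(U_r)$ and the row space of $A$ toward $\operatorname{span}(V_r)^\top$. One natural design consequence (a sketch, not a claim made by the text above) would be a spectral initialization $B_0 \propto U_r \Sigma_r^{1/2}$ and $A_0 \propto \Sigma_r^{1/2} V_r^\top$, which places the adapters in the aligned subspace from the very first step.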