Oct 2019
Decomposable-Net: Scalable Low-Rank Compression for Neural Networks
Scalable Deep Neural Networks via Low-Rank Matrix Factorization
Atsushi Yaguchi, Taiji Suzuki, Shuhei Nitta, Yukinobu Sakata, Akiyuki Tanizawa
TL;DR
This paper introduces Decomposable-Net, a deep neural network compression method that applies singular value decomposition and adjusts matrix ranks, allowing the model size to be changed flexibly without fine-tuning while improving performance across a range of model sizes.
Abstract
Compressing deep neural networks (DNNs) is important for real-world applications operating on resource-constrained devices. However, it is difficult to change the model size once the training is completed, which …
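To make the core idea concrete, below is a minimal NumPy sketch of SVD-based low-rank truncation of a single trained weight matrix, where the kept rank is chosen after training to trade accuracy for size. This illustrates the general technique only, not the paper's Decomposable-Net implementation; the matrix shape, the ranks, and the low_rank_factors helper are hypothetical.

import numpy as np

def low_rank_factors(W, rank):
    """Factor W (m x n) into thin matrices A (m x rank) and B (rank x n)
    via truncated SVD, so W ~= A @ B. Parameter count drops from
    m*n to rank*(m + n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]  # absorb singular values into A
    B = Vt[:rank, :]
    return A, B

# One trained layer, compressed at two different ranks from the same
# decomposition -- switching sizes requires no re-training.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))  # stand-in for a trained weight matrix
for r in (16, 64):
    A, B = low_rank_factors(W, r)
    err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
    params = A.size + B.size
    print(f"rank={r:3d}  params={params:6d} (orig {W.size})  rel. error={err:.3f}")

In a DNN, each dense (or reshaped convolutional) layer W can be replaced by the pair of smaller layers B then A, and the rank acts as a per-layer knob on the size/accuracy trade-off.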