Exploiting sparsity in deep neural networks (DNNs) is a promising way to meet the growing computational demands of modern DNNs. In practice, however, sparse DNN acceleration still faces a key challenge. To minimize the overhead of sparse acceleration, hardware designers have proposed