Oct, 2022
Drastically Reducing the Number of Trainable Parameters in Deep CNNs by Inter-layer Kernel-sharing
Alireza Azadbakht, Saeed Reza Kheradpisheh, Ismail Khalfaoui-Hassani, Timothée Masquelier
TL;DR
This work proposes reducing the number of trainable parameters and the memory footprint of deep convolutional neural networks by sharing kernels between convolutional layers. This both alleviates the memory constraints of edge computing and effectively prevents overfitting. Experiments show that the method drastically reduces model size while maintaining accuracy.
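To make the idea concrete, here is a minimal PyTorch sketch of inter-layer kernel sharing, under the assumption that several same-shaped convolutional layers reuse one kernel tensor; the module and parameter names are hypothetical, not taken from the paper's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedKernelCNN(nn.Module):
    """Stack of `depth` conv layers that all reuse one kernel tensor,
    so the trainable conv parameters equal those of a single layer."""

    def __init__(self, channels: int = 64, kernel_size: int = 3, depth: int = 4):
        super().__init__()
        # One kernel, shared by every layer in the stack.
        self.shared_kernel = nn.Parameter(
            torch.empty(channels, channels, kernel_size, kernel_size)
        )
        nn.init.kaiming_normal_(self.shared_kernel)
        self.depth = depth

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Same weights at every layer; only the activations differ.
        # Assumes the input already has `channels` channels.
        for _ in range(self.depth):
            x = F.relu(F.conv2d(x, self.shared_kernel, padding=1))
        return x


model = SharedKernelCNN()
out = model(torch.randn(1, 64, 32, 32))  # 4 layers, 1 layer's worth of kernels
```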
Abstract
Deep convolutional neural networks (DCNNs) have become the state-of-the-art (SOTA) approach for many computer vision tasks: image classification, object detection, semantic segmentation, etc. However, most SOTA networks are too large for edge computing.