TY  - JOUR
AU  - Wang, Oliver
AU  - Dovrolis, Constantine
AU  - Lee, Jaeho
PY  - 2022/02/28
Y2  - 2024/03/29
TI  - LSOP: Layer-Scaled One-shot Pruning: A Simple and Effective Deep Pruning Framework for Neural Networks
JF  - Journal of Student Research
JA  - J Stud Res
VL  - 11
IS  - 1
DO  - 10.47611/jsrhs.v11i1.2587
UR  - https://www.jsr.org/hs/index.php/path/article/view/2587
AB  - Neural network pruning is a technique that removes unnecessary weight parameters from a network to decrease its memory and computational requirements. Many pruning techniques have been proposed that shrink networks by over 90% in size while minimizing accuracy loss. This paper aims to establish a framework that generalizes the mechanism shared by various pruning techniques and can guide the design of better deep pruning methods in the future. Using basic concepts and findings from data matrix approximation, the framework explains the success of state-of-the-art methods as well as a more general pruning design (Layer-Scaled One-shot Pruning, LSOP) proposed in this work. After pruning with different algorithms and measuring their accuracies, the researcher also found that methods aligned with the proposed framework were more accurate on sparser networks (density < 10%) than methods that were not. This suggests that future research into neural network pruning can focus on the proposed framework, which has the potential to accelerate the development of pruning technology and the adoption of more efficient neural networks. The LSOP framework's ability to explain strong pruning performance implies that polynomial decay and low-rank matrix approximation techniques from data science can support neural network pruning.
ER  - 