TY - GEN
T1 - Convolutional Neural Networks with Layer Reuse
AU - Köpüklü, Okan
AU - Babaee, Maryam
AU - Hörmann, Stefan
AU - Rigoll, Gerhard
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/9
Y1 - 2019/9
N2 - A convolutional layer in a Convolutional Neural Network (CNN) consists of many filters which apply the convolution operation to the input, capture specific patterns and pass the result to the next layer. If the same patterns also occur at the deeper layers of the network, why should the same convolutional filters not be used in those layers as well? In this paper, we propose a CNN architecture, Layer Reuse Network (LruNet), where the convolutional layers are used repeatedly without the need to introduce new layers to obtain better performance. This approach offers several advantages: (i) a considerable number of parameters is saved since layers are reused instead of new layers being introduced, (ii) the Memory Access Cost (MAC) can be reduced since reused layer parameters need to be fetched only once, (iii) the number of nonlinearities increases with layer reuse, and (iv) reused layers receive gradient updates from multiple parts of the network. The proposed approach is evaluated on the CIFAR-10, CIFAR-100 and Fashion-MNIST datasets for the image classification task, and layer reuse improves performance by 5.14%, 5.85% and 2.29%, respectively. The source code and pretrained models are publicly available.
AB - A convolutional layer in a Convolutional Neural Network (CNN) consists of many filters which apply the convolution operation to the input, capture specific patterns and pass the result to the next layer. If the same patterns also occur at the deeper layers of the network, why should the same convolutional filters not be used in those layers as well? In this paper, we propose a CNN architecture, Layer Reuse Network (LruNet), where the convolutional layers are used repeatedly without the need to introduce new layers to obtain better performance. This approach offers several advantages: (i) a considerable number of parameters is saved since layers are reused instead of new layers being introduced, (ii) the Memory Access Cost (MAC) can be reduced since reused layer parameters need to be fetched only once, (iii) the number of nonlinearities increases with layer reuse, and (iv) reused layers receive gradient updates from multiple parts of the network. The proposed approach is evaluated on the CIFAR-10, CIFAR-100 and Fashion-MNIST datasets for the image classification task, and layer reuse improves performance by 5.14%, 5.85% and 2.29%, respectively. The source code and pretrained models are publicly available.
KW - convolutional neural networks
KW - inference routing
KW - layer reuse
UR - http://www.scopus.com/inward/record.url?scp=85076800730&partnerID=8YFLogxK
U2 - 10.1109/ICIP.2019.8802998
DO - 10.1109/ICIP.2019.8802998
M3 - Conference contribution
AN - SCOPUS:85076800730
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 345
EP - 349
BT - 2019 IEEE International Conference on Image Processing, ICIP 2019 - Proceedings
PB - IEEE Computer Society
T2 - 26th IEEE International Conference on Image Processing, ICIP 2019
Y2 - 22 September 2019 through 25 September 2019
ER -
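
The abstract above describes layer reuse only at a high level. The following is a minimal sketch, assuming a PyTorch-style module with illustrative channel counts, reuse counts and a conv-BN-ReLU block design (none of these details are taken from the paper), of how a single convolutional block might be applied repeatedly instead of stacking new layers.

# Minimal sketch (not the authors' code): one convolutional block whose
# parameters are reused several times in the forward pass, in the spirit
# of the layer-reuse idea described in the abstract. Channel count,
# number of reuses and the block design are illustrative assumptions.
import torch
import torch.nn as nn


class LayerReuseBlock(nn.Module):
    def __init__(self, channels: int = 32, num_reuses: int = 4):
        super().__init__()
        # A single conv layer whose weights are shared across all passes.
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)
        self.num_reuses = num_reuses

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The same parameters are applied repeatedly, so each pass adds a
        # nonlinearity and a gradient contribution without adding weights.
        for _ in range(self.num_reuses):
            x = self.act(self.bn(self.conv(x)))
        return x


if __name__ == "__main__":
    block = LayerReuseBlock(channels=32, num_reuses=4)
    out = block(torch.randn(1, 32, 32, 32))
    print(out.shape)  # torch.Size([1, 32, 32, 32])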