Convolutional Neural Networks with analytically determined Filters

Matthias Kissel, Klaus Diepold

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

In this paper, we propose a new training algorithm for Convolutional Neural Networks (CNNs) based on well-known training methods for neural networks with random weights. Our algorithm analytically determines the filters of the convolutional layers by solving a least squares problem using the Moore-Penrose generalized inverse. The resulting algorithm does not suffer from convergence issues, and the training time is drastically reduced compared to traditional CNN training using gradient descent. We validate our algorithm on several standard datasets (MNIST, FashionMNIST and CIFAR10) and show that CNNs trained with our method outperform previous approaches with random or unsupervisedly learned filters in terms of test prediction accuracy. Moreover, our approach is up to 25 times faster than training CNNs with an equivalent architecture using a gradient-descent based algorithm.
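
The core step the abstract refers to is a closed-form least-squares solve via the Moore-Penrose generalized inverse. The sketch below illustrates that step in isolation under stated assumptions; it is not the authors' full filter-learning algorithm, and the matrix names (H for a design matrix of flattened patches or activations, T for regression targets) and the small ridge term are illustrative choices.

```python
# Minimal sketch (not the paper's code): the Moore-Penrose least-squares
# step, W* = argmin_W ||H W - T||_F^2, solved in closed form. H, T and
# the ridge term are illustrative assumptions, not from the paper.
import numpy as np

def pinv_least_squares(H: np.ndarray, T: np.ndarray, ridge: float = 1e-6) -> np.ndarray:
    """Closed-form least-squares weights, W = H^+ T.

    H : (n_samples, n_features) design matrix, e.g. flattened image patches.
    T : (n_samples, n_outputs) regression targets.
    A small ridge term keeps H^T H well conditioned.
    """
    # Regularized normal equations: (H^T H + ridge * I) W = H^T T.
    gram = H.T @ H + ridge * np.eye(H.shape[1])
    return np.linalg.solve(gram, H.T @ T)

# Toy usage: fit a flattened 5x5 "filter" mapping random patches to targets.
rng = np.random.default_rng(0)
patches = rng.standard_normal((1024, 25))   # 1024 flattened 5x5 patches
targets = rng.standard_normal((1024, 1))
w = pinv_least_squares(patches, targets)
print(w.shape)  # (25, 1)
```

Because this solve is a single linear-algebra operation rather than an iterative optimization, it avoids convergence issues by construction, which is consistent with the speedup the abstract reports.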

Original language: English
Title of host publication: 2022 International Joint Conference on Neural Networks, IJCNN 2022 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728186719
DOIs
State: Published - 2022
Event: 2022 International Joint Conference on Neural Networks, IJCNN 2022 - Padua, Italy
Duration: 18 Jul 2022 – 23 Jul 2022

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 2022-July

Conference

Conference: 2022 International Joint Conference on Neural Networks, IJCNN 2022
Country/Territory: Italy
City: Padua
Period: 18/07/22 – 23/07/22

Keywords

  • Convolutional Neural Network
  • Efficient Training
  • Gradient-Free Training
  • Pseudo-Inverse
