Regularization Strength Impact on Neural Network Ensembles

Cedrique Rovile Njieutcheu Tassi, Anko Börner, Rudolph Triebel

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In the last decade, several approaches have been proposed for regularizing deeper and wider neural networks (NNs), which is of importance in areas like image classification. It is now common practice to incorporate several regularization approaches in the training procedure of NNs. However, the impact of regularization strength on the properties of an ensemble of NNs remains unclear. For this reason, this study empirically compared ensembles built from NNs trained with two different regularization strengths (weak regularization (WR) and strong regularization (SR)) with respect to ensemble properties such as the magnitude of logits, classification accuracy, calibration error, and the ability to separate true predictions (TPs) and false predictions (FPs). The comparison was based on results from experiments conducted across three different models, datasets, and architectures. Experimental results show that increasing the regularization strength 1) reduces the magnitude of logits; 2) can increase or decrease the classification accuracy depending on the dataset and/or architecture; 3) increases the calibration error; and 4) can improve or harm the separability between TPs and FPs depending on the dataset, architecture, model type, and/or FP type.
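The four ensemble properties named in the abstract can be made concrete with a small sketch. The following is a hypothetical illustration, not the paper's own code: given per-member logits from an ensemble, it computes mean logit magnitude, accuracy, expected calibration error (ECE, one common estimator of calibration error), and a simple TP/FP separability score based on the gap in maximum softmax confidence between correct and incorrect predictions. The function name, binning scheme, and separability measure are assumptions for illustration.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_properties(member_logits, labels, n_bins=10):
    """member_logits: (n_members, n_samples, n_classes); labels: (n_samples,)."""
    # 1) Magnitude of logits: mean absolute logit across all members.
    logit_magnitude = float(np.abs(member_logits).mean())

    # Ensemble prediction: average the members' softmax outputs.
    probs = softmax(member_logits).mean(axis=0)
    conf = probs.max(axis=1)          # per-sample confidence
    preds = probs.argmax(axis=1)
    correct = preds == labels

    # 2) Classification accuracy.
    accuracy = float(correct.mean())

    # 3) Expected calibration error: weighted |accuracy - confidence| per bin.
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())

    # 4) TP/FP separability: confidence gap between true and false
    # predictions (a larger gap means TPs and FPs are easier to separate).
    if correct.any() and (~correct).any():
        separability = float(conf[correct].mean() - conf[~correct].mean())
    else:
        separability = float("nan")

    return logit_magnitude, accuracy, ece, separability
```

Under this sketch, the paper's first finding would show up as a smaller `logit_magnitude` for strongly regularized members, and its third finding as a larger `ece` for the same ensemble.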

Original language: English
Title of host publication: ACAI 2022 - Conference Proceedings
Subtitle of host publication: 2022 5th International Conference on Algorithms, Computing and Artificial Intelligence
Publisher: Association for Computing Machinery
ISBN (Electronic): 9781450398343
DOIs
State: Published - 23 Dec 2022
Externally published: Yes
Event: 5th International Conference on Algorithms, Computing and Artificial Intelligence, ACAI 2022 - Sanya, China
Duration: 23 Dec 2022 - 25 Dec 2022

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 5th International Conference on Algorithms, Computing and Artificial Intelligence, ACAI 2022
Country/Territory: China
City: Sanya
Period: 23/12/22 - 25/12/22

Keywords

  • Calibration error
  • Ensemble
  • Mixture of Monte Carlo Dropout (MMCD)
  • Monte Carlo Dropout (MCD)
  • Quality of uncertainty
  • Regularization strength
  • Separating true predictions (TPs) and false predictions (FPs)
