Hyper-parameter optimization in classification: To-do or not-to-do

Ngoc Tran, Jean Guy Schneider, Ingo Weber, A. K. Qin

Research output: Contribution to journal › Article › peer-review


Abstract

Hyper-parameter optimization is the process of finding suitable hyper-parameter values for predictive models. It typically incurs high computational costs, since the time-consuming model training process must be run to determine the effectiveness of each set of candidate hyper-parameter values. A priori, there is no guarantee that hyper-parameter optimization leads to improved performance. In this work, we propose a framework to address the question of whether one should apply hyper-parameter optimization or use the default hyper-parameter settings for traditional classification algorithms. We implemented a prototype of the framework, which we use as the basis for a three-fold evaluation with 486 datasets and 4 algorithms. The results indicate that our framework is effective at supporting modeling tasks by avoiding the adverse effects of ineffective optimizations. The results also demonstrate that incrementally adding training datasets improves the predictive performance of framework instantiations and hence enables “life-long learning.”
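To make the default-versus-optimized trade-off concrete, the following is a minimal sketch (not the paper's framework) comparing a classifier trained with default hyper-parameters against one tuned via random search, using scikit-learn. The dataset, classifier, and search space are illustrative assumptions, not choices taken from the paper.

```python
# A minimal sketch contrasting default hyper-parameters with a tuned
# configuration. Dataset, model, and search space are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Baseline: default hyper-parameter settings.
default_score = cross_val_score(
    RandomForestClassifier(random_state=0), X, y, cv=5
).mean()

# Tuned: random search over a small, illustrative space. Each candidate
# requires full cross-validated training -- the computational cost the
# abstract refers to.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200, 400],
        "max_depth": [None, 4, 8, 16],
        "min_samples_split": [2, 5, 10],
    },
    n_iter=20,
    cv=5,
    random_state=0,
)
search.fit(X, y)

print(f"default CV accuracy: {default_score:.4f}")
print(f"tuned   CV accuracy: {search.best_score_:.4f}")
# The tuned score is not guaranteed to exceed the default; deciding in
# advance whether optimization is worthwhile is the question the paper
# addresses.
```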

Original language: English
Article number: 107245
Journal: Pattern Recognition
Volume: 103
State: Published - Jul 2020
Externally published: Yes

Keywords

  • Bayesian optimization
  • Framework
  • Hyper-parameter optimization
  • Incremental learning
  • Machine learning
