Abstract
Hyper-parameter optimization is the process of finding suitable hyper-parameter values for predictive models. It typically incurs high computational costs because a time-consuming model training run is needed to assess the effectiveness of each candidate hyper-parameter configuration. A priori, there is no guarantee that hyper-parameter optimization leads to improved performance. In this work, we propose a framework to address the question of whether one should apply hyper-parameter optimization or use the default hyper-parameter settings for traditional classification algorithms. We implemented a prototype of the framework, which we use as the basis for a three-fold evaluation with 486 datasets and 4 algorithms. The results indicate that our framework is effective at helping modeling tasks avoid the adverse effects of ineffective optimizations. The results also demonstrate that incrementally adding training datasets improves the predictive performance of framework instantiations and hence enables “life-long learning.”
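The decision the abstract describes can be made concrete with a small experiment: compare the cross-validated score of a model under its default hyper-parameters against the score obtained after a tuning run, and only adopt the tuned configuration if it actually wins. The sketch below is purely illustrative and is not the paper's framework; it assumes scikit-learn, uses random search as a stand-in for the Bayesian optimization the keywords mention, and picks an arbitrary built-in dataset.

```python
# Illustrative sketch only (not the paper's framework): decide between
# default hyper-parameters and a tuned configuration by comparing scores.
# Assumes scikit-learn; RandomizedSearchCV stands in for Bayesian optimization.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Baseline: the library's default hyper-parameter settings.
default_score = cross_val_score(
    RandomForestClassifier(random_state=0), X, y, cv=5
).mean()

# Candidate: a short search over a small hyper-parameter space.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200, 400],
        "max_depth": [None, 4, 8, 16],
        "max_features": ["sqrt", "log2", None],
    },
    n_iter=20,
    cv=5,
    random_state=0,
)
search.fit(X, y)

# Optimization only pays off if the tuned score beats the default;
# otherwise the search cost was wasted, which is the risk the paper targets.
print(f"default: {default_score:.4f}  tuned: {search.best_score_:.4f}")
```

Note that this brute-force comparison itself pays the full cost of the search; the framework in the paper aims to predict, before searching, whether that cost is worth incurring.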
| Field | Value |
|---|---|
| Original language | English |
| Article number | 107245 |
| Journal | Pattern Recognition |
| Volume | 103 |
| DOIs | |
| State | Published - Jul 2020 |
| Externally published | Yes |
Keywords
- Bayesian optimization
- Framework
- Hyper-parameter optimization
- Incremental learning
- Machine learning