Fine-Grained Power Modeling of Multicore Processors Using FFNNs

Mark Sagi, Nguyen Anh Vu Doan, Nael Fasfous, Thomas Wild, Andreas Herkersdorf

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

To minimize power consumption while maximizing performance, today’s multicore processors rely on fine-grained run-time dynamic power information—both in the time domain, e.g. μs to ms, and in the space domain, e.g. core-level. The state of the art for deriving such power information is mainly based on predetermined power models which use linear modeling techniques to determine the core-performance/core-power relationship. However, as multicore processors become ever more complex, linear modeling techniques can no longer capture all possible core-performance-related power states. Although artificial neural networks (ANNs) have been proposed for coarse-grained power modeling of servers with time resolutions in the range of seconds, few works have yet investigated fine-grained ANN-based power modeling. In this paper, we explore feed-forward neural networks (FFNNs) for core-level power modeling with estimation rates in the range of 10 kHz. To achieve high estimation accuracy while minimizing run-time overhead, we propose a multi-objective optimization of the neural architecture using NSGA-II, with the FFNNs being trained on performance-counter and power data from a complex out-of-order processor architecture. We show that the relative power estimation error of the highest-accuracy FFNN decreases on average by 7.5% compared to a state-of-the-art linear power modeling approach and by 5.5% compared to a multivariate polynomial regression model. For the FFNNs optimized for both accuracy and overhead, the average error decreases by between 4.1% and 6.7% compared to linear modeling while offering significantly lower overhead than the highest-accuracy FFNN. Furthermore, we propose a micro-controller-based and an accelerator-based implementation for run-time inference of the power modeling FFNN and show that the area overhead is negligible.
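The core idea of the paper—a small feed-forward network mapping a per-core performance-counter vector to a core power estimate once per sample window—can be sketched as follows. This is a minimal illustration, not the authors' implementation: the counter count, hidden-layer width, and weights are hypothetical placeholders (the paper derives the actual architectures via NSGA-II and training on measured data).

```python
import numpy as np

# Hypothetical dimensions; the paper's NSGA-II search selects the real ones.
N_COUNTERS = 8   # assumed number of performance-counter inputs per core
N_HIDDEN = 16    # assumed hidden-layer width

rng = np.random.default_rng(0)

# Randomly initialized weights stand in for trained parameters.
W1 = rng.standard_normal((N_COUNTERS, N_HIDDEN)) * 0.1
b1 = np.zeros(N_HIDDEN)
W2 = rng.standard_normal(N_HIDDEN) * 0.1
b2 = 0.0

def estimate_power(counters: np.ndarray) -> float:
    """Forward pass: performance-counter vector -> core power estimate (watts)."""
    h = np.maximum(0.0, counters @ W1 + b1)  # single ReLU hidden layer
    return float(h @ W2 + b2)

# One estimate per 100-microsecond counter sample (a 10 kHz estimation rate).
sample = rng.random(N_COUNTERS)
power = estimate_power(sample)
```

A network this small keeps the per-inference cost low enough that, as the paper argues, it can run on a dedicated micro-controller or small accelerator with negligible area overhead.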

Original language: English
Pages (from-to): 243-266
Number of pages: 24
Journal: International Journal of Parallel Programming
Volume: 50
Issue number: 2
DOIs
State: Published - Apr 2022

Keywords

  • ANN
  • Accuracy
  • Artificial neural network
  • Core-level
  • Error
  • Estimation
  • FFNN
  • Modeling
  • Multi-objective-optimization
  • Multicore
  • NSGA-II
  • Overhead
  • Power
  • Processor
