Targeting DNN Inference Via Efficient Utilization of Heterogeneous Precision DNN Accelerators

Ourania Spantidi, Georgios Zervakis, Sami Alsalamin, Isai Roman-Ballesteros, Jörg Henkel, Hussam Amrouch, Iraklis Anagnostopoulos

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

Modern applications increasingly rely on the simultaneous execution of multiple DNNs, and Heterogeneous DNN Accelerators (HDAs) have emerged as a solution to this trend. In this work, we propose, implement, and evaluate low-precision Neural Processing Units (NPUs) that serve as building blocks for constructing HDAs, targeting the efficient deployment of multi-DNN workloads. Moreover, we design and evaluate HDA designs that increase overall throughput while reducing energy consumption during NN inference. At design time, we implement HDAs inspired by the big.LITTLE computing paradigm, comprising 8-bit NPUs together with lower-precision NPUs. Additionally, an NN-to-NPU scheduling methodology decides at run-time how to map each executed NN to a suitable NPU based on an accuracy-drop threshold value. Our hardware/software co-design reduces the energy consumption and response time of NNs by 29% and 10%, respectively, compared to state-of-the-art homogeneous architectures. This comes with a negligible accuracy drop of merely 0.5%. Similar to the traditional CPU big.LITTLE, our asymmetric NPU design can open new doors for designing novel DNN accelerator architectures, given their profound role in increasing the efficiency of DNNs with minimal losses in accuracy.
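To illustrate the run-time scheduling idea described above, the following is a minimal sketch, not the authors' implementation: a mapper that, for each NN, picks the most energy-efficient NPU whose profiled accuracy drop stays within the tolerated threshold, falling back to the accurate 8-bit NPU otherwise. The per-NPU accuracy-drop and energy figures, the function name, and the 0.5% default threshold usage are all hypothetical inputs chosen for the example.

```python
# Hypothetical sketch of a threshold-based NN-to-NPU mapper, in the spirit
# of the scheduling methodology summarized in the abstract (not the paper's code).

ACCURACY_DROP_THRESHOLD = 0.5  # max tolerated accuracy drop, in percentage points

def map_nn_to_npu(profile, threshold=ACCURACY_DROP_THRESHOLD):
    """Pick the lowest-energy NPU whose accuracy drop is within the threshold.

    `profile` maps an NPU bit-width -> (accuracy_drop_pct, relative_energy),
    obtained from offline profiling. The 8-bit NPU is assumed to be the
    accurate fallback with zero drop, so `eligible` is never empty.
    """
    eligible = {bw: (drop, energy) for bw, (drop, energy) in profile.items()
                if drop <= threshold}
    # Among the NPUs that satisfy the accuracy constraint, choose the one
    # with the lowest relative energy cost.
    best_bw = min(eligible, key=lambda bw: eligible[bw][1])
    return best_bw

# Hypothetical profiling data for one NN: bit-width -> (accuracy drop %, energy).
profile = {8: (0.0, 1.00), 6: (0.3, 0.71), 4: (2.1, 0.45)}
print(map_nn_to_npu(profile))  # -> 6: the 6-bit NPU meets the 0.5% threshold
```

Under these assumed numbers, the 4-bit NPU is rejected (2.1% > 0.5%) and the scheduler picks the 6-bit NPU over the 8-bit one because it satisfies the accuracy constraint at lower energy.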

Original language: English
Pages (from-to): 112-125
Number of pages: 14
Journal: IEEE Transactions on Emerging Topics in Computing
Volume: 11
Issue number: 1
DOIs
State: Published - 1 Jan 2023
Externally published: Yes

Keywords

  • Approximate computing
  • deep neural networks
  • hardware-software co-design
  • heterogeneous (approximate) accelerators
  • low-power
  • systolic MAC array
