Leveraging Highly Approximated Multipliers in DNN Inference

Georgios Zervakis, Fabio Frustaci, Ourania Spantidi, Iraklis Anagnostopoulos, Hussam Amrouch, Jörg Henkel

Research output: Contribution to journal › Article › peer-review

Abstract

In this work, we present our control variate approximation technique, which enables the exploitation of highly approximate multipliers in Deep Neural Network (DNN) accelerators. Our approach requires no retraining and significantly decreases the error induced by approximate multiplications, improving the overall inference accuracy. As a result, control variate approximation satisfies tight accuracy-loss constraints while boosting power savings. Our experimental evaluation, across six DNNs and several approximate multipliers, demonstrates the versatility of the control variate technique: compared to the accurate design, it achieves the same performance and 45% power reduction with less than 1% average accuracy loss. Compared to the corresponding approximate designs without our technique, the error correction of the control variate method improves accuracy by 1.9x on average.
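For intuition, the control-variate idea is to pair each cheap approximate multiplication with a low-cost correction term whose expected value cancels the mean accumulated error. Below is a minimal Python sketch of this general idea under illustrative assumptions: a toy multiplier that truncates the 3 LSBs of a non-negative activation, and a correction coefficient derived from that error model. The function names, the multiplier model, and the coefficient are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np

def approx_mul(w, x):
    # Hypothetical approximate multiplier model: drop the 3 LSBs of
    # the (non-negative) activation before an exact multiply.
    return w * ((x >> 3) << 3)

def corrected_neuron(weights, acts, eps=3.5):
    # Neuron output from approximate products plus one low-cost
    # control-variate correction. For this toy model the per-product
    # error is w * (x & 7); over uniform low bits its expectation is
    # 3.5 * w, so adding eps * sum(weights) cancels the mean error.
    # eps and this error model are assumptions of the sketch.
    approx_sum = sum(approx_mul(w, x) for w, x in zip(weights, acts))
    return approx_sum + eps * sum(weights)

# Usage: compare exact, approximate, and corrected dot products.
rng = np.random.default_rng(0)
w = rng.integers(-128, 128, size=256)
x = rng.integers(0, 256, size=256)  # e.g. unsigned post-ReLU activations
exact = int(w @ x)
approx = sum(approx_mul(int(a), int(b)) for a, b in zip(w, x))
print("exact:", exact, "approx:", approx,
      "corrected:", corrected_neuron(w.tolist(), x.tolist()))
```

Note that sum(weights) is known offline, so the correction reduces to a single constant bias per neuron, which is why such error correction adds essentially no runtime cost.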

Original language: English
Pages (from-to): 47897-47911
Number of pages: 15
Journal: IEEE Access
Volume: 13
DOIs
State: Published - 2025

Keywords

  • Approximate computing
  • approximate multipliers
  • control variate
  • deep neural networks
  • error correction
  • low power
