TY - GEN
T1 - Positive/Negative Approximate Multipliers for DNN Accelerators
AU - Spantidi, Ourania
AU - Zervakis, Georgios
AU - Anagnostopoulos, Iraklis
AU - Amrouch, Hussam
AU - Henkel, Jörg
N1 - Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - Recent Deep Neural Networks (DNNs) deliver superhuman accuracy on many AI tasks, and DNN accelerators are becoming integral components of modern systems-on-chip. DNNs perform millions of arithmetic operations per inference, and DNN accelerators integrate thousands of multiply-accumulate units, leading to high energy requirements. To lower the energy consumption of DNN accelerators, approximate computing principles are employed; however, complex DNNs can be increasingly sensitive to approximation. In this work, we present a dynamically configurable approximate multiplier that supports three operation modes: exact, positive error, and negative error. In addition, we propose a filter-oriented approximation method that maps the weights to the appropriate modes of the approximate multiplier. Our mapping algorithm balances the positive and negative errors introduced by the approximate multiplications, maximizing the energy reduction while minimizing the overall convolution error. We evaluate our approach on multiple DNNs and datasets against state-of-the-art approaches; our method achieves 18.33% energy gains on average across 7 DNNs on 4 different datasets for a maximum accuracy drop of only 1%.
AB - Recent Deep Neural Networks (DNNs) deliver superhuman accuracy on many AI tasks, and DNN accelerators are becoming integral components of modern systems-on-chip. DNNs perform millions of arithmetic operations per inference, and DNN accelerators integrate thousands of multiply-accumulate units, leading to high energy requirements. To lower the energy consumption of DNN accelerators, approximate computing principles are employed; however, complex DNNs can be increasingly sensitive to approximation. In this work, we present a dynamically configurable approximate multiplier that supports three operation modes: exact, positive error, and negative error. In addition, we propose a filter-oriented approximation method that maps the weights to the appropriate modes of the approximate multiplier. Our mapping algorithm balances the positive and negative errors introduced by the approximate multiplications, maximizing the energy reduction while minimizing the overall convolution error. We evaluate our approach on multiple DNNs and datasets against state-of-the-art approaches; our method achieves 18.33% energy gains on average across 7 DNNs on 4 different datasets for a maximum accuracy drop of only 1%.
KW - Approximate Computing
KW - Deep Neural Networks
KW - Low Power
KW - Multipliers
UR - http://www.scopus.com/inward/record.url?scp=85124136984&partnerID=8YFLogxK
U2 - 10.1109/ICCAD51958.2021.09643491
DO - 10.1109/ICCAD51958.2021.09643491
M3 - Conference contribution
AN - SCOPUS:85124136984
T3 - IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, ICCAD
BT - 2021 40th IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2021 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 40th IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2021
Y2 - 1 November 2021 through 4 November 2021
ER -