HW-FlowQ: A Multi-Abstraction Level HW-CNN Co-design Quantization Methodology

Nael Fasfous, Manoj Rohit Vemparala, Alexander Frickenstein, Emanuele Valpreda, Driton Salihu, Nguyen Anh Vu Doan, Christian Unger, Naveen Shankar Nagaraja, Maurizio Martina, Walter Stechele

Publication: Contribution to journal › Article › Peer-reviewed

8 citations (Scopus)

Abstract

Model compression through quantization is commonly applied to convolutional neural networks (CNNs) deployed on compute- and memory-constrained embedded platforms. Different layers of the CNN can have varying degrees of numerical precision for both weights and activations, resulting in a large search space. Together with the hardware (HW) design space, the challenge of finding the globally optimal HW-CNN combination for a given application becomes daunting. To this end, we propose HW-FlowQ, a systematic approach that enables the co-design of the target hardware platform and the compressed CNN model through quantization. The search space is viewed at three levels of abstraction, allowing for an iterative approach to narrowing down the solution space before reaching a high-fidelity CNN hardware modeling tool, capable of capturing the effects of mixed-precision quantization strategies on different hardware architectures (processing unit counts, memory levels, cost models, dataflows) and two types of computation engines (bit-parallel vectorized, bit-serial). To combine both worlds, a multi-objective non-dominated sorting genetic algorithm (NSGA-II) is leveraged to establish a Pareto-optimal set of quantization strategies for the target HW metrics at each abstraction level. HW-FlowQ detects optima in a discrete search space and maximizes the task-related accuracy of the underlying CNN while minimizing hardware-related costs. The Pareto-front approach keeps the design space open to a range of non-dominated solutions before refining the design to a more detailed level of abstraction. With equivalent prediction accuracy, we improve energy and latency by 20% and 45%, respectively, for ResNet56 compared to existing mixed-precision search methods.
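To make the search concrete, the sketch below (not the authors' implementation) shows how an NSGA-II-style loop could explore per-layer bitwidth assignments against two objectives, as described in the abstract. The objective functions `proxy_accuracy_loss` and `proxy_hw_cost`, the layer count, and the candidate bitwidths are all illustrative placeholders standing in for the paper's CNN evaluation and hardware modeling tool.

```python
# Minimal NSGA-II-style search over per-layer bitwidths (illustrative sketch).
# Both objectives are minimized: a proxy for accuracy loss and a proxy for HW cost.
import random

LAYERS = 8                 # number of CNN layers to quantize (assumed)
BITWIDTHS = [2, 4, 6, 8]   # candidate precisions per layer (assumed)

def proxy_accuracy_loss(genome):
    # Placeholder: lower bitwidths are assumed to hurt accuracy more.
    return sum(1.0 / b for b in genome)

def proxy_hw_cost(genome):
    # Placeholder: hardware cost (e.g., energy/latency proxy) grows with precision.
    return sum(b * b for b in genome)

def evaluate(genome):
    return (proxy_accuracy_loss(genome), proxy_hw_cost(genome))

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    # Partition indices into successive non-dominated fronts.
    fronts, remaining = [], set(range(len(objs)))
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)}
        fronts.append(sorted(front))
        remaining -= front
    return fronts

def crowding_distance(front, objs):
    # Standard crowding distance to preserve diversity within a front.
    dist = {i: 0.0 for i in front}
    for m in range(len(objs[0])):
        ordered = sorted(front, key=lambda i: objs[i][m])
        dist[ordered[0]] = dist[ordered[-1]] = float("inf")
        span = objs[ordered[-1]][m] - objs[ordered[0]][m] or 1.0
        for k in range(1, len(ordered) - 1):
            dist[ordered[k]] += (objs[ordered[k + 1]][m] - objs[ordered[k - 1]][m]) / span
    return dist

def make_child(p1, p2):
    child = [random.choice(pair) for pair in zip(p1, p2)]  # uniform crossover
    if random.random() < 0.3:                              # single-gene mutation
        child[random.randrange(LAYERS)] = random.choice(BITWIDTHS)
    return child

def nsga2(pop_size=24, generations=40):
    pop = [[random.choice(BITWIDTHS) for _ in range(LAYERS)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [make_child(random.choice(pop), random.choice(pop))
                     for _ in range(pop_size)]
        union = pop + offspring
        objs = [evaluate(g) for g in union]
        # Environmental selection: fill the next population front by front,
        # breaking ties in the last admitted front by crowding distance.
        next_pop = []
        for front in non_dominated_sort(objs):
            if len(next_pop) + len(front) <= pop_size:
                next_pop.extend(front)
            else:
                dist = crowding_distance(front, objs)
                front.sort(key=lambda i: dist[i], reverse=True)
                next_pop.extend(front[:pop_size - len(next_pop)])
                break
        pop = [union[i] for i in next_pop]
    return pop

if __name__ == "__main__":
    final_pop = nsga2()
    objs = [evaluate(g) for g in final_pop]
    for idx in non_dominated_sort(objs)[0][:5]:
        print(final_pop[idx], objs[idx])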

Original language: English
Article number: 66
Journal: ACM Transactions on Embedded Computing Systems
Volume: 20
Issue number: 5s
DOIs
Publication status: Published - Oct 2021

