TY - GEN
T1 - An efficient FPGA accelerator design for optimized CNNs using OpenCL
AU - Vemparala, Manoj Rohit
AU - Frickenstein, Alexander
AU - Stechele, Walter
N1 - Publisher Copyright:
© Springer Nature Switzerland AG 2019.
PY - 2019
Y1 - 2019
N2 - Convolutional Neural Networks (CNNs) require highly parallel Hardware (HW) accelerators in the form of Graphics Processing Units (GPUs), Application-Specific Integrated Circuits (ASICs), or Field Programmable Gate Arrays (FPGAs) to build the low-latency solutions necessary for image processing applications. FPGAs provide a good balance between flexibility, performance, and energy efficiency. The design of FPGA-based accelerators has traditionally required a tedious Register Transfer Level (RTL) design flow. To improve design productivity, the proposed work uses High-Level Synthesis (HLS), described in OpenCL, to generate the FPGA bitstream for the CNN model. The 2D Winograd transformation is integrated into the pipeline to reduce the overall number of Multiply-and-Accumulate (MAC) operations in the CNN. Instead of increasing the batch size to improve throughput, this work discusses a mixed-precision approach that can counter the limited memory bandwidth within the CNN. The obtained results are competitive with other FPGA-based implementations proposed in the literature. The proposed accelerator achieves more than 1.9× higher energy efficiency compared to an embedded Nvidia Jetson TX1 implementation of VGG-16.
AB - Convolutional Neural Networks (CNNs) require highly parallel Hardware (HW) accelerators in the form of Graphics Processing Units (GPUs), Application-Specific Integrated Circuits (ASICs), or Field Programmable Gate Arrays (FPGAs) to build the low-latency solutions necessary for image processing applications. FPGAs provide a good balance between flexibility, performance, and energy efficiency. The design of FPGA-based accelerators has traditionally required a tedious Register Transfer Level (RTL) design flow. To improve design productivity, the proposed work uses High-Level Synthesis (HLS), described in OpenCL, to generate the FPGA bitstream for the CNN model. The 2D Winograd transformation is integrated into the pipeline to reduce the overall number of Multiply-and-Accumulate (MAC) operations in the CNN. Instead of increasing the batch size to improve throughput, this work discusses a mixed-precision approach that can counter the limited memory bandwidth within the CNN. The obtained results are competitive with other FPGA-based implementations proposed in the literature. The proposed accelerator achieves more than 1.9× higher energy efficiency compared to an embedded Nvidia Jetson TX1 implementation of VGG-16.
KW - CNN
KW - FPGA
KW - HLS
KW - Quantization
KW - Winograd transform
UR - http://www.scopus.com/inward/record.url?scp=85065871356&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-18656-2_18
DO - 10.1007/978-3-030-18656-2_18
M3 - Conference contribution
AN - SCOPUS:85065871356
SN - 9783030186555
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 236
EP - 249
BT - Architecture of Computing Systems - ARCS 2019 - 32nd International Conference, Proceedings
A2 - Schoeberl, Martin
A2 - Pionteck, Thilo
A2 - Brehm, Jürgen
A2 - Hochberger, Christian
A2 - Uhrig, Sascha
PB - Springer Verlag
T2 - 32nd International Conference on Architecture of Computing Systems, ARCS 2019
Y2 - 20 May 2019 through 23 May 2019
ER -