TY - GEN
T1 - Energy and performance prediction of CUDA applications using Dynamic Regression models
AU - Benedict, Shajulin
AU - Rejitha, R. S.
AU - Alex, Suja A.
N1 - Publisher Copyright:
© 2016 ACM.
PY - 2016/2/18
Y1 - 2016/2/18
N2 - Many emerging supercomputers and future exa-scale computing machines require accelerator-based GPU computing architectures to boost their computing performance. CUDA is one of the most widely applied GPGPU parallel computing platforms for those architectures owing to its better performance for certain scientific applications. However, the rising number of CUDA applications from various scientific domains, such as bioinformatics and HEP, has urged the need for tools that identify optimal application parameters and other GPGPU architecture metrics, including work group size, work items, and memory utilization. In fact, the tuning process might end up with several executions of various possible code variants. This paper proposes Dynamic Regression models, namely Dynamic Random Forests (DynRFM), Dynamic Support Vector Machines (DynSVM), and Dynamic Linear Regression Models (DynLRM), for the energy/performance prediction of the code variants of CUDA applications. The prediction is based on application parameters and application performance metrics, such as the number of instructions and memory issues. In order to obtain energy/performance measurements for CUDA applications, EACudaLib (a monitoring library implemented in the EnergyAnalyzer tool) was developed. In addition, the proposed Dynamic Regression models were compared with the classical regression models RFM, SVM, and LRM. The validation results of the proposed dynamic regression models, when tested with different problem sizes of the Nbody and Particle CUDA simulations, showed energy/performance prediction improvements of 50.26 to 61.23 percent.
AB - Many emerging supercomputers and future exa-scale computing machines require accelerator-based GPU computing architectures to boost their computing performance. CUDA is one of the most widely applied GPGPU parallel computing platforms for those architectures owing to its better performance for certain scientific applications. However, the rising number of CUDA applications from various scientific domains, such as bioinformatics and HEP, has urged the need for tools that identify optimal application parameters and other GPGPU architecture metrics, including work group size, work items, and memory utilization. In fact, the tuning process might end up with several executions of various possible code variants. This paper proposes Dynamic Regression models, namely Dynamic Random Forests (DynRFM), Dynamic Support Vector Machines (DynSVM), and Dynamic Linear Regression Models (DynLRM), for the energy/performance prediction of the code variants of CUDA applications. The prediction is based on application parameters and application performance metrics, such as the number of instructions and memory issues. In order to obtain energy/performance measurements for CUDA applications, EACudaLib (a monitoring library implemented in the EnergyAnalyzer tool) was developed. In addition, the proposed Dynamic Regression models were compared with the classical regression models RFM, SVM, and LRM. The validation results of the proposed dynamic regression models, when tested with different problem sizes of the Nbody and Particle CUDA simulations, showed energy/performance prediction improvements of 50.26 to 61.23 percent.
KW - Applications
KW - CUDA
KW - Energy
KW - Performance analysis
KW - Performance tuning
KW - Tools
UR - http://www.scopus.com/inward/record.url?scp=84976626491&partnerID=8YFLogxK
U2 - 10.1145/2856636.2856643
DO - 10.1145/2856636.2856643
M3 - Conference contribution
AN - SCOPUS:84976626491
T3 - ACM International Conference Proceeding Series
SP - 37
EP - 47
BT - iSOFT 2016 - Proceedings of the 9th India Software Engineering Conference, ISEC 2016
PB - Association for Computing Machinery
T2 - 9th India Software Engineering Conference, ISEC 2016
Y2 - 18 February 2016 through 20 February 2016
ER -