TY - GEN
T1 - Training Large Language Models for System-Level Test Program Generation Targeting Non-functional Properties
AU - Schwachhofer, Denis
AU - Domanski, Peter
AU - Becker, Steffen
AU - Wagner, Stefan
AU - Sauer, Matthias
AU - Pfluger, Dirk
AU - Polian, Ilia
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - System-Level Test (SLT) has been an integral part of integrated circuit test flows for over a decade and continues to be significant. Nevertheless, there is a lack of systematic approaches for generating test programs that specifically target the non-functional properties of the Device under Test (DUT). Currently, test engineers manually create test suites using commercially available software to simulate the end-user environment of the DUT. This process is challenging and laborious and does not ensure adequate control over non-functional properties. This paper proposes using Large Language Models (LLMs) for SLT program generation. We fine-tune a pre-trained LLM to generate test programs that optimize non-functional properties of the DUT, e.g., instructions per cycle. To this end, we use gem5, a microarchitectural simulator, in conjunction with Reinforcement Learning-based training. Finally, we write a prompt to generate C code snippets that maximize the instructions per cycle of the given architecture. In addition, we apply hyperparameter optimization to achieve the best possible results during inference.
AB - System-Level Test (SLT) has been an integral part of integrated circuit test flows for over a decade and continues to be significant. Nevertheless, there is a lack of systematic approaches for generating test programs that specifically target the non-functional properties of the Device under Test (DUT). Currently, test engineers manually create test suites using commercially available software to simulate the end-user environment of the DUT. This process is challenging and laborious and does not ensure adequate control over non-functional properties. This paper proposes using Large Language Models (LLMs) for SLT program generation. We fine-tune a pre-trained LLM to generate test programs that optimize non-functional properties of the DUT, e.g., instructions per cycle. To this end, we use gem5, a microarchitectural simulator, in conjunction with Reinforcement Learning-based training. Finally, we write a prompt to generate C code snippets that maximize the instructions per cycle of the given architecture. In addition, we apply hyperparameter optimization to achieve the best possible results during inference.
KW - Functional Test
KW - Large Language Models
KW - Optimization
KW - System-Level Test
KW - Test Generation
UR - http://www.scopus.com/inward/record.url?scp=85197479921&partnerID=8YFLogxK
U2 - 10.1109/ETS61313.2024.10567741
DO - 10.1109/ETS61313.2024.10567741
M3 - Conference contribution
AN - SCOPUS:85197479921
T3 - Proceedings of the European Test Workshop
BT - Proceedings - 2024 29th IEEE European Test Symposium, ETS 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 29th IEEE European Test Symposium, ETS 2024
Y2 - 20 May 2024 through 24 May 2024
ER -