TY - JOUR
T1 - A Masked Hardware Accelerator for Feed-Forward Neural Networks With Fixed-Point Arithmetic
AU - Brosch, Manuel
AU - Probst, Matthias
AU - Glaser, Matthias
AU - Sigl, Georg
N1 - Publisher Copyright:
© 1993-2012 IEEE.
PY - 2024/2/1
Y1 - 2024/2/1
N2 - Neural network (NN) execution on resource-constrained edge devices is increasing. Commonly, hardware accelerators are introduced in small devices to support the execution of NNs. However, an attacker can often gain physical access to edge devices. Therefore, side-channel attacks are a potential threat to obtain valuable information about the NN. In order to keep the network secret and protect it from extraction, countermeasures are required. In this article, we propose a masked hardware accelerator for feed-forward NNs that utilizes fixed-point arithmetic and is protected against side-channel analysis (SCA). We use an existing arithmetic masking scheme and improve it to prevent incorrect results. Moreover, we transfer the scheme to the hardware layer by utilizing the glitch-extended probing model and demonstrate the security of the individual modules. To exhibit the effectiveness of the masked design, we implement it on an FPGA and measure the power consumption. The results show that with two million measurements, no secret information is leaked by means of a t-test. In addition, we compare our accelerator with the masked software implementation and other hardware designs. The comparison indicates that our accelerator is up to 38 times faster than software and improves the throughput by a factor of about 4.1 compared to other masked hardware accelerators.
AB - Neural network (NN) execution on resource-constrained edge devices is increasing. Commonly, hardware accelerators are introduced in small devices to support the execution of NNs. However, an attacker can often gain physical access to edge devices. Therefore, side-channel attacks are a potential threat to obtain valuable information about the NN. In order to keep the network secret and protect it from extraction, countermeasures are required. In this article, we propose a masked hardware accelerator for feed-forward NNs that utilizes fixed-point arithmetic and is protected against side-channel analysis (SCA). We use an existing arithmetic masking scheme and improve it to prevent incorrect results. Moreover, we transfer the scheme to the hardware layer by utilizing the glitch-extended probing model and demonstrate the security of the individual modules. To exhibit the effectiveness of the masked design, we implement it on an FPGA and measure the power consumption. The results show that with two million measurements, no secret information is leaked by means of a t-test. In addition, we compare our accelerator with the masked software implementation and other hardware designs. The comparison indicates that our accelerator is up to 38 times faster than software and improves the throughput by a factor of about 4.1 compared to other masked hardware accelerators.
KW - Countermeasure
KW - hardware
KW - masking
KW - neural network (NN) accelerator
KW - side-channel analysis (SCA)
UR - http://www.scopus.com/inward/record.url?scp=85180311776&partnerID=8YFLogxK
U2 - 10.1109/TVLSI.2023.3340553
DO - 10.1109/TVLSI.2023.3340553
M3 - Article
AN - SCOPUS:85180311776
SN - 1063-8210
VL - 32
SP - 231
EP - 244
JO - IEEE Transactions on Very Large Scale Integration (VLSI) Systems
JF - IEEE Transactions on Very Large Scale Integration (VLSI) Systems
IS - 2
ER -