TY - GEN
T1 - Robustness of on-Device Models: Adversarial Attack to Deep Learning Models on Android Apps
T2 - 43rd IEEE/ACM International Conference on Software Engineering: Software Engineering in Practice, ICSE-SEIP 2021
AU - Huang, Yujin
AU - Hu, Han
AU - Chen, Chunyang
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/5
Y1 - 2021/5
AB - Deep learning has shown its power in many applications, including object detection in images, natural-language understanding, and speech recognition. To make it more accessible to end users, many deep learning models are now embedded in mobile apps. Compared to offloading deep learning from smartphones to the cloud, performing machine learning on-device can help improve latency, connectivity, and power consumption. However, most deep learning models within Android apps can easily be obtained via mature reverse engineering, and the models' exposure may invite adversarial attacks. In this study, we propose a simple but effective approach to attacking deep learning models with adversarial examples by identifying highly similar pre-trained models from TensorFlow Hub. All 10 real-world Android apps in the experiment are successfully attacked by our approach. Apart from the feasibility of the model attack, we also carry out an empirical study that investigates the characteristics of deep learning models used by hundreds of Android apps on Google Play. The results show that many of these models are similar to each other and are widely fine-tuned from pre-trained models available on the Internet.
KW - Adversarial attack
KW - Android
KW - Deep learning
KW - Mobile apps
KW - Security
UR - http://www.scopus.com/inward/record.url?scp=85111207102&partnerID=8YFLogxK
U2 - 10.1109/ICSE-SEIP52600.2021.00019
DO - 10.1109/ICSE-SEIP52600.2021.00019
M3 - Conference contribution
AN - SCOPUS:85111207102
T3 - Proceedings - International Conference on Software Engineering
SP - 101
EP - 110
BT - Proceedings - 2021 IEEE/ACM 43rd International Conference on Software Engineering: Software Engineering in Practice, ICSE-SEIP 2021
PB - IEEE Computer Society
Y2 - 25 May 2021 through 28 May 2021
ER -