Model-less Is the Best Model: Generating Pure Code Implementations to Replace On-Device DL Models

Mingyi Zhou, Xiang Gao, Pei Liu, John Grundy, Chunyang Chen, Xiao Chen, Li Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Recent studies show that on-device deployed deep learning (DL) models, such as those of TensorFlow Lite (TFLite), can be easily extracted from real-world applications and devices by attackers to mount many kinds of adversarial and other attacks. Although securing deployed on-device DL models has gained increasing attention, no existing methods can fully prevent these attacks. Traditional software protection techniques, in contrast, have been widely explored; if on-device models could be implemented as pure code, such as C++, it would open the possibility of reusing these existing, robust software protection techniques. However, due to the complexity of DL models, there is no automatic method that can translate DL models to pure code. To fill this gap, we propose a novel method, CustomDLCoder, to automatically extract on-device DL model information and synthesize a customized executable program for a wide range of DL models. CustomDLCoder first parses the DL model, extracts its backend computing code, configures the extracted code, and then generates a customized program to implement and deploy the DL model without an explicit model representation. The synthesized program hides model information in the DL deployment environment, since it does not need to retain an explicit model representation, preventing many attacks on the DL model. In addition, it improves ML performance because the customized code removes the model parsing and preprocessing steps and retains only the data-computing process. Our experimental results show that CustomDLCoder improves model security by disabling on-device model sniffing. Compared with the original on-device platform (i.e., TFLite), our method can accelerate model inference by 21.0% and 24.3% on x86-64 and ARM64 platforms, respectively. Most importantly, it can significantly reduce memory consumption by 68.8% and 36.0% on x86-64 and ARM64 platforms, respectively.
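To make the idea of "pure code instead of a model file" concrete, the sketch below is an illustrative, hand-written analogue of what a generated implementation for a single dense layer with ReLU might look like; it is not output from the authors' tool, and all names, shapes, and weight values are hypothetical. The point it demonstrates is that parameters are baked into the binary and inference is plain C++ code, so no .tflite file or interpreter needs to ship with the application.

```cpp
// Illustrative sketch only: a hand-written analogue of a generated pure-code
// implementation for one dense layer + ReLU. Weights and bias values are
// hypothetical placeholders, not extracted from any real model.
#include <array>
#include <cstdio>

namespace generated_model {  // hypothetical namespace produced by a code generator

constexpr int kIn = 4;
constexpr int kOut = 2;

// Parameters embedded as constants instead of being stored in a .tflite file.
constexpr std::array<float, kOut * kIn> kWeights = {
     0.12f, -0.53f,  0.88f,  0.05f,
    -0.34f,  0.91f, -0.20f,  0.47f};
constexpr std::array<float, kOut> kBias = {0.10f, -0.25f};

// The inference path is plain C++: no model parsing, no interpreter setup,
// only the data-computing steps.
std::array<float, kOut> Invoke(const std::array<float, kIn>& input) {
  std::array<float, kOut> output{};
  for (int o = 0; o < kOut; ++o) {
    float acc = kBias[o];
    for (int i = 0; i < kIn; ++i) {
      acc += kWeights[o * kIn + i] * input[i];
    }
    output[o] = acc > 0.0f ? acc : 0.0f;  // ReLU activation
  }
  return output;
}

}  // namespace generated_model

int main() {
  const std::array<float, 4> input = {1.0f, 0.5f, -0.3f, 2.0f};
  const auto out = generated_model::Invoke(input);
  std::printf("%f %f\n", out[0], out[1]);
  return 0;
}
```

Because nothing in such a binary is an explicit model representation, attacks that rely on sniffing or extracting a model file from the app package have no artifact to target, which is the property the paper evaluates.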

Original language: English
Title of host publication: ISSTA 2024 - Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis
Editors: Maria Christakis, Michael Pradel
Publisher: Association for Computing Machinery, Inc
Pages: 174-185
Number of pages: 12
ISBN (Electronic): 9798400706127
DOIs
State: Published - 11 Sep 2024
Event: 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2024 - Vienna, Austria
Duration: 16 Sep 2024 - 20 Sep 2024

Publication series

Name: ISSTA 2024 - Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis

Conference

Conference: 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2024
Country/Territory: Austria
City: Vienna
Period: 16/09/24 - 20/09/24

Keywords

  • AI safety
  • SE for AI
  • software optimization for AI deployment

