DynaMO: Protecting Mobile DL Models through Coupling Obfuscated DL Operators

Mingyi Zhou, Xiang Gao, Xiao Chen, Chunyang Chen, John Grundy, Li Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Deploying deep learning (DL) models in mobile applications (apps) has become increasingly popular. However, existing studies show that attackers can easily reverse-engineer on-device DL models to steal intellectual property or craft effective attacks. Model obfuscation has recently been proposed to defend against such reverse engineering by obfuscating DL model representations, such as weights and computational graphs, without affecting model performance. Existing model obfuscation techniques are either static, or half-dynamic methods that require users to restore the model information through additional input arguments. Neither provides sufficient protection for on-device DL models: because the correct model information and intermediate results must be recovered at runtime, attackers can use dynamic analysis to mine this sensitive information from the inference code. We assess the vulnerability of existing obfuscation strategies with an instrumentation method and tool, DLModelExplorer, which dynamically extracts correct sensitive model information (i.e., weights and computational graphs) at runtime. Experiments show that it achieves very high attack performance (e.g., a 98.76% weight extraction rate and a 99.89% obfuscating-operator classification rate). To defend against such instrumentation-based attacks, we propose DynaMO, a fully dynamic model obfuscation strategy inspired by homomorphic encryption: obfuscation and recovery are performed through simple linear transformations of the weights of randomly coupled eligible operators. Experiments show that DynaMO dramatically improves model security compared with existing obfuscation strategies, with only negligible overhead for on-device models.
Our prototype tool is publicly available at https://github.com/zhoumingyi/DynaMO.
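The abstract's core idea of coupling operators via linear weight transformations can be illustrated with a minimal sketch. This is not the authors' implementation, only an assumed instance of the general technique: a hypothetical positive per-channel scale `alpha` is multiplied into the first operator's weights, and its inverse is folded into a coupled downstream operator, so the composed output is recovered at inference time without ever materializing the original weights. A ReLU between the two operators commutes with positive per-channel scaling, so the coupling survives the activation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two coupled linear operators: W1 maps 4 -> 8, W2 maps 8 -> 3.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(3, 8))
x = rng.normal(size=(4,))

# Hypothetical obfuscation key: positive per-channel scales.
alpha = rng.uniform(0.5, 2.0, size=(8,))

# Obfuscated weights as they would be stored on device:
# rows of W1 are scaled, and the matching columns of W2 are
# inverse-scaled, so the composition cancels the transformation.
W1_obf = alpha[:, None] * W1
W2_obf = W2 / alpha[None, :]

relu = lambda v: np.maximum(v, 0.0)

y_plain = W2 @ relu(W1 @ x)
# ReLU(alpha * z) == alpha * ReLU(z) element-wise for alpha > 0,
# so the inverse scale in W2_obf restores the original output.
y_obf = W2_obf @ relu(W1_obf @ x)

assert np.allclose(y_plain, y_obf)
```

An attacker who dumps `W1_obf` and `W2_obf` individually recovers neither original weight matrix, yet the model's outputs are unchanged — which is the property the paper's fully dynamic strategy exploits.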

Original language: English
Title of host publication: Proceedings - 2024 39th ACM/IEEE International Conference on Automated Software Engineering, ASE 2024
Publisher: Association for Computing Machinery, Inc
Pages: 204-215
Number of pages: 12
ISBN (Electronic): 9798400712487
DOIs
State: Published - 27 Oct 2024
Event: 39th ACM/IEEE International Conference on Automated Software Engineering, ASE 2024 - Sacramento, United States
Duration: 28 Oct 2024 - 1 Nov 2024

Publication series

Name: Proceedings - 2024 39th ACM/IEEE International Conference on Automated Software Engineering, ASE 2024

Conference

Conference: 39th ACM/IEEE International Conference on Automated Software Engineering, ASE 2024
Country/Territory: United States
City: Sacramento
Period: 28/10/24 - 1/11/24

Keywords

  • AI safety
  • on-device AI
  • SE for AI
