Your Mind is Mine: How to Automatically Steal DL Models From Android Apps — Marie Paindavoine, Maxence Despres, Mohamed Sabt
Date: 08 June 2023 at 15:00 — 30 min.
Ubiquitous Deep Learning (DL for short) apps perform tasks in areas as varied as finance, face recognition, and beauty. Model protection is a critical challenge for the ecosystem of DL apps: without adequate protection, on-device models can be easily stolen by competitors or maliciously modified by attackers. Given their value, leaking models can have dire financial and security consequences. In this paper, we examine model protection in practice and show that it is mostly absent or too weak. To this end, we develop and open-source ModelHunter, a tool that automatically analyzes Android apps and locates the DL models they embed. In our experiments, we find that proprietary frameworks are increasingly used, and that 10% of the models are protected using encryption. These findings are both concerning and encouraging: compared with previous studies, the trend toward protecting models is rising. However, protection mechanisms remain brittle, relying mainly on mere encryption that can easily be bypassed, or on the closed nature of proprietary frameworks that can easily be reverse-engineered, since little or no obfuscation is applied. We promptly reported our findings to the concerned parties.
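To illustrate the core idea behind a model scanner like ModelHunter (this is a minimal sketch under assumptions, not the actual ModelHunter implementation): an APK is a ZIP archive, so one can walk its entries and flag files whose extension or magic bytes match known DL frameworks. The extension list and heuristics below are illustrative; for example, TensorFlow Lite models are FlatBuffers carrying the file identifier `TFL3` at byte offset 4.

```python
import io
import zipfile

# Illustrative (non-exhaustive) set of common on-device model extensions.
MODEL_EXTENSIONS = (".tflite", ".onnx", ".pb", ".mnn", ".param", ".caffemodel")

def looks_like_tflite(data: bytes) -> bool:
    """TFLite models are FlatBuffers with file identifier 'TFL3' at offset 4."""
    return len(data) >= 8 and data[4:8] == b"TFL3"

def find_models(apk_bytes: bytes) -> list:
    """Return names of APK entries that look like DL models."""
    hits = []
    with zipfile.ZipFile(io.BytesIO(apk_bytes)) as apk:
        for name in apk.namelist():
            data = apk.read(name)
            if name.lower().endswith(MODEL_EXTENSIONS) or looks_like_tflite(data):
                hits.append(name)
    return hits

if __name__ == "__main__":
    # Build a toy "APK" in memory: one fake TFLite model, one ordinary asset.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("assets/model.tflite", b"\x00\x00\x00\x00TFL3" + b"\x00" * 16)
        z.writestr("assets/readme.txt", b"not a model")
    print(find_models(buf.getvalue()))  # -> ['assets/model.tflite']
```

A real scanner would additionally handle renamed or encrypted model files (e.g. by entropy analysis or by hooking the framework's model-loading APIs), which is exactly where the weak protections described above come into play.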
GitHub repository: https://github.com/Skyld-Labs/ModelHunter
Demo: https://vimeo.com/795252693