From Black Box to Clear Insights: Explainable AI in Malware Detection with MalConv2 — Adrien Laigle, Anthony Chesneau, Marie Salmon
Date: June 6, 2025 at 14:45 (30 min.)
Artificial Intelligence (AI) is transforming cybersecurity, especially in malware detection and classification. Despite their advancements, AI models are frequently perceived as “black boxes” due to their complexity. The growing field of Explainable AI (XAI) addresses this by making AI models more understandable and compliant with evolving regulations. This work focuses on the application of XAI techniques to MalConv2, a well-established model for malware detection. By revealing critical insights into how MalConv2 operates, this research enhances transparency and offers valuable details about its functionality. To ease such analysis, we release an IDA plugin that runs MalConv2 on a file and extracts the offsets the model considers most important, together with a ranking of the corresponding functions. The provided insights clarify the model’s decision-making process, empowering cybersecurity analysts to assess and mitigate threats more effectively. This work underscores the significance of transparency in AI systems, illustrating how improved understanding can enhance AI-driven cybersecurity efforts.
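For readers unfamiliar with how byte-level attributions can be turned into "important offsets", the sketch below illustrates one possible approach under stated assumptions: a simplified, untrained MalConv-like model and an occlusion pass that measures how much the malware score drops when each window of bytes is neutralized. The `TinyMalConv` class, the `rank_offsets` helper, and all hyperparameters are illustrative assumptions, not the authors' plugin or the actual MalConv2 implementation.

```python
# Minimal sketch (not the authors' plugin): occlusion-based attribution for a
# MalConv-style byte classifier. Architecture and hyperparameters are assumed
# for illustration; the real MalConv2 weights and code are not reproduced here.
import torch
import torch.nn as nn

PAD = 256  # extra embedding index used as the "neutral" baseline byte


class TinyMalConv(nn.Module):
    """Simplified MalConv-like net: byte embedding -> gated 1-D conv -> global max pool -> classifier."""

    def __init__(self, emb_dim=8, channels=128, kernel=512, stride=512):
        super().__init__()
        self.emb = nn.Embedding(257, emb_dim)            # 256 byte values + PAD
        self.conv = nn.Conv1d(emb_dim, channels, kernel, stride=stride)
        self.gate = nn.Conv1d(emb_dim, channels, kernel, stride=stride)
        self.fc = nn.Linear(channels, 1)

    def forward(self, x):                                # x: (batch, n_bytes) int64
        e = self.emb(x).transpose(1, 2)                  # (batch, emb_dim, n_bytes)
        h = torch.relu(self.conv(e)) * torch.sigmoid(self.gate(e))
        h = torch.max(h, dim=2).values                   # global max pooling over positions
        return torch.sigmoid(self.fc(h)).squeeze(1)      # malware probability


@torch.no_grad()
def rank_offsets(model, data: bytes, window=4096):
    """Replace each window of bytes with PAD and record the score drop; bigger drop = more important offset."""
    x = torch.tensor(list(data), dtype=torch.long).unsqueeze(0)
    base = model(x).item()
    scores = []
    for off in range(0, x.shape[1], window):
        occluded = x.clone()
        occluded[0, off:off + window] = PAD
        scores.append((off, base - model(occluded).item()))
    return sorted(scores, key=lambda s: s[1], reverse=True)


if __name__ == "__main__":
    model = TinyMalConv().eval()                         # untrained stand-in for MalConv2
    sample = bytes(range(256)) * 64                      # fake "file" contents
    for offset, drop in rank_offsets(model, sample)[:5]:
        print(f"offset 0x{offset:06x}  score drop {drop:+.4f}")
```

In a real workflow the ranked offsets would be mapped back to the functions or sections that contain them inside the disassembler, which is the kind of view the released IDA plugin is described as providing.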