Enhancing interpretability in brain tumor detection: Leveraging Grad-CAM and SHAP for explainable AI in MRI-based cancer diagnosis


DOI: https://doi.org/10.35784/acs_7375

Nasr GHARAIBEH

nas@bau.edu.jo

Abstract

This study aims to improve the interpretability of brain tumor detection by using explainable AI techniques, namely Grad-CAM and SHAP, alongside an Xception-based convolutional neural network (CNN). The model classifies brain MRI images into four categories (glioma, meningioma, pituitary tumor, and non-tumor), ensuring transparency and reliability for potential clinical applications. An Xception-based CNN was trained on a labeled dataset of brain MRI images. Grad-CAM then provided region-based visual explanations by highlighting the areas of the MRI scans that were most important for tumor classification, while SHAP quantified feature importance, offering a detailed understanding of the model's decisions. These complementary methods enhance model transparency and help address potential biases. The model achieved accuracies of 99.95%, 99.08%, and 98.78% on the training, validation, and test sets, respectively. Grad-CAM effectively identified regions significant for different tumor types, and SHAP analysis provided insights into the importance of individual features. Together, these approaches confirmed the reliability and interpretability of the model, addressing key challenges in AI-driven medical diagnostics. Integrating Grad-CAM and SHAP with a high-performing CNN enhances the interpretability and trustworthiness of brain tumor detection systems. The findings underscore the potential of explainable AI to improve diagnostic accuracy and to encourage the adoption of AI technologies in clinical practice.
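
The paper's code is not reproduced on this page. As a rough illustration of the pipeline the abstract describes, the sketch below builds an Xception-based four-class classifier and computes a Grad-CAM heatmap for one scan. It is an assumption-laden example, not the authors' implementation: the framework (TensorFlow/Keras), the class labels, the frozen-backbone transfer-learning head, and the choice of `block14_sepconv2_act` as the target convolutional layer are all illustrative choices.

```python
# Minimal sketch (not the paper's code): Xception-based 4-class MRI classifier
# plus a Grad-CAM heatmap. Class names, dropout rate, and the frozen backbone
# are assumptions for illustration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

CLASS_NAMES = ["glioma", "meningioma", "pituitary", "no_tumor"]  # assumed label order
IMG_SIZE = (299, 299)  # Xception's native input resolution

def build_model(num_classes: int = 4) -> tf.keras.Model:
    """Xception backbone with a small classification head (transfer learning)."""
    base = Xception(weights="imagenet", include_top=False,
                    input_shape=(*IMG_SIZE, 3))
    base.trainable = False  # the paper's fine-tuning strategy may differ
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dropout(0.3)(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(base.input, out)

def grad_cam(model: tf.keras.Model, image: np.ndarray,
             conv_layer: str = "block14_sepconv2_act") -> np.ndarray:
    """Grad-CAM: weight the last conv feature maps by the gradient of the
    predicted class score, sum over channels, ReLU, and normalize."""
    grad_model = models.Model(
        model.input, [model.get_layer(conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = tf.reduce_max(preds, axis=1)    # score of the predicted class
    grads = tape.gradient(class_score, conv_out)      # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))   # global-average-pool the grads
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

model = build_model()
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # dataset not included here

# Example usage (image: float32 array of shape (299, 299, 3), preprocessed with
# tf.keras.applications.xception.preprocess_input):
#   heatmap = grad_cam(model, image)  # coarse 10x10 map; upsample to overlay on the scan
```

For the SHAP half of the analysis, a gradient-based image explainer such as `shap.GradientExplainer(model, background_images)`, queried with `shap_values(test_images)`, yields per-pixel attributions comparable to the feature-importance results the abstract mentions; the explainer variant and background sample the study actually used are not specified here.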

Keywords:

Brain MRI classification, Xception architecture, Explainable AI, Grad-CAM, SHAP interpretability


Article Details

GHARAIBEH, N. (2025). Enhancing interpretability in brain tumor detection: Leveraging Grad-CAM and SHAP for explainable AI in MRI-based cancer diagnosis. Applied Computer Science, 21(3), 182–197. https://doi.org/10.35784/acs_7375