Enhancing interpretability in brain tumor detection: Leveraging Grad-CAM and SHAP for explainable AI in MRI-based cancer diagnosis
Issue: Vol. 21 No. 3 (2025)
- Taming complexity: Generative doppelgangers for stochastic data trends in complex industrial manufacturing systems. Richard NASSO TOUMBA, Maxime MOAMISSOAL SAMUEL, Achille EBOKE, Boniface ONDO, Timothée KOMBE (pp. 1-22)
- Kidney disease diagnosis based on artificial intelligence/deep learning techniques. Abeer ALSHIHA, Abdalrahman QUBAA (pp. 23-37)
- Pulmonary diseases identification: Deep learning models and ensemble learning. Patrycja KWAŚNIEWSKA, Grzegorz ZIELIŃSKI, Paweł POWROŹNIK, Maria SKUBLEWSKA-PASZKOWSKA (pp. 38-58)
- A machine learning approach for evaluating drop impact reliability of solder joints in BGA packaging. Venkata Naga Chandana YANAMURTHY, Venu Kumar NATHI (pp. 59-71)
- Prediction of remaining useful life and downtime of induction motors with supervised machine learning. Muhammad Dzulfiqar ANINDHITO, SUHARJITO (pp. 72-86)
- An ensemble model for maternal health risk classification in Delta State, Nigeria. Oghenevabaire EFEVBERHA-OGODO, Francisca A. EGBOKHARE, Fidelis O. CHETE (pp. 87-98)
- Transforming ERP interfaces in production environments: An empirical evaluation using the User Experience Questionnaire. Anna HAMERA (pp. 99-116)
- Systematic drift characterization in differential wheeled robot using external VR tracking: Effects of route complexity and motion dynamics. Stanisław Piotr SKULIMOWSKI, Szymon RYBKA, Bartosz TATARA, Michał Dawid WELMAN (pp. 117-136)
- Wireless body area networks: A review of challenges, architecture, applications, technologies and interference mitigation for next-generation healthcare. Akeel Abdulraheem THULNOON, Ahmed Mahdi JUBAIR, Foad Salem MUBAREK, Senan Ali ABD (pp. 137-161)
- Fuzzy logic in arrhythmia detection: A systematic review of techniques, applications, and clinical interpretability. Nadjem Eddine MENACEUR, Sofia KOUAH, Derdour MEKHLOUF, Khaled OUANES, Meryam AMMI (pp. 162-181)
- Enhancing interpretability in brain tumor detection: Leveraging Grad-CAM and SHAP for explainable AI in MRI-based cancer diagnosis. Nasr GHARAIBEH (pp. 182-197)
- Noise source analysis of the nitrogen generation system. Grzegorz BARAŃSKI (pp. 198-209)
Abstract
This study aims to improve the interpretability of brain tumor detection by using explainable AI techniques, namely Grad-CAM and SHAP, alongside an Xception-based convolutional neural network (CNN). The model classifies brain MRI images into four categories (glioma, meningioma, pituitary tumor, and non-tumor), ensuring transparency and reliability for potential clinical applications. An Xception-based CNN was trained on a labeled dataset of brain MRI images. Grad-CAM then provided region-based visual explanations by highlighting the areas of the MRI scans that were most important for tumor classification, while SHAP quantified feature importance, offering a detailed understanding of model decisions. These complementary methods enhance model transparency and address potential biases. The model achieved accuracies of 99.95%, 99.08%, and 98.78% on the training, validation, and test sets, respectively. Grad-CAM effectively identified regions significant for different tumor types, while SHAP analysis provided insights into the importance of individual features. Together, these approaches confirmed the reliability and interpretability of the model, addressing key challenges in AI-driven medical diagnostics. Integrating Grad-CAM and SHAP with a high-performing CNN enhances the interpretability and trustworthiness of brain tumor detection systems. The findings underscore the potential of explainable AI to improve diagnostic accuracy and encourage the adoption of AI technologies in clinical practice.
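To make the workflow concrete, the sketch below shows how region-based Grad-CAM explanations of the kind described above can be produced for an Xception-based classifier in TensorFlow/Keras. It is a minimal illustration under stated assumptions, not the authors' implementation: the four-class head, the 299x299 input size (Xception's Keras default), and the choice of the layer "block14_sepconv2_act" (the final convolutional activation in Keras' Xception) are illustrative stand-ins.

    import numpy as np
    import tensorflow as tf

    # Hypothetical four-class head on an ImageNet-pretrained Xception backbone.
    base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                          input_shape=(299, 299, 3))
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    outputs = tf.keras.layers.Dense(4, activation="softmax")(x)
    model = tf.keras.Model(base.input, outputs)

    def grad_cam(model, image, conv_layer_name="block14_sepconv2_act"):
        """Heatmap of the regions that most influenced the predicted class."""
        conv_layer = model.get_layer(conv_layer_name)
        grad_model = tf.keras.Model(model.input, [conv_layer.output, model.output])
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(image[np.newaxis, ...])
            class_idx = int(tf.argmax(preds[0]))      # explain the top class
            class_score = preds[:, class_idx]
        grads = tape.gradient(class_score, conv_out)        # d(score) / d(feature maps)
        weights = tf.reduce_mean(grads, axis=(0, 1, 2))     # channel weights (GAP of grads)
        cam = tf.reduce_sum(conv_out[0] * weights, axis=-1) # weighted sum of feature maps
        cam = tf.nn.relu(cam)                               # keep positive evidence only
        return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalize to [0, 1]

Upsampling the returned heatmap to the input resolution and overlaying it on the scan reproduces the kind of region-level explanation the abstract describes.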
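A SHAP analysis of the same model can be sketched along similar lines, reusing the model from the previous snippet. GradientExplainer is one of several explainers in the shap package that accept Keras models; the random arrays below are placeholders for preprocessed MRI batches, since the study's background set and preprocessing are not specified here.

    import numpy as np
    import shap

    # Placeholders for preprocessed MRI batches (hypothetical data, not the
    # study's dataset): a background reference set and a few scans to explain.
    background = np.random.rand(20, 299, 299, 3).astype("float32")
    samples = np.random.rand(3, 299, 299, 3).astype("float32")

    # Expected-gradients explainer; returns one attribution map per class.
    explainer = shap.GradientExplainer(model, background)
    shap_values = explainer.shap_values(samples)

    # Overlay positive (red) and negative (blue) pixel contributions.
    shap.image_plot(shap_values, samples)

Averaging absolute SHAP values over a test set yields the kind of global feature-importance view the abstract refers to, complementing Grad-CAM's per-image heatmaps.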
References
Abd-Ellah, M. K., Awad, A. I., Khalaf, A. A., & Hamed, H. F. (2019). A review on brain tumor diagnosis from MRI images: Practical implications, key achievements, and lessons learned. Magnetic Resonance Imaging, 61, 300-318. https://doi.org/10.1016/j.mri.2019.05.028
Abunasser, B. S., Al-Hiealy, M. R. J., Zaqout, I. S., & Abu-Naser, S. S. (2023). Convolution neural network for breast cancer detection and classification using deep learning. Asian Pacific Journal of Cancer Prevention, 24(2), 531-544. https://doi.org/10.31557/APJCP.2023.24.2.531
Ahmed, M., Hossain, M., Islam, R., Ali, S., Nafi, A. A. N., Ahmed, F., Ahmed, K. M., Miah, S., Rahman, M., Niu, M., & Islam, K. (2024). Brain tumor detection and classification in MRI using hybrid ViT and GRU model with explainable AI in Southern Bangladesh. Scientific Reports, 14(1), 22797. https://doi.org/10.1038/s41598-024-71893-3
Ahmed, S., Nobel, S. N., & Ullah, O. (2023, February). An effective deep CNN model for multiclass brain tumor detection using MRI images and SHAP explainability. 2023 International Conference on Electrical, Computer and Communication Engineering (ECCE) (pp. 1-6). IEEE. https://doi.org/10.1109/ECCE57851.2023.10101503
Awaluddin, B. A., Chao, C. T., & Chiou, J. S. (2023). Investigating effective geometric transformation for image augmentation to improve static hand gestures with a pre-trained convolutional neural network. Mathematics, 11(23), 4783. https://doi.org/10.3390/math11234783
Babu Vimala, B., Srinivasan, S., Mathivanan, S. K., Mahalakshmi, Jayagopal, P., & Dalu, G. T. (2023). Detection and classification of brain tumors using hybrid deep learning models. Scientific Reports, 13, 23029. https://doi.org/10.1038/s41598-023-50505-6
Boitor, O., Stoica, F., Mihăilă, R., Stoica, L. F., & Stef, L. (2023). Automated machine learning to develop predictive models of metabolic syndrome in patients with periodontal disease. Diagnostics, 13(24), 3631. https://doi.org/10.3390/diagnostics13243631
Ejiyi, C., Qin, Z., Monday, M., Ejiyi, M. B., Ukwuoma, C., Ejiyi, T. U., Agbesi, V. K., Agu, A., & Orakwue, C. (2023). Breast cancer diagnosis and management guided by data augmentation, utilizing an integrated SHAP and random augmentation framework. BioFactors, 50(1), 114-134. https://doi.org/10.1002/biof.1995
Ghassemi, M., Oakden-Rayner, L., & Beam, A. (2021). The false hope of current approaches to explainable artificial intelligence in health care. The Lancet Digital Health, 3(11), e745-e750. https://doi.org/10.1016/S2589-7500(21)00208-9
Guo, J., & Dou, Q. (2023). Data enhancement method based on attention activation map. International Conference on Computer, Artificial Intelligence, and Control Engineering (CAICE 2023) (pp. 424-428). SPIE. https://doi.org/10.1117/12.2681048
Ishaq, A., Ullah, F., Hamandawana, P., Cho, D. J., & Chung, T. S. (2025). Improved EfficientNet architecture for multi-grade brain tumor detection. Electronics, 14(4), 710. https://doi.org/10.3390/electronics14040710
Islam, M. A., Mridha, M. F., Safran, M. S., Alfarhood, S., & Kabir, M. M. (2025). Revolutionizing brain tumor detection using explainable AI in MRI images. NMR in Biomedicine, 38(3), e70001. https://doi.org/10.1002/nbm.70001
Jinsakul, N., Tsai, C. F., Tsai, C. E., & Wu, P. (2019). Enhancement of deep learning in image classification performance using Xception with the swish activation function for colorectal polyp preliminary screening. Mathematics, 7(12), 1170. https://doi.org/10.3390/math7121170
Khan, M. S. I., Rahman, A., Debnath, T., Karim, M. R., Nasir, M. K., Band, S. S., Mosavi, A., & Dehzangi, I. (2022). Accurate brain tumor detection using deep convolutional neural network. Computational and Structural Biotechnology Journal, 20, 4733-4745. https://doi.org/10.1016/j.csbj.2022.08.039
Mandiya, R. E., Kongo, H. M., Kasereka, S. K., Kyandoghere, K., Tshakwanda, P. M., & Kasoro, N. M. (2024). Enhancing COVID-19 detection: An Xception-based model with advanced transfer learning from X-ray thorax images. Journal of Imaging, 10(3), 63. https://doi.org/10.3390/jimaging10030063
Mastoi, Q. U. A., Latif, S., Brohi, S., Ahmad, J., Alqhatani, A., Alshehri, M. S., Al Mazroa, A., & Ullah, R. (2025). Explainable AI in medical imaging: An interpretable and collaborative federated learning model for brain tumor classification. Frontiers in Oncology, 15, 1535478. https://doi.org/10.3389/fonc.2025.1535478
Nazir, I., Akter, A., Wadud, A. H., & Uddin, A. (2024). Utilizing customized CNN for brain tumor prediction with explainable AI. Heliyon, 10(20), e38997. https://doi.org/10.1016/j.heliyon.2024.e38997
Nhlapho, W., Atemkeng, M., Brima, Y., & Ndogmo, J. C. (2024). Bridging the gap: Exploring interpretability in deep learning models for brain tumor detection and diagnosis from MRI images. Information, 15(4), 182. https://doi.org/10.3390/info15040182
Nickparvar, M. (2024). Brain tumor MRI dataset. Kaggle. Retrieved May 16, 2025, from https://www.kaggle.com/datasets/masoudnickparvar/brain-tumor-mri-dataset
Noreen, N., Palaniappan, S., Qayyum, A., Ahmad, I., Imran, M., & Shoaib, M. (2020). A deep learning model based on a concatenation approach for diagnosing brain tumors. IEEE Access, 8, 55135-55144. https://doi.org/10.1109/ACCESS.2020.2978629
Özbay, F. A., & Özbay, E. (2023). Brain tumor detection with mRMR-based multimodal fusion of deep learning from MR images using Grad-CAM. Iran Journal of Computer Science, 6, 245-259. https://doi.org/10.1007/s42044-023-00137-w
Ponzi, V., & De Magistris, G. (2023). Exploring brain tumor segmentation and patient survival: An interpretable model approach. Preprint, 1–8.
Qiu, Z., Rivaz, H., & Xiao, Y. (2023). Is visual explanation with Grad-CAM more reliable for deeper neural networks? A case study with automatic pneumothorax diagnosis. ArXiv, abs/2308.15172. https://doi.org/10.48550/arXiv.2308.15172
Raghunath Mutkule, P., Sable, N. P., Mahalle, P. N., & Shinde, G. R. (2023). Predictive analytics algorithm for early brain tumor prevention using explainable artificial intelligence (XAI): A systematic review of the state-of-the-art. In P. N. Mahalle, G. R. Shinde, & P. M. Joshi (Eds.), IoT and Big Data Analytics (Vol. 4, pp. 69-83). Bentham Science Publishers. https://doi.org/10.2174/9789815179187123040007
Rahman, A., Masum, M. I., Hasib, K., Mridha, M., Alfarhood, S., Safran, M. S., & Che, D. (2024). GliomaCNN: An effective lightweight CNN model in assessment of classifying brain tumor from magnetic resonance images using explainable AI. Computer Modeling in Engineering & Sciences, 140(3), 2425-2448. https://doi.org/10.32604/cmes.2024.050760
Shamshad, N., Sarwar, D., Almogren, A., Saleem, K., Munawar, A., Rehman, A. U., & Bharany, S. (2024). Enhancing brain tumor classification by a comprehensive study on transfer learning techniques and model efficiency using MRI datasets. IEEE Access, 12, 100407-100418. https://doi.org/10.1109/ACCESS.2024.3430109
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(56), 1929-1958.
T. R., M., Gupta, M., T. A., A., Kumar, V. V., Geman, O., & Kumar, D. V. (2024). An XAI-enhanced EfficientNetB0 framework for precision brain tumor detection in MRI imaging. Journal of Neuroscience Methods, 410, 110227. https://doi.org/10.1016/j.jneumeth.2024.110227
Trivedi, U. B., Bhatt, M., & Srivastava, P. (2021). Prevent overfitting problem in machine learning: A case focus on linear and logistics regression. In P. K. Singh, Z. Polkowski, S. Tanwar, S. K. Pandey, G. Matei, & D. Pirvu (Eds.), Innovations in Information and Communication Technologies (IICT-2020) (pp. 345-349). Springer International Publishing. https://doi.org/10.1007/978-3-030-66218-9_40
Viswan, V., Shaffi, N., Mahmud, M., Subramanian, K., & Hajamohideen, F. (2023). Explainable artificial intelligence in Alzheimer's disease classification: A systematic review. Cognitive Computation, 16, 1-44. https://doi.org/10.1007/s12559-023-10192-x
License
This work is licensed under a Creative Commons Attribution 4.0 International License. All articles published in Applied Computer Science are open access and distributed under the terms of this license.