Comparative analysis of interpretable artificial intelligence methods

Sustainable Development Goals (SDG)

  • Quality education
  • Industry, innovation and infrastructure
Aleksandra Kuszewska

s95466@pollub.edu.pl

https://orcid.org/0009-0004-3781-4267
Małgorzata Charytanowicz

m.charytanowicz@pollub.pl

https://orcid.org/0000-0002-1956-3941

Abstract


The aim of this article is to analyze and compare methods for explaining the predictions of artificial intelligence models. Three methods were examined: Grad-CAM, SHAP, and LIME, evaluated for their effectiveness across different data types. The analysis used five datasets: Iris, Wine Quality, Brain Tumor Dataset, PHCD, and WheatGrain. Two of the datasets are tabular, two consist of images, and one is mixed. SHAP and LIME were applied to the tabular datasets, while all three methods were used for the image data. Grad-CAM proved the fastest and most effective at localizing key image regions; SHAP was slower but more accurate in pixel-level attribution, and LIME achieved the lowest precision. For tabular data, SHAP provided more accurate and consistent explanations than LIME, especially on high-dimensional datasets.


Keywords:

Explainable Artificial Intelligence (XAI), Grad-CAM, SHAP, LIME, deep learning interpretability, visual explanations

Article Details

Kuszewska, A., & Charytanowicz, M. (2026). Comparative analysis of interpretable artificial intelligence methods. Journal of Computer Sciences Institute, 38, 51–58. https://doi.org/10.35784/jcsi.8438