Blockchain-Enabled Explainable AI: A Framework for Verifiable and Trustworthy Machine Learning Interpretability

Authors

  • Irfan Muhammad, Coventry Business School, Faculty of Business and Law, Coventry University, Coventry, CV1 5FB, United Kingdom
  • Muhammad Tahir, Department of Business Management, TIMES Institute Multan, Multan, 60000, Pakistan

Keywords:

Explainable AI (XAI); Blockchain Technology; Trustworthy Machine Learning; Interpretability Verification; Smart Contracts; Decentralized Validation; Cryptographic Attestation; Immutable Ledger

Abstract

The increasing reliance on artificial intelligence (AI) and machine learning (ML) in critical decision-making domains such as healthcare, finance, banking, and autonomous systems has underscored the need for transparency, interpretability, and trustworthiness in AI models. While Explainable AI (XAI) techniques have made significant strides in providing human-understandable explanations for model predictions, a critical gap remains in ensuring that these explanations are verifiable, tamper-proof, and auditable. This paper introduces a novel framework that integrates blockchain technology with XAI to enhance the trustworthiness of machine learning interpretability. By leveraging blockchain’s inherent properties—immutability, decentralization, and cryptographic security—we propose a system where model explanations are securely recorded, validated, and audited in a transparent and tamper-resistant manner [1-3].

Our framework, Blockchain-Enabled Explainable AI (BE-XAI), employs smart contracts for automated logging of explanations, decentralized consensus mechanisms for validation, and cryptographic attestation to ensure the authenticity of interpretability results. We conduct extensive experiments on benchmark datasets, including UCI Adult Income, MNIST, and IMDB Sentiment Analysis, using diverse ML models such as Random Forest, CNN, and BERT, alongside popular XAI methods such as SHAP, LIME, and Integrated Gradients. The results demonstrate that BE-XAI preserves explanation integrity, mitigates risks of post-hoc manipulation, and provides a robust mechanism for auditability. The implications of this work are far-reaching, particularly in high-stakes applications where accountability and regulatory compliance are paramount.
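To make the attestation step concrete, the following is a minimal sketch (not the paper's actual implementation) of how an explanation record might be hashed and signed before being logged to a ledger. All names here (`attest_explanation`, `verify`, the signing key, the record fields) are illustrative assumptions; HMAC-SHA256 stands in for whatever signature scheme a real blockchain deployment would use.

```python
import hashlib
import hmac
import json
import time

# Placeholder for a real private signing key held by the model owner.
SECRET_KEY = b"model-owner-signing-key"

def attest_explanation(model_id: str, sample_id: str, attributions: dict) -> dict:
    """Build a tamper-evident record for one explanation (e.g. SHAP values)."""
    # Canonical serialization: sorted keys so the digest is reproducible.
    payload = json.dumps(
        {"model": model_id, "sample": sample_id, "attributions": attributions},
        sort_keys=True,
    ).encode()
    digest = hashlib.sha256(payload).hexdigest()
    # HMAC stands in for an on-chain signature scheme such as ECDSA.
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"digest": digest, "signature": signature, "timestamp": time.time()}

def verify(record: dict, model_id: str, sample_id: str, attributions: dict) -> bool:
    """An auditor recomputes the digest and compares it to the ledger record."""
    payload = json.dumps(
        {"model": model_id, "sample": sample_id, "attributions": attributions},
        sort_keys=True,
    ).encode()
    return hmac.compare_digest(record["digest"], hashlib.sha256(payload).hexdigest())

# Any later change to the attributions fails verification against the stored record.
record = attest_explanation("rf-adult-v1", "row-42", {"age": 0.31, "education": 0.12})
assert verify(record, "rf-adult-v1", "row-42", {"age": 0.31, "education": 0.12})
assert not verify(record, "rf-adult-v1", "row-42", {"age": 0.99, "education": 0.12})
```

Because only the digest and signature need to go on-chain, the explanation itself can stay off-chain while remaining auditable.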


Published

2025-05-14