Can AI be truly trustworthy in healthcare? This article positions explainability as a key factor in ethically sound, AI-supported medical decision-making, emphasizing the need to balance practical explanations with thorough validation of AI decision-support systems in real-world clinical settings. Focusing on the intersection of patient care and AI implementation, it defines post hoc medical explainability as practical, non-exhaustive explanation that facilitates shared decision-making between physicians and patients within a specific clinical context. The authors acknowledge the inherent tension between the pressure to deploy AI quickly and the necessity of comprehensive validation, and they argue that combining validated AI systems with post hoc explanations can satisfy the explanatory needs of both physicians and patients. Such an approach supports the integration of retrospectively analyzed and prospectively validated AI systems into clinical practice, ultimately promoting transparency and trust in AI-supported medical decisions.
Published in Discover Artificial Intelligence, this article fits squarely within the journal's scope by exploring the ethical implications and practical applications of AI in healthcare. In examining the role of explainability in AI-supported medical decision-making, it contributes to the journal's broader focus on the intersection of AI with other domains.