A review of Explainable Artificial Intelligence in healthcare

dc.authoridMoosaei, Hossein/0000-0002-0640-2161
dc.authoridHladik, Milan/0000-0002-7340-8491
dc.authoridAlizadehsani, Roohallah/0000-0003-0898-5054
dc.contributor.authorSadeghi, Zahra
dc.contributor.authorAlizadehsani, Roohallah
dc.contributor.authorCifci, Mehmet Akif
dc.contributor.authorKausar, Samina
dc.contributor.authorRehman, Rizwan
dc.contributor.authorMahanta, Priyakshi
dc.contributor.authorBora, Pranjal Kumar
dc.date.accessioned2025-07-03T21:26:40Z
dc.date.issued2024
dc.departmentBalıkesir Üniversitesi
dc.description.abstractExplainable Artificial Intelligence (XAI) encompasses the strategies and methodologies used in constructing AI systems that enable end-users to comprehend and interpret the outputs and predictions made by AI models. The increasing deployment of opaque AI applications in high-stakes fields, particularly healthcare, has amplified the need for clarity and explainability. This stems from the potential high-impact consequences of erroneous AI predictions in such critical sectors. The effective integration of AI models in healthcare hinges on the capacity of these models to be both explainable and interpretable. Gaining the trust of healthcare professionals necessitates AI applications to be transparent about their decision-making processes and underlying logic. Our paper conducts a systematic review of the various facets and challenges of XAI within the healthcare realm. It aims to dissect a range of XAI methodologies and their applications in healthcare, categorizing them into six distinct groups: feature-oriented methods, global methods, concept models, surrogate models, local pixel-based methods, and human-centric approaches. Specifically, this study focuses on the significance of XAI in addressing healthcare-related challenges, underscoring its vital role in safety-critical scenarios. Our objective is to provide an exhaustive exploration of XAI's applications in healthcare, alongside an analysis of relevant experimental outcomes, thereby fostering a holistic understanding of XAI's role and potential in this critical domain.
dc.description.sponsorshipCzech Science Foundation [22-11117S, 22-19353S]
dc.description.sponsorshipThe work of H. Moosaei was supported by the Czech Science Foundation Grant 22-19353S. Milan Hladik was supported by the Czech Science Foundation Grant 22-11117S. The work of P.M. Pardalos was conducted within the framework of the Basic Research Program at the National Research University Higher School of Economics (HSE).
dc.identifier.doi10.1016/j.compeleceng.2024.109370
dc.identifier.issn0045-7906
dc.identifier.issn1879-0755
dc.identifier.scopusqualityQ1
dc.identifier.urihttps://doi.org/10.1016/j.compeleceng.2024.109370
dc.identifier.urihttps://hdl.handle.net/20.500.12462/21847
dc.identifier.volume118
dc.identifier.wosWOS:001252113300001
dc.identifier.wosqualityQ1
dc.indekslendigikaynakWeb of Science
dc.language.isoen
dc.publisherPergamon-Elsevier Science Ltd
dc.relation.ispartofComputers & Electrical Engineering
dc.relation.publicationcategoryOther
dc.rightsinfo:eu-repo/semantics/openAccess
dc.snmzKA_WOS_20250703
dc.subjectExplainable AI
dc.subjectTransparent AI
dc.subjectInterpretability
dc.subjectHealthcare
dc.titleA review of Explainable Artificial Intelligence in healthcare
dc.typeReview Article