A review of Explainable Artificial Intelligence in healthcare

| Field | Value | |
| --- | --- | --- |
| dc.authorid | Moosaei, Hossein/0000-0002-0640-2161 | |
| dc.authorid | Hladik, Milan/0000-0002-7340-8491 | |
| dc.authorid | Alizadehsani, Roohallah/0000-0003-0898-5054 | |
| dc.contributor.author | Sadeghi, Zahra | |
| dc.contributor.author | Alizadehsani, Roohallah | |
| dc.contributor.author | Cifci, Mehmet Akif | |
| dc.contributor.author | Kausar, Samina | |
| dc.contributor.author | Rehman, Rizwan | |
| dc.contributor.author | Mahanta, Priyakshi | |
| dc.contributor.author | Bora, Pranjal Kumar | |
| dc.date.accessioned | 2025-07-03T21:26:40Z | |
| dc.date.issued | 2024 | |
| dc.department | Balıkesir Üniversitesi | |
| dc.description.abstract | Explainable Artificial Intelligence (XAI) encompasses the strategies and methodologies used in constructing AI systems that enable end-users to comprehend and interpret the outputs and predictions made by AI models. The increasing deployment of opaque AI applications in high-stakes fields, particularly healthcare, has amplified the need for clarity and explainability. This stems from the potential high-impact consequences of erroneous AI predictions in such critical sectors. The effective integration of AI models in healthcare hinges on the capacity of these models to be both explainable and interpretable. Gaining the trust of healthcare professionals necessitates AI applications to be transparent about their decision-making processes and underlying logic. Our paper conducts a systematic review of the various facets and challenges of XAI within the healthcare realm. It aims to dissect a range of XAI methodologies and their applications in healthcare, categorizing them into six distinct groups: feature-oriented methods, global methods, concept models, surrogate models, local pixel-based methods, and human-centric approaches. Specifically, this study focuses on the significance of XAI in addressing healthcare-related challenges, underscoring its vital role in safety-critical scenarios. Our objective is to provide an exhaustive exploration of XAI's applications in healthcare, alongside an analysis of relevant experimental outcomes, thereby fostering a holistic understanding of XAI's role and potential in this critical domain. | |
| dc.description.sponsorship | Czech Science Foundation [22-11117S, 22-19353S] | |
| dc.description.sponsorship | The work of H. Moosaei was supported by the Czech Science Foundation Grant 22-19353S. Milan Hladik was supported by the Czech Science Foundation Grant 22-11117S. The work of P.M. Pardalos was conducted within the framework of the Basic Research Program at the National Research University Higher School of Economics (HSE). | |
| dc.identifier.doi | 10.1016/j.compeleceng.2024.109370 | |
| dc.identifier.issn | 0045-7906 | |
| dc.identifier.issn | 1879-0755 | |
| dc.identifier.scopusquality | Q1 | |
| dc.identifier.uri | https://doi.org/10.1016/j.compeleceng.2024.109370 | |
| dc.identifier.uri | https://hdl.handle.net/20.500.12462/21847 | |
| dc.identifier.volume | 118 | |
| dc.identifier.wos | WOS:001252113300001 | |
| dc.identifier.wosquality | Q1 | |
| dc.indekslendigikaynak | Web of Science | |
| dc.language.iso | en | |
| dc.publisher | Pergamon-Elsevier Science Ltd | |
| dc.relation.ispartof | Computers & Electrical Engineering | |
| dc.relation.publicationcategory | Other | |
| dc.rights | info:eu-repo/semantics/openAccess | |
| dc.snmz | KA_WOS_20250703 | |
| dc.subject | Explainable AI | |
| dc.subject | Transparent AI | |
| dc.subject | Interpretability | |
| dc.subject | Healthcare | |
| dc.title | A review of Explainable Artificial Intelligence in healthcare | |
| dc.type | Review Article | |