Should AI models be explainable to clinicians?
Journal article, Critical Care, 2024


Abstract

In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, comprehending the rationale behind Artificial Intelligence (AI)-driven decisions appears essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of, and adherence to, its recommendations. “Explainable AI” (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps to meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet defining explainability and standardising its assessment remain ongoing challenges, and although XAI is a growing field, a trade-off between performance and explainability may still be required.
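The abstract does not prescribe a particular method, but a minimal sketch of one common model-agnostic XAI technique, permutation importance, illustrates what an “actionable insight” can look like in practice: shuffle one input feature at a time and measure how much the model's performance drops. The feature names and data below are hypothetical placeholders, not taken from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical ICU features (placeholders, not from the paper).
feature_names = ["heart_rate", "lactate", "mean_arterial_pressure"]
X = rng.normal(size=(500, 3))
# Synthetic outcome driven mostly by lactate, slightly by heart rate.
y = (1.5 * X[:, 1] + 0.5 * X[:, 0]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
baseline = model.score(X, y)

# Permutation importance: shuffling a feature breaks its link to the
# outcome; the larger the accuracy drop, the more the model relies on it.
for i, name in enumerate(feature_names):
    X_perm = X.copy()
    X_perm[:, i] = rng.permutation(X_perm[:, i])
    drop = baseline - model.score(X_perm, y)
    print(f"{name}: accuracy drop when shuffled = {drop:.3f}")

On this synthetic data the largest drop appears for lactate, matching how the outcome was generated; a clinician could read such a ranking as a first, coarse check that the model attends to plausible variables.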
Main file: s13054-024-05005-y.pdf (917.21 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04701022, version 1 (18-09-2024)

Identifiers

HAL Id: hal-04701022
DOI: 10.1186/s13054-024-05005-y

Cite

Gwénolé Abgrall, Andre L Holder, Zaineb Chelly Dagdia, Karine Zeitouni, Xavier Monnet. Should AI models be explainable to clinicians?. Critical Care, 2024, 28 (1), pp.301. ⟨10.1186/s13054-024-05005-y⟩. ⟨hal-04701022⟩