
Information Fusion. 2022 Jan;77:29-52. doi: 10.1016/j.inffus.2021.07.016

Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond


Guang Yang 1,2,3, Qinghao Ye 4,5, Jun Xia 6

Affiliations

  • 1 National Heart and Lung Institute, Imperial College London, London, UK.
  • 2 Royal Brompton Hospital, London, UK.
  • 3 Imperial Institute of Advanced Technology, Hangzhou, China.
  • 4 Hangzhou Ocean's Smart Boya Co., Ltd, China.
  • 5 University of California, San Diego, La Jolla, CA, USA.
  • 6 Radiology Department, Shenzhen Second People's Hospital, Shenzhen, China.
  DOI: 10.1016/j.inffus.2021.07.016 PMID: 34980946

    Abstract

    Explainable Artificial Intelligence (XAI) is an emerging research topic in machine learning that aims to unbox how the black-box decisions of AI systems are made. This research field inspects the measures and models involved in decision-making and seeks solutions to explain them explicitly. Many machine learning algorithms cannot reveal how and why a decision has been made; this is particularly true of the most popular deep neural network approaches currently in use. Consequently, our confidence in AI systems can be hindered by the lack of explainability in these black-box models. Although deep neural networks can in general deliver impressive performance, XAI is becoming increasingly crucial for deep learning powered applications, especially in medical and healthcare studies. The insufficient explainability and transparency of most existing AI systems is one of the major reasons why successful implementation and integration of AI tools into routine clinical practice remain uncommon. In this study, we first surveyed the current progress of XAI, and in particular its advances in healthcare applications. We then introduced our solutions for XAI leveraging multi-modal and multi-centre data fusion, and subsequently validated them in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses demonstrate the efficacy of the proposed XAI solutions, from which we envisage successful applications in a broader range of clinical questions.
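    To make the abstract's idea of "unboxing" a deep model concrete, the sketch below shows one common post-hoc explanation technique, Grad-CAM-style class activation mapping, applied to an image classifier. This is an illustrative assumption, not the authors' multi-modal, multi-centre pipeline: the ResNet-18 backbone, the choice of `layer4[-1]` as target layer, and the random tensor standing in for a preprocessed CT/X-ray slice are all placeholders.

```python
# Minimal Grad-CAM-style sketch (illustrative only, not the paper's method).
# It highlights which image regions most influence the predicted class.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)   # stand-in for a medical-image classifier
model.eval()
target_layer = model.layer4[-1]         # last conv block of the backbone

feats, grads = {}, {}

def fwd_hook(module, inputs, output):
    feats["act"] = output.detach()      # activations of the target layer

def bwd_hook(module, grad_input, grad_output):
    grads["act"] = grad_output[0].detach()  # gradients w.r.t. those activations

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)         # placeholder for a preprocessed CT/X-ray slice
logits = model(x)
cls = int(logits.argmax(dim=1))         # explain the predicted class
model.zero_grad()
logits[0, cls].backward()

# Weight each activation map by its average gradient, ReLU, upsample, normalise.
weights = grads["act"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * feats["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # torch.Size([1, 1, 224, 224]) heat map, same size as the input
```

    In practice such a heat map is overlaid on the original scan so that a clinician can judge whether the model attends to clinically plausible regions; the paper's showcases pursue the same goal with richer multi-modal and multi-centre fusion.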

    Keywords: Explainable AI; Information fusion; Medical image analysis; Multi-domain information fusion; Weakly supervised learning.

    Copyright © Information Fusion.

    Journal: Information Fusion

    Abbreviation: INFORM FUSION

    ISSN: 1566-2535

    e-ISSN: 1872-6305

    Impact factor / quartile: 14.7 / Q1

