Explainable Artificial Intelligence (XAI)

While machine/deep learning approaches have undeniable advantages, the more complex the model, the more explanations are needed to build trust in the outcomes and interpret the results, especially in medicine, healthcare, and neuroscience. This complexity raises questions of trust, bias, and interpretability, since machine/deep learning methods often behave as a "black box". XAI was born to make model behaviour comprehensible to humans, aiming to explain how the model reached a specific outcome, how the individual features contributed to it, and to what extent the model is confident in its decision (uncertainty).
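As a minimal illustration of feature attribution, consider a linear model, where each feature's contribution to a prediction can be read off directly: for f(x) = w·x + b, feature i contributes w_i * x_i, and the contributions sum exactly to f(x) − b. This additive decomposition is the intuition that model-agnostic XAI methods such as SHAP generalize to nonlinear models. The weights and input below are hypothetical values chosen purely for illustration.

```python
# Sketch: per-feature attribution for a linear model f(x) = w.x + b.
# Weights, bias, and input are hypothetical illustrative values.
weights = [0.8, -0.5, 0.3]   # learned weights (hypothetical)
bias = 0.1
x = [1.0, 2.0, 3.0]          # one input sample (hypothetical)

# Model prediction.
prediction = sum(w * xi for w, xi in zip(weights, x)) + bias

# Contribution of each feature: w_i * x_i.
attributions = [w * xi for w, xi in zip(weights, x)]

# Sanity check: attributions decompose the prediction additively.
assert abs(sum(attributions) - (prediction - bias)) < 1e-9
```

For nonlinear models this exact decomposition no longer holds, which is precisely why approximate attribution schemes (SHAP, LIME, saliency maps, and related methods) are needed.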


E. Meijering, V. D. Calhoun, G. Menegaz, D. J. Miller and J. C. Ye,
"Deep Learning in Biological Image and Signal Processing [From the Guest Editors],"
in IEEE Signal Processing Magazine, vol. 39, no. 2, pp. 24-26, March 2022, doi: 10.1109/MSP.2021.3134525.

G. Yang, A. Rao, C. Fernandez-Maloigne, V. Calhoun and G. Menegaz,
"Explainable AI (XAI) in Biomedical Signal and Image Processing: Promises and Challenges,"
2022 IEEE International Conference on Image Processing (ICIP), doi: 10.1109/ICIP46576.2022.9897629.

F. Cruciani, L. Brusini, M. Zucchelli, G. Retuci Pinheiro, F. Setti, I. Boscolo Galazzo, R. Deriche, L. Rittner, M. Calabrese and G. Menegaz,
"Interpretable deep learning as a means for decrypting disease signature in multiple sclerosis,"
Journal of Neural Engineering, vol. 18, no. 4.

A. Salih, I. Boscolo Galazzo, Z. Raisi-Estabragh, S. E. Petersen, P. Gkontra, K. Lekadir, G. Menegaz and P. Radeva,
"A new scheme for the assessment of the robustness of explainable methods applied to brain age estimation,"
2021 IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS).