Computational medical imaging techniques aim to enhance the diagnostic performance of visual assessments in medical imaging, improve the early diagnosis of various diseases, and deepen our understanding of physiology and pathology, thereby advancing the field of Quantitative Radiology. To reach these goals, medical image computing and signal processing are commonly combined with biophysical models, which explicitly describe the organ or tissue under investigation.

Machine Learning (ML) models have revolutionised multiple tasks in medical image computing, such as image segmentation, registration and synthesis, through the extensive analysis of big imaging data. Although ML models outperform classic approaches on these tasks, they remain to a large extent implicit in terms of describing the data under investigation. This limits ML model interpretability, which in turn is one of the main barriers to ML-based pathology detection and to generalised single- or multi-modal ML analysis in medical imaging. Moreover, in modern clinical practice and settings, detailed explanations of model behaviour are increasingly required to support reliability and improve clinical decision making. Model explainability also becomes critical when data integration techniques are required to cross-assess learning performance from imaging data against mutual, complementary or “clinical reference standard” information from additional modalities (either imaging, or other types of biomedical/clinical data such as invasive methods or ex vivo analysis). To support the further development of ML models for clinical applications, model explainability is highly important for enhancing generalisability, trustworthiness, causality, transferability, informativeness, confidence, accessibility and interactivity. Last but not least, as one of the most promising topics in ML/medical imaging research, the main challenge in developing explainable models is to improve explainability whilst maintaining high learning performance.

The main objective of this special track is to attract original high-quality research and survey articles that reflect the most recent advances in explainable ML models in medical imaging (MRI, CT, PET, SPECT, Ultrasound and others), investigating novel methodologies through interpreting algorithm components and/or exploring algorithm-data relationships.

We welcome researchers from both academia and industry to present their state-of-the-art scientific developments, technologies and ideas covering all aspects of explainable ML models in medical imaging.

Topics

The topics include, but are not limited to:

  • Developing and interpreting ML models in single- or multi-modal imaging (MRI, CT, Ultrasound, PET, SPECT), using either single or multiple inputs and thus biophysical information
  • Improving explainability by combining ML with biophysical modelling and/or visual assessments from additional/complementary imaging modalities (e.g. multiple MRI sequences, or MRI combined with Ultrasound, CT, PET or SPECT)
  • Improving explainability by combining ML with other types of “reference standard” input data (e.g. clinical data, electrophysiology signals, molecular analysis, invasive methods) that can enhance ML interpretability in medical imaging
  • Enhancing explainability by combining multiple tasks (e.g. segmentation and/or image synthesis), including multi-task learning on multi-modal medical images
  • Enhancing explainability by incorporating graphical deep learning models for single- or multi-modal image analysis
  • Solidifying explainability in cross-domain image synthesis between different imaging modalities or sequences (e.g. between MRI sequences, or between MRI and CT)
  • Transfer learning and transferability for single- or multi-modal medical images
  • ML model explainability in semi-supervised, weakly-supervised and unsupervised learning in medical imaging
  • Post-hoc explainability techniques for ML models in single- or multi-modal imaging (see the sketch after this list)
  • Enhancing explainability by developing ML models that detect or predict pathological versus healthy status
  • Explaining the strengths and weaknesses of ML models through quantitative evaluation and interpretation of algorithm performance
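
To make the scope concrete, the sketch below illustrates one simple post-hoc explainability technique in scope: occlusion sensitivity applied to an image classifier. It is a minimal illustration in Python/PyTorch, not a reference implementation; the toy model, the random 2D “scan” and all parameter values are hypothetical placeholders.

    # Minimal sketch of a post-hoc explainability technique (occlusion
    # sensitivity). Model, input and parameters are illustrative only.
    import torch
    import torch.nn as nn

    def occlusion_saliency(model, image, target_class, patch=8, stride=8):
        """Slide a blank patch over the image and record the drop in the
        target-class score; large drops mark regions the model relies on."""
        model.eval()
        _, h, w = image.shape
        with torch.no_grad():
            base = model(image.unsqueeze(0))[0, target_class].item()
        heatmap = torch.zeros((h - patch) // stride + 1,
                              (w - patch) // stride + 1)
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, y:y + patch, x:x + patch] = 0.0  # occlude region
                with torch.no_grad():
                    score = model(occluded.unsqueeze(0))[0, target_class].item()
                heatmap[i, j] = base - score  # importance = score drop
        return heatmap

    # Toy usage with a random "scan" and an untrained CNN (illustration only).
    model = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(4, 2))
    scan = torch.rand(1, 64, 64)  # e.g. one 2D slice of a scan
    print(occlusion_saliency(model, scan, target_class=0).shape)

Occlusion sensitivity is shown here purely because it is model-agnostic and fits in a few lines; submissions may of course address any post-hoc method (e.g. gradient-based saliency, LIME, SHAP).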

Paper submission

  • Prospective authors are invited to submit papers in any of the topics listed above.
  • Instructions for preparing the manuscript (in Word and LaTeX formats) are available at: Call for Papers (main track)
  • Please also check the Guidelines.
  • Papers must be submitted electronically via the web-based submission system.

Organizers

  • Giorgos Papanastasiou, University of Essex, UK
  • Alba García Seco de Herrera, University of Essex, UK
  • Chengjia Wang, University of Edinburgh, UK
  • Heye Zhang, Sun Yat-sen University, China
  • Guang Yang, Imperial College London, UK

Program Committee

  • Sotirios A. Tsaftaris, University of Edinburgh, UK
  • Gabriele Valvano, IMT Lucca, Italy
  • Victor Gonzalez-Castro, University of Leon, Spain
  • Lin Gu, RIKEN AIP, University of Tokyo, Japan
  • Hao Dong, Peking University, China
  • Zhangming Niu, Aladdin Healthcare Technologies, Germany
  • Chunliang Wang, KTH Royal Institute of Technology, Sweden
  • Sivaramakrishnan Rajaraman, NIH/NLM, USA
  • Emanuele Trucco, University of Dundee, UK
  • George Matsopoulos, National Technical University of Athens, Greece
  • David Rodriguez Gonzalez, University of Cantabria, Spain
  • Sammy Danso, University of Edinburgh, UK
  • Xurui Jin, Duke Kunshan University, China-USA
  • Adrian Clark, University of Essex, UK
  • Adrian Martín Fernández, Pompeu Fabra University, Spain
  • Eirini Christinaki, KU Leuven, Belgium
  • Oscar Jiménez del Toro, University of Applied Sciences Western Switzerland, Switzerland
  • Rakkrit Duangsoithong, Prince of Songkla University, Thailand

Special Issue

Authors of accepted papers will be invited to submit an extended version of their work to a proposed journal special issue.

More Information

Official website: https://essexnlip.uk/cbms2021/

EasyChair platform for submissions: https://easychair.org/conferences/?conf=cbms2021