(a) Scientific basis of the project
Radiotherapy (RT) is a key curative treatment modality for Head and Neck Cancer (HNC) but, despite technological advances, unavoidable irradiation of nearby normal tissues leads to significant side effects. These toxicities are inter-related, with patients often experiencing more than one type. Multi-toxicity prediction models built on multimodal data can therefore more accurately reflect the reality and interplay of side effects, potentially enabling more effective damage-limiting approaches that are essential to the personalisation of HNC RT.
(b) Skills development
The student will gain useful knowledge of clinical workflows and bottlenecks by working with their clinical supervisor on clinical data curation and verification. They will gain technical skills by working with their technical supervisor and his team on AI model development and evaluation, particularly related to the use of AI multimodal fusion strategies and interpretability techniques. They will also gain quantitative analysis and statistical skills through analysis of model performance and possible bias. Finally, they will become familiar with AI development standards such as the TRIPOD+AI guidance, which will be adhered to in the project.
(c) Overarching aims of the project
The primary aim is to develop AI-based multi-toxicity prediction models using clinical and demographic patient data and image-based data (radiation dose maps, annotated CT images). The models will be evaluated on an external cohort to assess generalisability and, if successful, establish a route to clinical deployment.
(d) Specific, measurable objectives
Year 1: Curate and prepare clinical data for model training, including clinical/dosimetric variables and imaging data/dose maps. Develop initial baseline multi-toxicity prediction models from clinical/dosimetric variables using, e.g., multivariable logistic regression (MVLR), Random Forests (RF), Tabular Prior-data Fitted Networks (TabPFN) and neural networks.
Year 2: Develop a multimodal deep learning model combining clinical variables with imaging/dose data, using early, intermediate and late fusion techniques. Complete internal and external validation of the model.
Year 3: Complete analysis of the models' demographic bias using metrics such as the performance gap. Investigate interpretability techniques such as Grad-CAM and SHAP.
Year 4: Develop a decision-support prototype for clinical deployment and perform a "shadow mode" prospective evaluation.
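As a sketch of the Year 1 baseline, the simplest of the listed approaches (multivariable logistic regression, one model per toxicity endpoint) could look as follows. The cohort, features and toxicity endpoints here are synthetic and purely illustrative, not the project's real data:

```python
# Minimal sketch: independent multivariable logistic regression baselines,
# one per toxicity endpoint, trained by gradient descent on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clinical/dosimetric features (e.g. age, mean organ dose).
n_patients, n_features = 200, 4
X = rng.normal(size=(n_patients, n_features))

# Two hypothetical toxicity endpoints, generated from known weights so
# the fit can be sanity-checked.
true_w = np.array([[1.5, -1.0, 0.0, 0.5],
                   [0.0, 2.0, -1.5, 0.0]])
probs = 1.0 / (1.0 + np.exp(-(X @ true_w.T)))
Y = (rng.uniform(size=probs.shape) < probs).astype(float)

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain gradient-descent fit of one logistic regression model."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# One baseline model per toxicity; a multi-task network would instead
# share a representation across endpoints.
models = [fit_logistic(X, Y[:, k]) for k in range(Y.shape[1])]
for k, (w, b) in enumerate(models):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    print(f"toxicity {k}: training accuracy {np.mean((p > 0.5) == Y[:, k]):.2f}")
```

In practice a library implementation (e.g. scikit-learn) would replace the hand-rolled fit, and RF/TabPFN baselines would be compared on the same feature set.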
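The early, intermediate and late fusion strategies planned for Year 2 differ only in where the modalities are combined. A toy numpy sketch, with illustrative dimensions and random linear "encoders" standing in for the real tabular and image networks:

```python
# Toy sketch of three multimodal fusion strategies. Dimensions and weights
# are illustrative assumptions; in the project the encoders would be, e.g.,
# a CNN over dose maps/CT and a tabular network over clinical variables.
import numpy as np

rng = np.random.default_rng(1)
clinical = rng.normal(size=8)        # hypothetical tabular features
dose_map = rng.normal(size=(4, 4))   # hypothetical 2D dose-map patch

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Early fusion: concatenate raw inputs, then a single model ---
early_in = np.concatenate([clinical, dose_map.ravel()])
p_early = sigmoid(early_in @ rng.normal(size=early_in.size))

# --- Intermediate fusion: encode each modality, fuse the embeddings ---
enc_clin = np.tanh(clinical @ rng.normal(size=(8, 3)))
enc_dose = np.tanh(dose_map.ravel() @ rng.normal(size=(16, 3)))
p_mid = sigmoid(np.concatenate([enc_clin, enc_dose]) @ rng.normal(size=6))

# --- Late fusion: separate per-modality models, combine predictions ---
p_clin = sigmoid(clinical @ rng.normal(size=8))
p_dose = sigmoid(dose_map.ravel() @ rng.normal(size=16))
p_late = 0.5 * (p_clin + p_dose)

print(p_early, p_mid, p_late)
```

The design trade-off the project would explore: early fusion lets the model learn cross-modal interactions from the start but is sensitive to dimensionality imbalance; late fusion is robust but cannot model interactions; intermediate fusion sits between the two.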
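The performance-gap metric mentioned for Year 3 is simply the spread of a performance score across demographic subgroups. A minimal sketch using accuracy as the score; the groups, labels and predictions are made up for illustration:

```python
# Minimal sketch of a performance-gap bias metric: the difference between
# the best and worst per-subgroup score (here, accuracy). Groups and data
# below are illustrative, not from the project cohort.

def performance_gap(y_true, y_pred, group):
    """Return (max - min) per-group accuracy and the per-group scores."""
    scores = {}
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        scores[g] = correct / len(idx)
    return max(scores.values()) - min(scores.values()), scores

# Toy example: the model is more accurate for group "A" than group "B".
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, per_group = performance_gap(y_true, y_pred, group)
print(per_group, gap)  # A: 1.0, B: 0.25, gap 0.75
```

In the project a threshold-free score such as AUC would likely replace accuracy, but the gap computation is the same.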
(e) Potential 3-month rotation project
The student will use pre-curated patient data to develop and evaluate a single toxicity model for HNC patients. This will build upon published work by the two joint supervisors and investigate different machine and deep learning approaches for prediction.
Multimodal AI-based Multi-Toxicity Predictive Models in Head and Neck Cancer Patients Treated with Radiotherapy
Representative Publications
1. T. Young, J. A. Yeung, K. Sambasivan, D. Adjogatse, A. Kong, I. Petkar, M. R. Ferreira, M. Lei, A. King, J. Teo, T. Guerrero Urbano, "Natural Language Processing to Extract Head and Neck Cancer Data from Unstructured Electronic Health Records", Clinical Oncology, 41, 2025. (https://doi.org/10.1016/j.clon.2025.103805)
2. L. Humbert-Vidan, C. R. Hansen, V. Patel, J. Johansen, A. P. King, T. Guerrero Urbano, "External validation of a multimodality deep-learning ORN NTCP model trained on 3D radiation distribution maps and clinical variables", Physics and Imaging in Radiation Oncology, 2024. (https://doi.org/10.1016/j.phro.2024.100668)
3. L. Humbert-Vidan, C. R. Hansen, C. D. Fuller, S. Petite, A. van der Schaaf, L. V. van Dijk, G. Verduijn, H. Langendijk, C. M. Montplet, W. Heemsbergen, M. Witjes, A. S. R. Mohamed, A. A. Khan, J. M. Querol, I. O. Cancio, V. Patel, A. P. King, J. Johansen, T. Guerrero Urbano, "Protocol Letter: A multi-institutional retrospective case-control cohort investigating PREDiction models for mandibular OsteoRadioNecrosis in head and neck cancer (PREDMORN)", Radiotherapy and Oncology, 2022. (https://doi.org/10.1016/j.radonc.2022.09.014)
4. E. Puyol-Antón, B. S. Sidhu, J. Gould, B. Porter, M. K. Elliott, V. Mehta, C. A. Rinaldi, A. P. King, "A Multimodal Deep Learning Model for Cardiac Resynchronisation Therapy Response Prediction", Medical Image Analysis, 79:102465, 2022. (https://doi.org/10.1016/j.media.2022.102465)
5. T. Dawood, C. Chen, B. S. Sidhu, B. Ruijsink, J. Gould, B. Porter, M. K. Elliot, V. Mehta, C. A. Rinaldi, E. Puyol-Antón, R. Razavi, A. P. King, “Uncertainty Aware Training to Improve Deep Learning Model Calibration for Classification of Cardiac MR Images”, Medical Image Analysis, 80, 2023. (https://doi.org/10.1016/j.media.2023.102861)
6. E. Puyol-Antón, B. Ruijsink, J. Mariscal-Harana, S. K. Piechnik, S. Neubauer, S. E. Petersen, R. Razavi, P. Chowienczyk, A. P. King, “Fairness in Cardiac Magnetic Resonance Imaging: Assessing Sex and Racial Bias in Deep Learning-Based Segmentation”, Frontiers in Cardiovascular Medicine, 9, 2022. (https://doi.org/10.3389/fcvm.2022.859310)