Automatic segmentation of Organs at Risk in Head and Neck cancer patients from CT and MRI scans
Sébastien Quetin, Andrew Heschl, Mauricio Murillo, Rohit Murali, Piotr Pater, George Shenouda, Shirin A. Enger, Farhad Maleki
Published: 2024/5/17
Abstract
Purpose: To present a high-performing, robust, and flexible deep learning pipeline for automatic segmentation of 30 organs at risk (OARs) in head and neck (H&N) cancer patients from MRI, CT, or both.

Method: We trained a segmentation pipeline on paired CT and T1-weighted MRI scans from 296 patients, combining data from the H&N OAR CT and MR segmentation (HaN-Seg) challenge and the Burdenko and GLIS-RT datasets from The Cancer Imaging Archive (TCIA). Each MRI was rigidly registered to its corresponding CT, and the two scans were stacked as input channels to an nnU-Net pipeline. Left and right OARs were merged into single classes during training and separated at inference time based on anatomical position. Modality dropout was applied during training so that the model learned from both modalities and could robustly handle a missing modality at inference. The trained model was evaluated on the HaN-Seg test set and three TCIA datasets, and its predictions were compared with contours generated by the Limbus AI software. Dice score (DS) and Hausdorff distance (HD) were used as evaluation metrics.

Results: The pipeline achieved state-of-the-art performance on the HaN-Seg challenge, with a mean DS of 78.12% and a mean HD of 3.42 mm. On the TCIA datasets, the model maintained strong agreement with the Limbus AI software (DS: 77.43%, HD: 3.27 mm) while also flagging low-quality contours. The pipeline segments seamlessly from CT alone, MRI alone, or both.

Conclusion: The proposed pipeline achieved the best DS and HD among all HaN-Seg challenge participants, establishing a new state of the art for fully automated, multi-modal segmentation of H&N OARs.
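The abstract does not specify the registration toolkit or settings; the following is a minimal sketch of rigid MRI-to-CT registration using SimpleITK, with hypothetical metric and optimizer choices (Mattes mutual information, gradient descent), not the authors' actual configuration.

```python
import SimpleITK as sitk

def register_mri_to_ct(ct: sitk.Image, mri: sitk.Image) -> sitk.Image:
    """Rigidly register an MRI scan to a CT scan (generic sketch;
    metric and optimizer settings are illustrative assumptions)."""
    # Initialize a rigid (Euler) transform by aligning image centers.
    initial = sitk.CenteredTransformInitializer(
        ct, mri, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(sitk.Cast(ct, sitk.sitkFloat32),
                            sitk.Cast(mri, sitk.sitkFloat32))
    # Resample the MRI onto the CT grid so the two can be stacked as channels.
    return sitk.Resample(mri, ct, transform, sitk.sitkLinear,
                         0.0, mri.GetPixelID())
```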
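Implementation details of the modality dropout are not given in the abstract; a minimal sketch of the idea, assuming a two-channel (CT, MRI) input tensor in PyTorch and a hypothetical per-sample drop probability p, might look as follows.

```python
import torch

def modality_dropout(x: torch.Tensor, p: float = 0.3) -> torch.Tensor:
    """Randomly zero out one input modality per training sample.

    x: batch of stacked scans, shape (B, 2, D, H, W);
       channel 0 = CT, channel 1 = MRI (assumed channel order).
    p: probability of dropping a modality for a given sample
       (hypothetical value; the paper's setting is not stated here).
    """
    x = x.clone()
    for i in range(x.shape[0]):
        if torch.rand(1).item() < p:
            # Drop exactly one modality, chosen at random, so each
            # sample still contains at least one scan; this forces the
            # model to cope with a missing modality at inference.
            dropped = torch.randint(0, 2, (1,)).item()
            x[i, dropped] = 0.0
    return x
```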
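The anatomical-position rule used to separate merged bilateral OARs is likewise unspecified; one plausible sketch splits the connected components of a predicted mask by their centroid position relative to the image midline. The mapping of the last array axis to the patient's left-right direction is an assumption that depends on image orientation.

```python
import numpy as np
from scipy import ndimage

def split_left_right(mask: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split a merged bilateral-organ mask into two sided masks.

    mask: binary 3D array for one merged OAR class, with axes (z, y, x)
    so the last axis runs across the left-right direction (an assumption).
    """
    mask = mask.astype(bool)
    labeled, n = ndimage.label(mask)
    midline = mask.shape[-1] / 2
    left = np.zeros_like(mask)
    right = np.zeros_like(mask)
    for comp in range(1, n + 1):
        component = labeled == comp
        # Assign each connected component to a side based on where its
        # centroid falls relative to the image midline.
        centroid_x = ndimage.center_of_mass(component)[-1]
        if centroid_x < midline:
            left |= component
        else:
            right |= component
    return left, right
```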