The MHub.ai project at Harvard has developed methods to execute machine learning models on medical images in an easy-to-use and standardized way. A Slicer plugin for running MHub.ai-format models already exists. For this project, we propose to add two models of different types to the MHub library.
Objective A. Test a MONAI-based deep learning model in MHub and validate the instructions for new developers to follow.
Objective B. Evaluate how well the MHub approach works for supporting pathology models in addition to radiology models.
Step 1. Port one of the pre-trained MONAI Auto3DSeg radiology models developed at Queen's University (by Andras Lasso et al.) for execution in the MHub framework as a Docker container. Test the MHub I/O converters to read the input DICOM image and reformat it as needed, then write out a DICOM Segmentation object as the result (the inference step these converters wrap is sketched after Step 2).
Step 2. Start converting a published pathology DNN model (rhabdomyosarcoma segmentation) for the MHub framework. This will evaluate how well the MHub approach supports pathology models in addition to radiology models; for example, can the same base Docker image work for pathology?
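For Step 1, the core of the port is a runner that sits between MHub's converter modules: MHub's own converters handle reading the DICOM series and encoding the result as a DICOM Segmentation object, while the runner performs the MONAI inference. The sketch below shows that inference step only. The SegResNet architecture, checkpoint path, ROI size, and two-label output are placeholder assumptions (not taken from the proposal or the trained model), and intensity preprocessing is omitted for brevity.

```python
# Minimal sketch of the Step 1 inference step. Assumptions: the SegResNet
# architecture, checkpoint path, ROI size, and two-label output are
# placeholders for the actual Auto3DSeg-trained model; preprocessing
# transforms are omitted for brevity.
import SimpleITK as sitk
import torch
from monai.inferers import sliding_window_inference
from monai.networks.nets import SegResNet


def segment(input_nifti: str, output_nifti: str, checkpoint: str) -> None:
    # Read the volume that MHub's converter produced from the input DICOM series.
    image = sitk.ReadImage(input_nifti)
    array = sitk.GetArrayFromImage(image)                 # (z, y, x)
    tensor = torch.from_numpy(array).float()[None, None]  # add batch/channel dims

    # Placeholder network and weights; replace with the trained Auto3DSeg model.
    model = SegResNet(spatial_dims=3, in_channels=1, out_channels=2)
    model.load_state_dict(torch.load(checkpoint, map_location="cpu"))
    model.eval()

    with torch.no_grad():
        logits = sliding_window_inference(
            tensor, roi_size=(96, 96, 96), sw_batch_size=1, predictor=model
        )
    labels = torch.argmax(logits, dim=1)[0].numpy().astype("uint8")

    # Write a labelmap aligned with the input geometry; MHub's segmentation
    # converter can then encode it as a DICOM Segmentation object.
    seg = sitk.GetImageFromArray(labels)
    seg.CopyInformation(image)
    sitk.WriteImage(seg, output_nifti)
```

In the MHub port, this logic would presumably live inside the model's runner module, with the input and output file paths supplied by the framework's workflow configuration rather than passed in directly.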
MONAI Auto3DSeg: https://github.com/Project-MONAI/tutorials/tree/main/auto3dseg
Slicer Extension: https://github.com/lassoan/SlicerMONAIAuto3DSeg
Pathology model: https://github.com/knowledgevis/rms-infer-code-standalone