Researchers at the University of Waterloo in Canada have developed an AI-guided computational tool that can predict how a brain tumor will grow and spread based on magnetic resonance imaging (MRI) data. This model could help physicians personalize treatments for brain cancer patients.
Glioblastoma multiforme (GBM) is one of the most common forms of brain cancer, and only a minority of patients with GBM survive for longer than two years. GBM tumors are typically treated with a combination of surgery, radiotherapy and chemotherapy, but they are difficult to tackle due to their dense core, rapid growth and location deep within the brain tissue.
Clinicians can better tailor their treatments to GBM patients by using MRI to assess the tumor's diffusivity (how readily it infiltrates surrounding tissue) as well as its proliferation rate. They often incorporate these parameters into a mathematical model that can predict the tumor's next move.
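A widely used mathematical model of this kind is the Fisher-KPP reaction-diffusion equation, in which a diffusion coefficient D captures how readily tumor cells infiltrate tissue and a rate ρ captures their proliferation. The sketch below is a minimal, illustrative 1-D simulation of that equation, not the study's actual model; the parameter values, grid, and initial Gaussian "seed" are assumptions chosen only to demonstrate the idea.

```python
import numpy as np

def simulate_fisher_kpp(c0, D, rho, dx, dt, steps):
    """Advance a 1-D Fisher-KPP tumor-density profile in time.

    dc/dt = D * d2c/dx2 + rho * c * (1 - c)
    c is a normalized cell density in [0, 1]; D is the diffusivity
    (mm^2/day) and rho the proliferation rate (1/day). Illustrative only.
    """
    c = c0.copy()
    for _ in range(steps):
        # finite-difference Laplacian with no-flux (Neumann) boundaries
        lap = np.empty_like(c)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        lap[0] = (c[1] - c[0]) / dx**2
        lap[-1] = (c[-2] - c[-1]) / dx**2
        c = c + dt * (D * lap + rho * c * (1 - c))
    return c

# hypothetical synthetic tumor: a Gaussian seed on a 1-D slice of tissue
x = np.linspace(-50, 50, 401)            # position in mm
c0 = np.exp(-x**2 / 10)                  # initial normalized cell density
c = simulate_fisher_kpp(c0, D=0.1, rho=0.025,
                        dx=x[1] - x[0], dt=0.1, steps=1000)  # ~100 days
```

With larger D the simulated profile flattens and spreads (a diffuse tumor); with larger ρ it grows denser in place, which is why the two parameters together shape the predicted invasion front.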
However, acquiring the data required to model each patient's tumor can be difficult, costly and risky. Predictions must therefore often be based on small datasets, which reduces the model's accuracy. An inaccurate prediction of a tumor's growth can make treatments less effective and contribute to treatment resistance or recurrence of the tumor.
To overcome this limitation, the researchers developed a model using deep learning, a form of machine learning, to automatically gauge tumor parameters from MRI scans. These estimates can then feed predictive models of the tumor's progression without requiring large amounts of data from each patient.
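The core inverse problem here is recovering the growth parameters from successive scans. As a simplified, classical stand-in for the paper's deep-learning approach, the sketch below generates two synthetic "scans" from a forward Fisher-KPP simulation and then brute-force searches for the diffusivity and proliferation rate that best reproduce the second scan. All values (grid, time step, parameter ranges) are illustrative assumptions, not the study's settings.

```python
import numpy as np

def forward(c0, D, rho, dx, dt, steps):
    # explicit Fisher-KPP step: dc/dt = D * d2c/dx2 + rho * c * (1 - c)
    c = c0.copy()
    for _ in range(steps):
        lap = np.zeros_like(c)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c = c + dt * (D * lap + rho * c * (1 - c))
    return c

x = np.linspace(-30, 30, 241)
dx = x[1] - x[0]
scan1 = np.exp(-x**2 / 8)                 # hypothetical baseline "MRI" density
true_D, true_rho = 0.08, 0.03             # ground truth, unknown in practice
scan2 = forward(scan1, true_D, true_rho, dx, 0.05, 600)  # follow-up scan

# grid search over (D, rho) for the pair that best reproduces the follow-up
best = min(
    ((D, r) for D in np.linspace(0.02, 0.14, 7)
            for r in np.linspace(0.01, 0.05, 5)),
    key=lambda p: np.sum((forward(scan1, p[0], p[1], dx, 0.05, 600) - scan2) ** 2),
)
```

A grid search like this scales poorly to 3-D imaging data, which is one motivation for replacing it with a learned estimator that maps scans to parameters directly.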
In a study published in the Journal of Theoretical Biology, the team first tested their modeling tool on MRI scans of synthetic tumors. They then teamed up with St. Michael's Hospital in Toronto to analyze MRI data from five GBM patients who had chosen, for undisclosed reasons, not to receive treatment for their condition. This gave the researchers a chance to see how GBM tumors grow in the absence of medical intervention.
Using this clinical and synthetic data, the researchers were able to validate their AI-assisted model. The team now aims to add MRI data from treated tumors to the datasets used to train the tool, which could expand the data trove from a handful of MRI scans to thousands.
“We would have loved to do this analysis on a huge data set,” said Cameron Meaney, PhD candidate in applied mathematics at the University of Waterloo and the study’s lead researcher. “Based on the nature of the illness, however, that’s very challenging because there isn’t a long life expectancy, and people tend to start treatment. That’s why the opportunity to compare five untreated tumors was so rare—and valuable.”
AI is increasingly used to speed up tedious and repetitive tasks in cancer imaging, such as tumor measurement and characterization. This creates the potential both to improve cancer detection and to support personalized treatments. However, the entry barrier is high: large datasets and substantial computing power are needed before these models can benefit more patients.
An ideal computational tumor-modeling tool requires minimal data input from the patient, performs reliably across different facilities, applies easily to different datasets, and can adapt to the frequent advances in cancer imaging.
According to the study authors, their deep learning-based modeling approach ticks these boxes and, with more data added in the coming years, could allow clinicians to better tailor GBM treatments to each patient's tumor.
“The integration of quantitative analysis into healthcare is the future,” concluded Meaney.