Knowing whether chemotherapy has been successful in treating cancer is always critical, especially in the case of bladder cancer, in which treatment failure often leads to radical cystectomy, or bladder removal surgery. Experienced physicians are able to make that determination. Now, a new machine learning tool has been created to improve the accuracy of evaluating chemotherapy response in bladder cancer patients. Testing of the new tool shows that, although it cannot replace a human entirely, it can help less experienced physicians make the right call.
“If you use the tool smartly, it can help you,” said Lubomir Hadjiyski, Ph.D., a professor of radiology at the University of Michigan Medical School and the senior author of the study. The tool is built in such a way that it can be adapted to evaluate other solid tumors in the future. Hadjiyski pointed out that his lab has created AI tools for treatment response assessment and monitoring of head and neck, breast and lung cancers.
In the case of bladder cancer, surgeons often remove the entire bladder in an effort to keep the cancer from returning or spreading to other organs or areas. Evidence is building, though, that surgery may not be necessary if a patient has no evidence of disease after chemotherapy.
However, it’s difficult to determine whether the lesion left after treatment is simply tissue that has become necrotic or scarred as a result of treatment, or whether cancer remains. The researchers wondered if AI could help.
“The big question was when you have such an artificial device next to you, how is it going to affect the physician?” Hadjiyski said. “Is it going to help? Is it going to confuse them? Is it going to raise their performance or will they simply ignore it?”
Fourteen physicians from different specialties – including radiology, urology and oncology – as well as two fellows and a medical student reviewed pre- and post-treatment scans of 157 bladder tumors. The providers gave ratings on three measures assessing the level of response to chemotherapy, as well as a recommendation for each patient’s next treatment (radiation or surgery).
The researchers caution that the current study is a small one; they are now expanding the sample size with data from multiple institutions. They also caution that the tool cannot replace the judgment of a physician. For example, in the current study the AI tool made errors when it encountered catheters and metal prostheses, which degrade image quality and caused the tool to make mistakes. “Interestingly,” Hadjiyski explained, “in many of these cases, the observers were not adversely affected by the erroneous AI tool scores and correctly stood by their initial decisions, indicating that observers usually were not swayed in the wrong direction.” In comparison, when observers incorrectly classified a cancer but the AI tool did not, seeing the AI tool’s scores often persuaded observers to modify their assessment in the correct direction. As a result, use of the AI tool improved physicians’ diagnostic accuracy and reduced their variability.
According to Hadjiyski, AI tools are increasingly becoming an integral part of the clinical workflow. “It’s very important for physicians to understand the advantages and limitations of the AI tools and how to use them properly in order to achieve maximum treatment impact and optimal patient management,” he said. Medical students do not currently receive formal training on using AI tools. “At the moment the training is very ad hoc,” Hadjiyski said. “It will be very important to include training on using these kinds of AI decision support tools as a part of the curriculum.”