AI-Generated Cancer Treatment Plans Better, Faster—But Yet to be Validated in Patient Settings


A new study suggests that artificial intelligence (AI) has the potential to make radiation treatment planning for cancer both better and faster. However, the study, which involved cancer patients and physicians, also cautions that AI-based systems must be validated in a real-world clinical setting.

In the study, published in the June 3, 2021 issue of Nature Medicine, a team of researchers had physicians evaluate radiation treatments generated by a machine learning (ML) algorithm alongside treatments generated by human experts. The ML-generated treatments were chosen over the human-generated plans in 72 of 100 cases.

“We have shown that AI can be better than human judgement for curative-intent radiation therapy treatment. In fact, it is amazing that it works so well,” said Chris McIntosh, M.D., lead author of the paper, Scientist at the Peter Munk Cardiac Centre, Techna Institute, and Chair of Medical Imaging and AI at the Joint Department of Medical Imaging and University of Toronto.

McIntosh and his colleagues also found that, in 89 percent of the cases, the ML-generated treatments were considered clinically acceptable. The ML planning process was also 60 percent faster than the human-driven process, reducing the overall time from 118 hours to 47 hours. The researchers noted that this method has the potential to improve quality of care as well as reduce costs.
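The reported speed-up is consistent with the quoted times. As a quick arithmetic check (both hour figures are the ones reported above):

```python
# Sanity-check the reported 60 percent speed-up from the quoted planning times.
human_hours = 118  # human-driven planning time reported in the article
ml_hours = 47      # ML-driven planning time reported in the article

reduction_pct = (human_hours - ml_hours) / human_hours * 100
print(f"Time reduction: {reduction_pct:.0f}%")  # prints "Time reduction: 60%"
```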

The study did not stop there. Researchers were also interested in what actually happens when AI systems are deployed in a real-world clinical setting compared to a simulated one.

“There has been a lot of excitement generated by AI in the lab, and the assumption is that those results will translate directly to a clinical setting. But we sound a cautionary alert in our research that they may not,” said Tom Purdie, Ph.D., senior author of the study, Medical Physicist at the Princess Margaret Cancer Centre and Associate Professor, Department of Radiation Oncology, University of Toronto.

Researchers asked treating radiation oncologists to evaluate two different radiation treatments — one ML-generated and the other human-generated — in two groups of patients who were similar in demographics and disease characteristics. One group had already received treatment. The other had not. The oncologists did not know the source (human or ML) of each treatment plan.

Human-generated treatments were created individually for each patient, as per normal protocol, by a specialized radiation therapist, while each ML treatment was developed by a computer algorithm trained on a database of radiation therapy plans from 99 patients previously treated for prostate cancer at Princess Margaret.
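To make the "trained on a database of prior plans" idea concrete, here is a deliberately simplified sketch. The study's actual model is far more sophisticated (it learns from patient imaging); this toy version merely illustrates the general principle of reusing knowledge from similar prior patients via nearest-neighbour retrieval. All feature names, values, and plan labels below are hypothetical.

```python
# Toy illustration of learning from a database of prior treatment plans:
# retrieve the plan of the most similar previously treated patient.
# Features and plan labels are invented for illustration only.
import math

# Hypothetical database: (patient features, stored plan label)
# features = (prostate_volume_cc, prescribed_dose_gy)
prior_plans = [
    ((45.0, 78.0), "plan_A"),
    ((60.0, 78.0), "plan_B"),
    ((38.0, 60.0), "plan_C"),
]

def suggest_plan(features):
    """Return the stored plan label of the most similar prior patient,
    measured by Euclidean distance in feature space."""
    best = min(prior_plans, key=lambda rec: math.dist(rec[0], features))
    return best[1]

# A new patient closely resembling the first database entry:
print(suggest_plan((44.0, 78.0)))  # prints "plan_A"
```

In the real system, the learned model proposes a full dose distribution rather than a label, but the design choice is the same: new plans are informed by a curated database of past clinical decisions.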

Although ML-generated treatments were rated highly in both patient groups, the results diverged between them. Among patients who had already received treatment, ML-generated plans were selected over human ones 83 percent of the time. Among patients still awaiting treatment, whose selected plan would actually be used for their care, that figure dropped to 61 percent.

“Once you put ML-generated treatments in the hands of people who are relying upon it to make real clinical decisions about their patients, that preference towards ML may drop. There can be a disconnect between what’s happening in a lab-type of setting and a clinical one,” Purdie said.

The take-away for researchers developing AI systems for use in cancer treatment is that they need to validate those systems in a real clinical setting. Purdie put it this way: "If physicians feel that patient care is at stake, then that may influence their judgement, even though the ML treatments are thoroughly evaluated and validated."