
Eric Topol, MD, founder and director of the Scripps Research Translational Institute, recently gave a keynote interview at the AI in Precision Oncology virtual event, in which he described his vision of how AI can help redefine cancer screening and diagnosis. Topol was interviewed by Doug Flora, MD, editor-in-chief of the journal AI in Precision Oncology, published by Mary Ann Liebert Inc., which sponsored the presentation. Below is a brief excerpt from the interview on the role AI will play in the future of clinical cancer care.

Doug Flora, MD, Editor in Chief, AI in Precision Oncology: Can you speak a little bit about changing the way we think about screening cancers, using more polygenic risk scores and more AI-driven algorithms?

Eric Topol, MD, founder and director of the Scripps Research Translational Institute: I’m so glad you brought that up, because that is something that I feel so strongly about—that we’ve got this all wrong. That is, we’re only picking up 12% to 14% of all the cancers that are being diagnosed—important cancers—through our mass screening. It is wasting tens of billions, if not hundreds of billions, of dollars every year. It’s inducing a lot of anxiety with all the false positives, as in mammography, but also other screening. And it’s all based on age, which is so dumb.

Now, when you start to think about the fact that cancer is occurring much more commonly in younger people now: people in their 20s are presenting with colon cancer, and women in their 30s with breast cancer. Well, if you just use the current (cancer screening) criteria, we’re going to miss these people. And this is, of course, not acceptable. So is there a better way—and I am convinced there really is—and we should be going after it. And that is: there are layers of data that would define the risk of each individual. We’ve already seen, just for pancreatic cancer, using datasets from the entire country of Denmark and the U.S. veterans dataset, that you can pick up pancreatic cancer risk from the notes, the lab tests, things where we wouldn’t see the trend. Then you start bringing in unstructured text, you start putting in polygenic risk scores, which are very inexpensive to obtain and which we have for most of the common cancers, not to mention cancer predisposition genes that are easily obtained. So we can define risk. And with AI, we can pick up things in images that we can’t see.

So if we start to reboot how we do cancer screening, I think we’re going to get to a point where we can narrow down the field. For example, 88% of women will never develop breast cancer. Why do those women need to have mammography every year or two? So let’s get this done. Let’s define risk and let’s not miss young people who are at risk for cancer. You know, we have things like cell-free tumor DNA tests that we can use, we have clonal hematopoiesis that we’re not using, CHIP tests that you can get even through sequencing. So there are lots of ways we can do this, but we can’t be complacent about how we do screening now because it isn’t working. It’s wasteful. The cancers that are being picked up, waiting for symptoms or for scans to be so abnormal, are often late. We’re not changing the natural history of cancer. We’ve got to get better at that too.

Flora: We’ve spent most of the last couple of decades refining screening for one individual organ cancer at a time. Certainly, there are AI tools that are starting to identify areas that might need attention from an endoscopist on colonoscopies. You’ve written extensively about these pattern doctors now being outpaced by machine learning, and about training these machine learning models to identify things at their faintest footprints. Let’s touch on pattern doctors and what these tools are able to do now.

Topol: Well, you know what’s interesting is that gastroenterologists have led the field of AI in doing randomized trials. Very recently there were 33 randomized trials from all around the world, many from China. But now, most places around the world have done randomized trials and, uniformly, the pickup of polyps is substantially better when machine vision is being used in real time during the colonoscopy. Interestingly, there are also studies showing that as the day goes on, the gastroenterologist is more likely to miss those polyps. Now, we haven’t seen a paper yet, of course, that shows that picking up adenomatous polyps at this significantly higher rate changes the natural history of cancer. But that, I think, is pretty likely.

We’ve already seen 80,000 women randomized to mammography with AI or without AI, and the AI helped tremendously in accuracy of diagnosis and in reducing the time to review the scan. So we’re seeing some great, compelling evidence for the benefit of AI in these pattern tasks. And, because we’re talking about cancer, I would extend that to pathology slides. It’s amazing that from a whole-slide image you could get the driver mutations, the structural variations that are in play, whether it is actually a malignancy, the primary source of that tumor, and even the prognosis from the slide, to a reasonable level of accuracy. We’re not using that. We’re still in the mode of pathologists who are not in agreement about what that H&E slide shows. So we can do better with the patterns in slides and in all kinds of medical imaging.

Flora: (Let’s speak about) the Internet of Things and these smart hospitals and homes that you’ve referred to in a couple of your Substacks. As we move into the future of medicine, where do you see this going in the next two years?

Topol: In the next couple of years I’m hoping that we’ll start to see cancer screening get upended. We won’t have it finalized, but at least some of the trials are ongoing now to challenge the old way of doing cancer screening. I think that there are a lot of moving parts here. We will get diagnosis improved, whether it’s because of the accuracy of scan interpretation in the next couple of years, or because each doctor, through their health system practice, has access to GPT support that gives them a differential for difficult diagnoses. So they’re at least thinking of things that they wouldn’t have otherwise.

We have to get rid of the rush job, of course, because if you only have seven minutes for a routine visit, that’s not enough to hear about a patient’s concerns and to think. So one of the things we have to work on in these next few years is to not let the AI make things worse, to not let more patients get squeezed into the daily schedule. That’s a challenge because we’ve got a lot of non-physician overlords out there who are making the call: “Oh, well, you’re more efficient now. Let’s get your schedule filled up even more.” These are really important things we have to confront in the next couple of years, because this is a very big, if not the most extraordinary, transformation of medicine that we’ll see in our lifetime.

But we have to plan ahead. What are going to be the big factors? There will be tools to summarize every aspect of a patient’s data before you even start to look at their chart, before you go see the patient. They will be ready in the next couple of years, and they won’t just add to diagnostic accuracy. Take the medical literature: it’s very hard for us to keep up. Two years from now, don’t worry about that; it’ll keep up for you. It’ll get you the daily skinny, if you want, on everything in your field. Because the corpus of medical literature is something that is right in the sweet spot of generative AI.
