Pancreatic cancer

Artificial intelligence (AI) could help endoscopists to identify which solid lesions in the pancreas are cancerous and may be particularly useful for those with less experience, trial findings suggest.

The randomized crossover study yielded positive results for human-AI interaction and demonstrated the model's potential to facilitate clinical diagnosis.

The joint AI system used both endoscopic ultrasonographic images and clinical information and was particularly helpful for novice endoscopists, according to the findings in JAMA Network Open.

“In the future, this joint-AI model, with its enhanced transparency in the decision-making process, has the potential to facilitate the diagnosis of solid lesions in the pancreas,” reported Bin Cheng, PhD, from Huazhong University of Science and Technology in Wuhan, China, and colleagues.

Pancreatic cancer is a prevalent cause of masses in the pancreas and has an overall five-year survival rate of around just 10 percent, the authors noted.

Endoscopic ultrasonography has emerged as a valuable technique for diagnosing pancreatic cancer, but less aggressive neoplasms and benign pancreatic conditions such as chronic pancreatitis can also manifest as masses in the pancreas.

While fine-needle aspiration or biopsy guided by endoscopic ultrasonography has significantly improved overall diagnostic accuracy, concerns linger about its relatively low and unstable sensitivity and negative predictive value.

In search of complementary techniques that offered additional information, researchers developed a multimodal AI model that used both endoscopic ultrasonographic images and clinical data to distinguish carcinoma from noncancerous lesions.

Endoscopic ultrasonographic images and clinical information on 439 patients from a single institution who had solid lesions in the pancreas were collected to train and validate the joint-AI model.

A further 189 patients from three other institutions were used to evaluate the robustness and generalizability of the model.

The team then evaluated the model in a crossover trial at four Chinese centers, in which 12 endoscopists with varying levels of expertise were randomly assigned to diagnose solid lesions in the pancreas with or without AI assistance.

The retrospective dataset included 628 patients who underwent endoscopic ultrasonographic procedures, while 130 patients were prospectively recruited for the crossover trial.

The multimodal AI model showed robustness in sensitivity (0.88–0.99) and negative predictive value (0.86–0.99) across the diverse datasets, Cheng and co-workers report.
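For readers less familiar with these metrics, sensitivity and negative predictive value are both derived from a confusion matrix of model predictions against pathology. The sketch below uses hypothetical counts chosen for illustration, not data from the trial.

```python
# Illustrative calculation of sensitivity and negative predictive value (NPV).
# The counts below are hypothetical, not taken from the study.

def sensitivity(tp: int, fn: int) -> float:
    # Proportion of true cancers that the model correctly flags as cancer.
    return tp / (tp + fn)

def npv(tn: int, fn: int) -> float:
    # Proportion of "noncancerous" calls that are truly noncancerous.
    return tn / (tn + fn)

tp, fn, tn = 90, 10, 80  # hypothetical counts
print(f"Sensitivity: {sensitivity(tp, fn):.2f}")  # 0.90
print(f"NPV: {npv(tn, fn):.2f}")  # 0.89
```

A high NPV matters clinically because it indicates how confidently a "noncancerous" prediction can be used to spare a patient further invasive sampling.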

The area under the receiver operating characteristic curve was 0.996 with the internal test dataset and 0.955, 0.924, and 0.976 with the three external test datasets, respectively.

The diagnostic accuracy of the six novice endoscopists with less than a year’s experience of endoscopic ultrasonography significantly improved with AI assistance (0.69 vs 0.90).

The authors note that expert and senior endoscopists exhibited a greater tendency to reject the predictions of the AI models, but the provision of interpretability analyses, including Grad-CAM and SHAP, reduced this tendency.

“For future clinical application, the results of interpretability analyses along with the predictions should be reported by the AI models simultaneously,” they suggest.

“Because clinicians can verify that the AI model is basing its predictions on the correct aspects of the [endoscopic ultrasonographic] images and clinical features, they are more likely to accept the model’s prediction.

“In addition, rather than viewing the model as a black box, clinicians can engage with the model’s reasoning and use it as a complementary tool to support their decision-making process.”
