An artificial intelligence (AI) program developed by a U.S. and Canadian team led by NYU Grossman School of Medicine can predict patient outcomes to a high level of accuracy by using information from clinician notes and patient records.
The tool is currently being used at the NYU group of hospitals to predict whether discharged patients are at high risk of being readmitted within one month.
As reported in Nature, the AI program—NYUTron—takes advantage of recent advances in natural language processing and can be used to complete many clinical and operational predictive tasks.
“Our findings highlight the potential for using large language models to guide physicians about patient care,” said study lead author Lavender Jiang, a doctoral student at NYU’s Center for Data Science, in a press statement.
“Programs like NYUTron can alert healthcare providers in real time about factors that might lead to readmission and other concerns so they can be swiftly addressed or even averted.”
Before using the tool in the clinic, the researchers trained NYUTron using millions of clinical notes from the electronic health records of 336,000 men and women treated by the NYU Langone group of hospitals between 2011 and 2020.
Many current clinical predictive models, including those using AI, rely on structured inputs to make predictions. Extracting such structured information from clinical notes can be difficult, as each healthcare professional has a different note style and structure.
“This reliance on structured inputs introduces complexity in data processing, as well as in model development and deployment, which in part is responsible for the overwhelming majority of medical predictive algorithms being trained, tested and published, yet never deployed to assess their impact on real-world clinical care,” write the authors.
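For contrast, a conventional structured-input model looks something like the sketch below: predefined fields must be hand-engineered from every record before any prediction can be made. The feature names, values, and model choice here are illustrative assumptions, not the baselines used in the study.

```python
# Illustrative structured-input baseline: predictions depend on hand-engineered
# fields (age, lab values, prior admissions) that must first be extracted from
# each record. Feature names and values are made up for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [age, serum creatinine, admissions in the prior year]
X = np.array([[67, 1.8, 2],
              [45, 0.9, 0],
              [72, 2.3, 3],
              [51, 1.0, 1]])
y = np.array([1, 0, 1, 0])  # 1 = readmitted within 30 days

model = LogisticRegression().fit(X, y)

# Estimated readmission risk for a new patient, given the same structured fields.
print(model.predict_proba([[60, 1.5, 1]])[:, 1])
```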
In the last few years, large language models built on large neural networks have been developed that can better read and interpret human language. In this study, Jiang and colleagues theorized that a large language model would be well suited to navigating the unstructured clinical notes in medical records and, as a result, could make better clinical predictions.
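To make the general approach concrete, the sketch below fine-tunes a pretrained transformer encoder to classify raw clinical note text for a hypothetical 30-day readmission label. The model name, example records, and hyperparameters are illustrative assumptions, not the actual NYUTron pipeline or data.

```python
# Minimal sketch: fine-tuning a pretrained encoder to predict 30-day readmission
# directly from note text. Model, records, and settings are illustrative only.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical de-identified examples: raw note text plus a binary readmission label.
records = [
    {"text": "Discharge summary: 67M admitted for CHF exacerbation ...", "label": 1},
    {"text": "Discharge summary: 45F elective knee arthroplasty, uneventful ...", "label": 0},
]
dataset = Dataset.from_list(records)

model_name = "bert-base-uncased"  # stand-in for a clinically pretrained encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    # Clinical notes are long; real pipelines typically truncate or chunk them.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=512)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="readmission-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
)
trainer.train()
```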
The researchers tested the accuracy of NYUTron at predicting five common tasks or outcomes for patients: 30-day all-cause readmission, in-hospital mortality, comorbidity index, length of stay, and insurance denial.
Using a statistical measure known as area under the curve, NYUTron showed a 5–15% improvement over standard predictive models across these five measures. For example, it identified 85% of patients who died in hospital and correctly estimated the length of stay for 79% of patients.
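Area under the curve (here, the area under the ROC curve) summarizes how well a model's risk scores rank true positive cases above negatives, with 0.5 being chance and 1.0 perfect. The snippet below shows how such a comparison is computed; the labels and scores are made up for demonstration and are not data from the study.

```python
# Comparing two models' discrimination with ROC AUC on made-up example data.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 0, 1, 1, 0, 1]  # 1 = readmitted within 30 days
baseline_scores = [0.2, 0.4, 0.35, 0.6, 0.5, 0.7, 0.3, 0.45]  # structured-input model
llm_scores = [0.1, 0.3, 0.70, 0.4, 0.8, 0.9, 0.2, 0.65]       # language-model scores

print(f"baseline AUC: {roc_auc_score(y_true, baseline_scores):.2f}")
print(f"LLM AUC:      {roc_auc_score(y_true, llm_scores):.2f}")
```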
“These results demonstrate that large language models make the development of ‘smart hospitals’ not only a possibility, but a reality,” said study senior author and neurosurgeon Eric Oermann.
“Since NYUTron reads information taken directly from the electronic health record, its predictive models can be easily built and quickly implemented through the healthcare system.”
Oermann added that future studies may explore the model’s ability to extract billing codes, predict risk of infection, and identify the right medication to order, but cautioned that NYUTron is designed to support healthcare providers, not replace their professional input into patient care.