Artificial intelligence (AI) is starting to make its mark on medicine. By capturing information from a variety of sources, including electronic health records, genomics, and wearable devices, scientists and engineers hope to use machine learning (ML) to aid diagnosis and predict patient outcomes. Some have already begun to demonstrate the potential of this technology. For example, a team at Google recently reported a deep learning algorithm that detects diabetic retinopathy with high sensitivity and specificity. Other groups have developed AI programs to accurately predict breast cancer risk, identify colonic polyps, and classify lung cancers by prognosis.
Despite its potential, this technology is far from perfect. For instance, Google Flu Trends, an algorithm that estimates flu activity from internet searches, vastly overestimated flu prevalence in the U.S. in 2013 despite an otherwise good track record. Earlier this year, an investigation by STAT News found that IBM's Watson for Oncology was struggling to live up to expectations.
As AI further penetrates the field of medicine, some are concerned about unexpected consequences. Federico Cabitza, Ph.D., a professor of human-data interaction at the University of Milano–Bicocca in Italy, and his colleagues, Raffaele Rasoini and Gian Franco Gensini, both at the Center for Advanced Medicine in Florence, Italy, outlined some of their concerns in a JAMA article published earlier this year.
According to Cabitza, he and his co-authors share the feeling that the attention, expectations, and hopes that some people currently have toward the role of machine-learning decision support systems (ML-DSS) are "extremely off-balance," due to the hype around the topic as well as the trivialization of what ML is and what it can actually do for our health and well-being.
“None of us really aimed to deny the potential advantages that these systems could bring into medical practice, nor do we believe that their introduction should be blocked or hindered in virtue of a prejudicial opposition to innovation,” Cabitza said. “In the same vein though, [ML] should not be adopted nor advocated in medicine on the basis of a sheer pro-innovation bias.”