
Maybe it was the constant buzz at AMP 2024 in Vancouver about the FDA’s regulation of laboratory-developed tests (LDTs), but as I explored the posters and booths showcasing the latest AI-powered diagnostics, I couldn’t help but wonder if these potentially revolutionary read-outs will ever see the light of day.

If relatively simple and tangible biochemical tests, such as a binary “yes/no” result from testing an analyte, are about to be put under a microscope, I’d imagine AI-enabled diagnostics are headed for the operating table.

Can an AI algorithm even be an LDT?

One important question is when an AI algorithm is considered part of the LDT as a whole medical device and when it is considered its own “Software as a Medical Device” (SaMD).

Roger Klein, MD, PhD, a leading authority on public policies related to the implementation of precision medicine, was of the opinion that an AI algorithm cannot itself be an LDT, but can be part of one. Along these lines, the FDA has authorized nearly 1,000 “AI-enabled” medical devices, many of them diagnostics, since 1995 (the agency released an updated list of 951 such devices earlier this year). Not everyone agreed, however; others held that an AI algorithm can be considered an LDT if it is developed and used internally within a single clinical laboratory.

Regulations for AI algorithms in clinical practice

Regardless of whether an AI algorithm is a component of an LDT or an LDT in and of itself, the emergence of LDT-like AI platforms poses new challenges for regulators: there is currently no established process for the regulatory evaluation of AI-based tools. Although the FDA has established numerous regulations to guarantee testing quality in clinical laboratories, oversight of LDTs in the U.S. has traditionally fallen under the Clinical Laboratory Improvement Amendments (CLIA) program. That said, all of this is a bit up in the air now.

Commercial AI algorithms distributed to multiple laboratories, on the other hand, are not considered LDTs. Before using any AI algorithm for clinical work, however, a laboratory must still validate it for its intended use, irrespective of whether it is an LDT, an LDT-like tool, or a non-LDT tool.
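To make “validate for its intended use” concrete, the sketch below shows one small piece of what such an accuracy study might look like: comparing an algorithm’s binary calls against reference results from an orthogonal method and checking sensitivity and specificity against acceptance criteria. The function, specimen data, and thresholds here are hypothetical; a real validation plan covers far more (precision, reportable range, interfering factors) and sets criteria appropriate to the test’s intended use.

```python
# Minimal sketch of an accuracy check for a binary AI classifier, assuming a
# labeled validation cohort with reference ("truth") results from an
# orthogonal method. The acceptance thresholds below are hypothetical; a real
# validation plan would define criteria suited to the test's intended use.

def validate_binary_algorithm(predictions, reference,
                              min_sensitivity=0.95, min_specificity=0.95):
    # Tally the confusion matrix for positive/negative calls.
    tp = sum(1 for p, r in zip(predictions, reference) if p and r)
    tn = sum(1 for p, r in zip(predictions, reference) if not p and not r)
    fp = sum(1 for p, r in zip(predictions, reference) if p and not r)
    fn = sum(1 for p, r in zip(predictions, reference) if not p and r)

    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")

    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "passes": sensitivity >= min_sensitivity and specificity >= min_specificity,
    }

# Example: algorithm calls vs. reference results for 10 validation specimens.
preds = [True, True, False, True, False, False, True, False, True, False]
truth = [True, True, False, True, False, True, True, False, True, False]
print(validate_binary_algorithm(preds, truth))
```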

Reimbursement issues for AI

Of course, you can’t discuss any medical device without considering who will pay for the test. The reimbursement landscape for AI-backed applications in health care is complex and evolving.

One area that is somewhat clear pertains to scenarios in which regulated software devices deliver clinical analytical services to a healthcare practitioner, sometimes referred to as “algorithm-based health care services” (ABHSs). These stand in contrast to AI that streamlines operational tasks, or generative AI used to answer clinical questions in uncontrolled or informal settings. ABHSs use AI to generate clinical outputs that help diagnose or treat a patient’s condition.

Due to antiquated reimbursement frameworks, U.S. health payment systems have difficulty integrating ABHS technologies. These frameworks lack specific billing codes and standardized criteria for evaluating economic impact and efficacy. Challenges such as producing strong proof of better patient outcomes and handling liability issues make it harder for healthcare systems to evaluate and approve ABHS tools for coverage, allowing innovation in the field to outstrip payers’ capacity to keep up.

Possible solutions include formalizing Medicare pathways (such as an add-on payment policy for software), piloting alternative payment models (such as performance-linked or episode-based reimbursement), and developing specialized reimbursement codes for ABHSs. Collaboration among stakeholders (developers, healthcare providers, payers, and regulators) is key to establishing evidence-based guidelines. Real-world data and continuous-learning systems can help keep ABHS tools useful for a long time to come.

Human-generated decisions

The fact remains that some patients would rather rely on their primary care physician than engage with AI-assisted healthcare decisions. At the end of the day, healthcare is not a sterile, robotic practice that lives in some virtual universe of ones and zeros.
