Nurse with masks
Credit: skaman306 / Getty Images

No one who works in the diagnostic industry was surprised that, in the middle of a pandemic surge, diagnostics were in short supply. It's not because the technology is hard, or because we don't know how to develop PCR, antigen, or antibody tests [1], or because scientists and labs didn't anticipate needing them as a pandemic broke out; it's because no one can build a product with an uncertain market, unknown demand, and unclear need. These are the basic dynamics of supply and demand: supply cannot be created for uncertain demand.

Historically, our healthcare system hasn't valued diagnostics: we don't invest in their development or infrastructure, we don't have a clear system to pay for them, and so when we need them, they aren't there. The US values drugs and treatments for diseases far more highly than diagnosis and prevention. Indeed, the Centers for Medicare and Medicaid Services (CMS) defines the 'clinical utility' of a diagnostic by how it changes the management of the patient; diagnosis alone is often not sufficient to garner coverage.

Valuing diagnostics would mean investing in research, development, and infrastructure, and establishing a clear way for manufacturers and laboratories to be paid for the diagnostics they develop, whether by government, in the market, or a combination of both.

Instead of the reality of the US over the past two years, imagine this scenario:

In early 2020, large laboratories were given the green light by the CDC and FDA to begin developing and commercializing PCR tests for Covid-19, along with funding from the US government to quickly scale up test development and infrastructure. At the same time, manufacturers were given funding to scale development and commercialization of at-home antigen tests. Both types of tests were automatically covered by Medicare and commercial insurance with no out-of-pocket costs to consumers.

The change? We would have known who had Covid, where it was spreading, and who needed to stay out of work and school and off airplanes.

By the summer of 2020, at-home rapid antigen tests would have been available at every supermarket, pharmacy, and convenience store for $1 a test. It could have become common practice to use them before work meetings, school, business trips, and weddings.

The change? We would have halted spread, known who was infectious, and regained a semblance of normalcy in our lives by using diagnostic data to isolate and avoid infection.

Consider that by the fall of 2020, we had significant data on both which comorbidities put people at greatest risk of poor outcomes with Covid and which structural variations in DNA independently indicated that a person was at higher risk.

Imagine if we had used that information to prioritize vaccine (and later booster) access and to make recommendations on working in person or wearing additional PPE based on people's genomics and genetics, giving those who wanted it access to that information to make informed healthcare, work, and personal decisions.

But we did none of these things.

Instead, we did what we do with most diseases: trial-and-error medicine. We restricted access to all forms of Covid testing, whether through lack of investment, price, or policy, until the public, nearly two years into the pandemic, began demanding it, even when they had to pay for it out of pocket. Finally, the government is sending antigen tests to homes; unfortunately, they will arrive after the Omicron surge has crested in most of the country. Insurance companies have created bureaucratic barriers to avoid following the Biden administration's guidance requiring them to reimburse Covid diagnostics paid for out of pocket, in the hope that people will give up (most will). We still have no tools to risk-stratify patients and prioritize those most likely to have a poor outcome.

The Covid pandemic should be shining a bright light on the US healthcare system, because what has happened during the pandemic (failure to diagnose, failure to accurately assess risk until the patient is declining, failure to stratify patients) happens every day, across all other diseases.

We are slow to diagnose infections before they become septic because we don't want to pay for the diagnostic. We prescribe targeted therapies, meaning therapies that work only if a patient has a particular subtype of disease, without bothering to figure out whether the patient has that subtype. We don't even have the diagnostic tools to diagnose Alzheimer's disease or NASH, two of the biggest oncoming health crises of our time. Each of these examples, and so many others, affects a smaller population, but in aggregate their cumulative toll far surpasses Covid's.

What do we need to change?

We need to invest in, value, and pay for diagnostics the same way we pay for drugs, hospital services, and other interventions in healthcare. We need a regulatory system, tied to our payment system, that clarifies the requirements for both development (how much evidence is required and parameters regarding risk) and reimbursement. If the reimbursement isn't sufficient for a diagnostic to be commercially viable, we need a mechanism to subsidize the development of diagnostic tools so they can be available when we need them.



1. PCR tests for Covid-19 are used to diagnose infection, but they also detect viral fragments during both the pre- and post-infectious periods. Antigen tests are best used to assess contagiousness. Antibody tests are used to determine a prior immune response within a relatively short window (4-6 months) post-infection.


Hannah Mamuszka is CEO and Founder of Alva10, a strategic consulting firm that advises payers and diagnostic developers on the health economic and clinical impact of novel diagnostic tools.

