There is typically much reflection when we tip from one decade into the next, as we look back at what has transpired over the past 10 years. When it comes to omics technologies and their application to precision medicine, there have certainly been significant advances. Notable among them is the 100,000 Genomes Project, which both commenced and reached its final goal of 100,000 whole genomes sequenced within the decade, and which has served as the springboard for the launch of a genomic medicine program by England’s National Health Service. Today, there are more than a dozen country-wide efforts focused on collecting health and sequencing data from large and diverse swaths of each country’s citizens—all with an eye toward using these vast data to improve how we provide healthcare for both large populations of patients and, ultimately, individuals.
But are we yet able to consistently provide precision medicine? In small niches, yes. So, while much progress has been made, it is fair to say more advances are needed before precision medicine and/or genomic medicine becomes standard practice. How will we get there? Read on as four industry watchers share their thoughts on what may happen in the next ten years to advance the field.
The World of Genomics in the Decade to Come
Niv Mizrahi
CTO and co-founder, Emedgene
Over the next decade, patient genetic data will cross over from genetic labs into health systems, be incorporated into EMR systems, and regularly inform clinical decisions. This will include a shift in the use of genetics from diagnostics to patient care, where we utilize variant and pathway information to proactively affect disease mechanisms. Several trends will converge to make this a reality.
First, next-generation sequencing (NGS) cost reductions and technological advances are leading us to a point where, regardless of the genetic test ordered, it always makes sense to sequence a patient’s whole genome. Once we do this, we gain access to far more information than the original diagnostic question that led to sequencing, provided we can reinterpret the data.
Second, there is growing utility for a wide array of genetic tests performed at various points in a patient’s life, from healthy-population screening through pharmacogenomics, carrier screening, and more. We expect the rapid pace of research to introduce more clinically validated applications for genetic testing and to increase the diagnostic yield of each individual test. On the flip side, research will also advance prevention and care, using the insight we gain into disease mechanisms to improve patient outcomes.
However, genome interpretation is currently at a chokepoint, making it difficult to scale genetics-based care. This is due to the rapid growth of the genetic testing market, along with a move to NGS, which is significantly more complex to interpret. Only a few thousand geneticists and genetic counselors worldwide are working to interpret these data. If all of the U.S. certified geneticists and their peers worldwide worked only on rare disease patients, we estimate the worldwide interpretation capacity in 2020 is capped at roughly 2.4 million tests, just under the predicted volume of rare disease patient testing, which is expected to hit 2.5 million. And that excludes other types of genetic testing, such as hereditary cancer, healthy-population screening, and population genetics projects, all of which are interpreted by the same genetics workforce.
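To give a feel for how such a cap arises, here is a purely illustrative back-of-envelope calculation. The headcount and throughput figures below are hypothetical assumptions chosen only to show the shape of the estimate; they are not the figures behind the 2.4 million number.

```python
# Purely illustrative numbers -- not actual workforce statistics.
interpreters = 5_000        # assumed geneticists and genetic counselors doing interpretation
cases_per_week = 10         # assumed complex NGS cases each can sign out per week
working_weeks = 48          # assumed working weeks per year

annual_capacity = interpreters * cases_per_week * working_weeks
print(f"Estimated worldwide interpretation capacity: {annual_capacity:,} tests/year")
# Prints 2,400,000 tests/year, in line with the roughly 2.4 million cap cited above.
```

With any plausible combination of these parameters, the ceiling sits uncomfortably close to projected rare disease testing volume alone, which is the point of the estimate.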
Over the next few years, cognitive AI solutions that automate genetic interpretation will be widely adopted, alleviating the interpretation bottleneck and enabling growth in genetic testing. These solutions will deliver succinct results and condense the interpretation research cycle (which can be as long as 16 hours in the case of a rare disease), keeping geneticists in the loop while ensuring they spend significantly less time per test and enabling high-throughput testing. This type of solution won’t be achieved with any single machine learning algorithm or neural network, but rather with a whole stack of AI algorithms working together to replicate the work of geneticists in what we call cognitive genomics intelligence.
That leads us to a third trend. Once cognitive genomics intelligence can automate interpretation, it can also be used to automate re-interpretation, or to query the patient’s genome for different diagnostic questions at different points in time. Imagine a cognitive genomics intelligence embedded in the EMR, activated throughout a patient’s life. This is a core enabling technology for genetics to cross over from the lab to the clinic, from diagnostics to care. Our expectation is that these systems will provide a robust layer of explainable AI, so that the results eventually become accessible to any clinician.
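As a rough illustration of the re-interpretation idea, the sketch below shows a stored genome being re-queried whenever curated variant knowledge changes. All names, classes, and data values are hypothetical; a production system would sit behind the EMR, draw on a full knowledge base and AI stack, and wrap its output in an explainable-AI layer.

```python
# Minimal, hypothetical sketch of automated re-interpretation on knowledge updates.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Variant:
    gene: str
    hgvs: str                          # variant description, e.g., "c.123A>G"
    classification: str = "VUS"        # last stored classification
    last_reviewed: date = field(default_factory=date.today)

@dataclass
class PatientGenome:
    patient_id: str
    variants: list[Variant] = field(default_factory=list)

def reinterpret(genome: PatientGenome,
                knowledge_base: dict[tuple[str, str], str]) -> list[str]:
    """Re-check stored variants against the latest gene/variant knowledge and
    return alerts for any classification that has changed."""
    alerts = []
    for v in genome.variants:
        latest = knowledge_base.get((v.gene, v.hgvs), v.classification)
        if latest != v.classification:
            alerts.append(f"{genome.patient_id}: {v.gene} {v.hgvs} "
                          f"reclassified {v.classification} -> {latest}")
            v.classification = latest
            v.last_reviewed = date.today()
    return alerts

# Hypothetical usage: run for every stored genome whenever the knowledge base updates.
genome = PatientGenome("P-001", [Variant("GENE1", "c.123A>G")])
updated_kb = {("GENE1", "c.123A>G"): "Likely pathogenic"}
print(reinterpret(genome, updated_kb))
```

The same loop, pointed at a new clinical question instead of a knowledge-base update, is what lets a genome sequenced once keep paying diagnostic dividends throughout a patient’s life.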
Protection and Individual Control of Personal Health Data Will Fuel Research
Dawn Barry
president and co-founder, LunaPBC
Discovery flows from research, and research requires data. That longstanding truth will not change, but how crucial data is acquired, aggregated, controlled, and protected will change dramatically in the decade ahead. This transformation will occur as our thinking matures around personal data sovereignty and around accountability and transparency in data stewardship.
A marked change in public sentiment around health data privacy is afoot. Google is under investigation by the Office for Civil Rights in the Department of Health & Human Services over its partnership with Ascension, the nation’s largest nonprofit health system, which provided health data without notifying individuals that their information had been disclosed. This, despite assertions that the arrangement was compliant with HIPAA regulations governing the disclosure of personal health information. And on January 1st, the strictest data privacy law in the U.S., the California Consumer Privacy Act, took effect, strengthening consumer data privacy rights in the country’s most populous state.
The public’s growing lack of trust in the institutions holding, buying, and using people’s data has fueled fears that data may be used against individuals and their families—from discrimination to “tailored” information feeds. People also now better understand the value of their personal data, and that they are not sharing in the value created from it. With DNA data in particular, people are now keenly aware that this information is shared within families, as numerous 2019 headlines detailing law enforcement applications demonstrated.
Data fuels research, which in turn fuels discovery, but ultimately it is people who supply this data—sick people, healthy people, old and young people, rich and poor people, people of all colors. They are the best curators of their own health conditions. If the past ten years are remembered as the decade that made genome sequencing for disease research possible, I believe the next ten years will see us pursue discovery with a more holistic and inclusive lens. We will broaden our study beyond disease research to human health—with ‘health’ defined as more than just the absence of disease—and quality of life, recognizing that genetics contributes a mere 30% to premature death. Human behavioral patterns, social circumstances, health care, and environmental exposure contribute the remaining 70%. I hope transcriptomes, microbiomes, and epigenetics will complement DNA datasets, and that person-reported, real-world, and environmental information will be included.
I’ll use the next decade to champion raising the standing of people from subjects of research to partners in discovery. In our increasingly digital world, and recognizing that all personal health data—including DNA, health records, social and structural determinants of health, and clinical outcomes—starts with people, it stands to reason that including people as true partners represents a step-function increase in discovery. To make people partners in discovery, we must win their trust, starting with transparency and assurances that they control how their data is used and with whom it is stored, and by empowering them to un-share all of their data, at any time, if they wish.
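A toy sketch of what patient-controlled sharing and revocation might look like at the data layer appears below. All names are hypothetical, and a real platform would involve far more: audit trails, legal agreements, and downstream deletion guarantees.

```python
# Hypothetical sketch: a patient-controlled consent ledger in which sharing can be
# granted per data type and fully revoked ("un-shared") at any time.
class ConsentLedger:
    def __init__(self, patient_id: str):
        self.patient_id = patient_id
        self.grants: dict[str, set[str]] = {}   # recipient -> data types shared

    def share(self, recipient: str, data_types: set[str]) -> None:
        self.grants.setdefault(recipient, set()).update(data_types)

    def unshare_all(self) -> None:
        """Revoke every grant -- the patient withdraws all sharing at once."""
        self.grants.clear()

    def is_allowed(self, recipient: str, data_type: str) -> bool:
        return data_type in self.grants.get(recipient, set())

ledger = ConsentLedger("patient-123")
ledger.share("research-study-A", {"genome", "health_records"})
print(ledger.is_allowed("research-study-A", "genome"))   # True
ledger.unshare_all()
print(ledger.is_allowed("research-study-A", "genome"))   # False
```

The design point is that the grant lives with the person, not with the institution, so revocation is a first-class operation rather than an exception.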
Research has suffered for lack of data scale, scope, and depth, including insufficient ethnic and gender diversity, datasets that lack environment and lifestyle data, and snapshots-in-time versus longitudinal data. Artificial intelligence is starved for data that reflects population diversity and real-world information. I worry about the impact on research if people disengage with science and digital tools for fear of privacy violations. It’s time to feed discovery with data that reflects the diversity of the population we wish to serve. I believe people are the key to the next generation of discovery, and that protecting their privacy will empower discovery.
How Will Digital Health, Big Data, and Artificial Intelligence Impact Personalized Medicine and Healthcare in the Next Decade?
Chris Cournoyer
board of directors, CareDx
Let us examine the current digital landscape as context for answering the question of what technological advances will mean to healthcare in the next decade. We are in the early days of applying technology to healthcare compared with other industries. Implementing electronic health records (EHRs) was only legislated by the HITECH Act in 2009. In the rush to comply with the Meaningful Use standards, implementations have most often resulted in digitized records in which valuable clinical information is buried as free text in pathology reports and in physicians’ and nursing notes. Additionally, the majority of EHR solutions are not currently capable of integrating a patient’s genomic and clinical profile into a seamless patient record. Consolidating patient data even within an integrated health system is challenging, let alone across health systems.
Numerous challenges remain as we exit this decade, but a number of new tools, platforms, and niche solutions are emerging to fill the gaps in existing EHR solutions. The creation of datasets through manual curation, natural language processing tools, and the early use of real-world evidence show promise in offering new insights for enhancing care protocols and predicting clinical events earlier. Pilots of artificial intelligence in areas where we can amass large enough datasets, such as imaging applications, have exhibited success and piqued our interest in expanding into other areas of healthcare.
In the next decade, new legislation providing patients access to their data, along with new solutions to integrate patient data, should provide the impetus to create truly robust patient data repositories. Patient data sets will extend well beyond the boundaries of the existing EHR record and incorporate all patient data, including ambulatory visits, wearable device data, biosensor patches, medication compliance devices, and even observed patient behaviors and patient-reported outcomes. All of these data will be assembled into a single, longitudinal, comprehensive view of the patient. New standards will evolve to allow integration, enabling more robust real-world data sets. This will provide significant benefits in oncology, where the patient is truly an n-of-one: the challenge of assembling large enough real-world data sets will be overcome, and oncology patient cohorts will be defined in granular clinical and molecular terms.
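As a rough illustration of what "assembled into a single, longitudinal view" means in practice, the sketch below merges events from several hypothetical sources into one time-ordered record. Field names and sources are illustrative only; real integrations would lean on interoperability standards such as HL7 FHIR.

```python
# Hypothetical sketch: merge events from several patient data sources into one
# longitudinal, time-ordered record.
from datetime import datetime

def build_longitudinal_record(*sources: list[dict]) -> list[dict]:
    """Combine event lists from EHR, wearables, biosensors, etc. and sort by time."""
    merged = [event for source in sources for event in source]
    return sorted(merged, key=lambda e: e["timestamp"])

ehr_events = [
    {"timestamp": datetime(2029, 3, 1, 9, 0), "source": "EHR", "type": "ambulatory_visit"},
]
wearable_events = [
    {"timestamp": datetime(2029, 3, 1, 7, 30), "source": "wearable", "type": "resting_heart_rate", "value": 62},
]
patient_reported = [
    {"timestamp": datetime(2029, 3, 2, 20, 0), "source": "patient", "type": "reported_outcome", "value": "mild fatigue"},
]

record = build_longitudinal_record(ehr_events, wearable_events, patient_reported)
for event in record:
    print(event["timestamp"], event["source"], event["type"])
```

The hard part in the real world is not the merge but the standards work that makes events from different systems comparable in the first place.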
Applying AI tools to these more robust data sets will produce new algorithms and insights and—when combined with technology to better connect the patient and physician—should enable more proactive patient care models. Chronic diseases such as hypertension and diabetes will be monitored in near-real time, and critical clinical events will be predicted in advance, allowing earlier communication and intervention. Enhanced monitoring of oncology patients with liquid biopsies to detect resistance mutations has already begun, but in the upcoming decade we should see greater adoption and wider dissemination of knowledge of next-step therapy options across a greater number of cancer sub-types. For organ transplant patients, early detection of organ rejection is possible and, coupled with real-world evidence of a patient’s immunosuppressant drugs and outcomes, should provide the basis for optimizing patient-specific drug protocols, thus improving quality of life and outcomes.
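For a concrete (and deliberately simplistic) flavor of near-real-time chronic-disease monitoring, the sketch below flags a patient whose rolling blood pressure average drifts above a threshold. The rule, values, and thresholds are hypothetical stand-ins for the far richer predictive models described above.

```python
# Hypothetical sketch: flag a patient whose recent systolic readings trend high.
from collections import deque

def rolling_bp_alert(readings, window=7, threshold=140):
    """Yield an alert whenever the rolling mean of the last `window` systolic
    readings exceeds `threshold` mmHg (illustrative rule only)."""
    recent = deque(maxlen=window)
    for day, systolic in readings:
        recent.append(systolic)
        if len(recent) == window and sum(recent) / window > threshold:
            yield (f"Day {day}: {window}-day mean systolic "
                   f"{sum(recent)/window:.0f} mmHg -- notify care team")

# Simulated home-monitoring stream (hypothetical values trending upward).
stream = [(day, 132 + day) for day in range(1, 15)]
for alert in rolling_bp_alert(stream):
    print(alert)
```

A production system would replace the fixed threshold with a personalized, model-driven prediction and route the alert through the connected care team rather than a print statement, but the monitoring loop is the same shape.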
It is with great enthusiasm and hope that I look to the next decade, in which we will leverage the foundational infrastructure and tools we have been investing in over the past two decades to finally create a dynamic learning system in healthcare that delivers better patient care and outcomes.
Health Data as Medicine
Ardy Arianpour
CEO and co-founder, Seqster
As we look ahead to 2030, it seems clear to me that precision medicine will see a fundamental shift in focus away from providers and physicians and toward patients, driven by person-centric health data interoperability platforms that integrate episodic EHR data, baseline genetic (DNA) data, continuous fitness and device data streams, and other emerging data such as social determinants of health. In the past 10 years (2010-2019), the industry talked a lot about patient-centric trials, but it was missing one thing—the patient. As a result of siloed data sets and a lack of patient engagement, we were never able to fully realize the dream of precision medicine.
For any interoperability technology to succeed, the patient needs to be engaged and truly in control. The patient experience will be instrumental in providing the right care in combination with the right technology. In the next decade we will demonstrate that health data is medicine on a massive scale as opposed to one-off cases or rare anecdotes.
Another challenge that a person-centered interoperability platform addresses to enable precision medicine is the absence of key patient outcomes data. New technologies that capture health data across the entire health spectrum will allow clinicians and researchers to turn any de novo cohort into a mini-Framingham study without the cumbersome, difficult, and costly processes of collecting data every two years. Instead, the data will be updated in real time, giving tools like AI and machine learning the real-world data needed to advance healthcare significantly beyond what is possible today. We have advanced genomic technologies, diagnostics, and many new rapid ways of generating results. What we have sorely lacked, however, is longitudinal data and effective ways for patients to interact with their health data. We will finally give patients a way of connecting all the dots, and personalized medicine will finally live up to its expectations.
Healthcare in general continues to move in the direction of person-centric care. This more active role for patients includes responsibilities such as improving healthy behaviors, self-management of chronic conditions, and engaging in shared decision making with healthcare providers. All of these activities will be facilitated by information technology platforms that patients actually control and actively manage. These patient-centric health IT platforms will also enable individuals to have greater opportunity to review quality and cost information thereby providing opportunities to make more informed decisions on where to seek care for themselves and their families.
Regulatory policies, such as the 21st Century Cures Act, are also accelerating the adoption of patient-centric health IT infrastructures and will significantly advance precision medicine as a result.
Up until now, precision medicine has largely been in the hands of the few who have access to genomic data and disease information. I am already seeing signs that precision medicine will be in the hands of those for whom it matters most—the patients.