Anne Paxton
January 2022—“Workflow” evokes a process that moves smoothly, like water, that doesn’t break down or grind to a halt, a sequence of steps that can be completed in a seamless manner.
Genomic workflows in information systems, however, fit poorly with the concept of “flow.” As genomic data migrate from the laboratory to an electronic health record or from one EHR to another, significant gaps can open between data generation, interoperability, and utilization, and those gaps can leave the data miscommunicated or misleading.
“The technology behind next-generation sequencing and genetic testing in general has advanced by leaps and bounds over the last 10 to 15 years,” says Alexis Carter, MD, physician informaticist, pathology and laboratory medicine, Children’s Healthcare of Atlanta. While the use of genomic testing has expanded rapidly, “information systems and electronic health records have really not been able to keep up.”
“The EHR is a sort of one-stop station for all information related to the patient’s health, including different diseases or disorders and workup—everything. So it is the ideal place for discrete genomic data to live,” says Somak Roy, MD, director of molecular pathology at Cincinnati Children’s Hospital Medical Center. But “the evolution of genomics and genomic health has essentially outpaced EHR systems’ ability to be interoperable with them.” As our understanding of the human genome and diseases expands, he adds, “we find new things that previously were interpreted differently because we just didn’t have the proper knowledge at the time.”
The problem is not only fast-evolving technology, but also the quantity and complexity of genomic data—compounded by the lack of necessary data standards and other factors, according to a working group of experts on EHR interoperability for clinical genomics data, formed by the Association for Molecular Pathology and chaired by Dr. Carter (with Dr. Roy as co-chair), which is exploring and recommending solutions.
“The whole reason this came to light, from AMP’s perspective,” Dr. Roy says, “is that we realized that given the stage of genomic medicine, we still don’t know how best to represent genomic data in the patient’s chart in the EHR.”
The troubles with genomic workflow have challenged laboratories and created pressure to find a fix. Unlike most other laboratory tests, even a single genetic test can contain a staggering amount and breadth of information, the working group points out. Adding to the problem, laboratories vary greatly in the scope and nature of their reporting and in their use of text or PDF files that transmit quickly but are difficult to plumb for data.
Meanwhile, the lack of interoperability often necessitates the use of paper records or manual transfer of data to and from instrument software, laboratory information systems, and EHRs. All of that creates a greater chance of error and potential patient harm with genomic data than with traditional laboratory tests.
The working group’s initiative to achieve a consensus standard “is probably one of the biggest undertakings that AMP has started,” Dr. Roy says. But adapting the EHR to genomic data is also a huge problem, he says. Even as gaps between genomic data generation, interoperability, and utilization have led to an array of potential errors and patient harms, the working group has found that solutions remain controversial.
[dropcap]“O[/dropcap]ne of the purposes of the working group initially,” Dr. Roy explains, “was to be able to present the gaps in great detail. And the next thing is to be able to use this information that the workgroup has published in this manuscript to build upon further work, and that includes systematic review of the current literature on EHR and genomics data interoperability and sharing the findings with all stakeholders involved in this process, including EHR and LIS vendors and other professional societies.”
In addressing the gaps, the working group has the advantage that every person in the group is intimately involved in molecular testing, Dr. Carter says. “So we know where all the pitfalls and potential pitfalls are for how molecular data is represented in an EHR and where providers may misinterpret things if they are not careful—or, I should add, if we’re not careful about how we display the data.”
When Dr. Carter gives lectures on usability and EHR displays, her customary message is, “You should absolutely never underestimate the power of a bad display to harm your patients.” Laboratory directors at her institution, in fact, are required to inspect how results are displaying in their EHR to make sure the tests show what they want and the display is safe.
The pandemic has slowed some of the progress of the working group, but the eventual goal, Dr. Carter says, is to figure out how to get discrete variant data into an EHR in a way that will be safe for patients, and also “how we handle reanalysis, reclassification, and reinterpretation of variants.” Secondarily, she adds, the goal is to figure out how to get the data to be portable and exportable between organizations, also in a safe manner.
When Dr. Roy started his career in molecular pathology in 2012, benchtop NGS instruments were being introduced in the clinical space, he says. “There was not much knowledge or realization about what is involved in terms of data analysis and representation of genomic data in the EHR.”
Awareness is greater a decade later. “But what’s lacking is the granular information,” he says. “Most vendors do rely on professional guidelines or standard-making organizations that might say, ‘Here are the specifications. This is how you would implement, for example, an interoperability interface in your system.’” This doesn’t exist for genomics, perhaps in part owing to a lack of concerted effort among molecular pathologists, clinicians, health IT professionals, and EHR vendors, in his view.
Interoperability is the main gap, Dr. Roy says. There are more modern interoperability specifications such as FHIR (Fast Healthcare Interoperability Resources), but they are not yet mainstream. HL7 version 2 messaging is still the predominant interoperability standard in the United States, and it would be a “herculean task,” he says, operationally and economically, to transition to FHIR across all institutions.
“The current interoperability and coding standards cannot appropriately represent the discrete details of genomic data generated by molecular pathology laboratories, to smoothly interoperate with EHR systems,” Dr. Roy says. Developing these standards as a concerted effort among the stakeholders is “the major gap to fill,” he says, and the key there is to represent discrete genomic data in the EHR, visualized properly in the patient’s chart.
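What “discrete genomic data” could look like in an EHR can be pictured with a small sketch. The Python function below builds a simplified FHIR-style Observation for a single variant. The function name is invented for illustration; the component codes are real LOINC genomics codes of the kind used in HL7’s genomics reporting guidance, but the structure is a sketch under those assumptions, not a validated, conformant FHIR resource.

```python
import json

def variant_observation(gene, c_hgvs, zygosity, classification):
    """Build an illustrative FHIR-style Observation for one discrete variant.

    A simplified sketch, not a conformant resource: LOINC component codes
    follow HL7 genomics reporting conventions, but real profiles carry
    many more components and use coded (not free-text) values.
    """
    def concept(code, display):
        return {"coding": [{"system": "http://loinc.org",
                            "code": code, "display": display}]}

    return {
        "resourceType": "Observation",
        "status": "final",
        # 69548-6: Genetic variant assessment (variant present/absent)
        "code": concept("69548-6", "Genetic variant assessment"),
        "valueCodeableConcept": {"text": "Present"},
        "component": [
            {"code": concept("48018-6", "Gene studied [ID]"),
             "valueCodeableConcept": {"text": gene}},
            {"code": concept("48004-6", "DNA change (c.HGVS)"),
             "valueCodeableConcept": {"text": c_hgvs}},
            {"code": concept("53034-5", "Allelic state"),
             "valueCodeableConcept": {"text": zygosity}},
            {"code": concept("53037-8", "Clinical significance"),
             "valueCodeableConcept": {"text": classification}},
        ],
    }

# A known pathogenic BRCA2 variant, expressed as discrete components
# rather than a sentence buried in a PDF report.
obs = variant_observation("BRCA2", "c.5946delT", "heterozygous", "pathogenic")
print(json.dumps(obs["component"][0], indent=2))
```

Even this toy example shows why the gap is hard to close: one variant already spans gene, HGVS nomenclature, zygosity, and classification, each needing its own agreed-upon code system.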
An equally important part is the clinical test order, he says. “Even in places that do have electronic ordering systems, providing a critical piece of clinical information required for genomic testing is not consistent. For example, what is the provisional diagnosis? What prior genetic testing was performed for a patient?” Use of paper requisitions aggravates this situation, he notes, because the clinical information associated with a test order can’t be discretely represented.
EHRs often have to link data over time, Dr. Roy says. If a patient, for example, has a recurrence of a tumor after two years and a second round of molecular testing is ordered, “as performed right now, there’s a lack of standard or a mechanism as to how temporal genomic data is saved and presented to the provider in the EHR. So if a provider wanted to order molecular testing on the recurrent tumor, it would be great if the EHR could provide a pop-up saying, ‘Molecular test was performed earlier. Would you like to see the results?’”
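The order-entry check Dr. Roy describes could, in principle, be a simple query over prior results. The sketch below is hypothetical (function name, data shape, and five-year lookback are all assumptions): it returns a patient’s prior molecular results within a lookback window, newest first, which is the trigger an EHR could use for such a pop-up.

```python
from datetime import date

def prior_molecular_results(orders, patient_id, today, lookback_years=5):
    """Return a patient's prior molecular results within the lookback
    window, newest first. A hypothetical order-entry check: `orders` is
    assumed to be a list of dicts with patient_id, category, and a
    `resulted` date.
    """
    hits = [
        o for o in orders
        if o["patient_id"] == patient_id
        and o["category"] == "molecular"
        and (today - o["resulted"]).days <= lookback_years * 365
    ]
    return sorted(hits, key=lambda o: o["resulted"], reverse=True)

orders = [
    {"patient_id": "p1", "category": "molecular", "resulted": date(2020, 3, 1)},
    {"patient_id": "p1", "category": "chemistry", "resulted": date(2021, 6, 1)},
    {"patient_id": "p1", "category": "molecular", "resulted": date(2012, 1, 1)},
]
# Two years after the first tumor workup, only the in-window molecular
# result comes back -- the cue for "Molecular test was performed earlier.
# Would you like to see the results?"
prior = prior_molecular_results(orders, "p1", date(2022, 3, 1))
print(len(prior))  # prints 1
```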
Given the enormous quantity of data points that can be generated during genomic testing, test results can be difficult to interpret. “Part of the training that molecular professionals undergo is learning how to discriminate between a true variant and background noise, and appropriately interpreting variants and their clinical significance.” To reliably transmit and represent the interpretations associated with genomic data, “the current interoperability standards need more work,” Dr. Roy says.
[dropcap]F[/dropcap]or the AMP working group, a key point of controversy in the course of setting standards has been whether to include variants of undetermined significance in genomic reports. “Most people do not include benign variants in their reports because there are a lot of them, and they don’t have clinical actionability for patients,” Dr. Carter says. “Having said that, their incidence determines significance. It’s interesting because the smaller the panel is, the more likely it is that those variants of undetermined significance get included. When you have large 400-plus gene panels, or exomes, these variants may not appear on the report because you can have hundreds of them.”
But even though people have strong opinions, pro and con, about reporting variants of undetermined significance, “it can be a challenge if we’re inundating a provider with hundreds of variants where we literally don’t know what these variants mean for the patient,” she says.
“The early attempts to make genetic modules in EHRs tended to focus on individual variants. And in some cases I’ve seen that the pathologist’s or the geneticist’s interpretations are divorced from the variants themselves,” Dr. Carter says, “such that it’s easy for a physician to lose that context and make a mistake in diagnosing a patient.”
For example, cystic fibrosis is an autosomal recessive disorder, so the patient has to have pathogenic variants on both alleles as inherited from each parent. “The EHR modules may have computationally parsed out all variants. But say there is a delta F508 variant and it says ‘pathogenic’ next to ‘heterozygous.’ Some providers may not realize that it’s just a variant on one allele and not both alleles, meaning that the patient is really a carrier, not necessarily affected by the disease,” Dr. Carter says.
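The carrier-versus-affected pitfall Dr. Carter describes is, at bottom, a display-logic problem: zygosity and inheritance pattern have to be combined before a label is shown. The sketch below is a hypothetical, deliberately simplified rule for an autosomal recessive gene; among other things it ignores phase, since two heterozygous calls could in reality sit on the same allele.

```python
def recessive_display_label(variants):
    """Summarize pathogenic findings in one autosomal recessive gene.

    Hypothetical display logic; `variants` is a list of
    (classification, zygosity) tuples for a single gene. Phase is not
    modeled: two heterozygous calls might lie on the same allele, which
    only parental testing or phasing studies can resolve.
    """
    path = [zygosity for cls, zygosity in variants if cls == "pathogenic"]
    if "homozygous" in path or len(path) >= 2:
        # Pathogenic variants on (presumptively) both alleles.
        return "consistent with affected"
    if len(path) == 1:
        # One pathogenic allele only: the delta F508 pitfall.
        return "carrier (one allele); not necessarily affected"
    return "no pathogenic variant detected"

# A lone heterozygous pathogenic call should surface as carrier status
# rather than a bare "pathogenic" next to "heterozygous".
print(recessive_display_label([("pathogenic", "heterozygous")]))
```

The point of the sketch is not the rule itself but that the EHR display, not the reader, carries the burden of combining the discrete fields safely.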

Dr. Roy notes that reporting practices differ between variants that are pathogenic or clinically significant and variants of uncertain significance (VUS), depending on panel content and institutional practice. “Professional society guidelines mention that VUS should be included in clinical reports, which I believe is the practice in many clinical molecular laboratories.” Although laboratory approaches vary, he says, pathogenic and clinically significant variants are likely to be placed at the very top of the clinical report, while variants of uncertain significance are more likely to appear in the later sections of a report.
Some labs, he says, choose not to include variants of uncertain significance in their clinical report. “If they do include them, they don’t provide detailed information because there is a lot of it, and the concern is that the information might distract the clinician from the main findings.” Another category of variants is “benign or likely benign” variants—usually so labeled because they are common in the human population. Current professional society guidelines recommend not including them in the clinical report.
Conflicting opinions are often expressed on the scope of genomic data that should be stored in an EHR. “There are some people who will say they want to get all the raw sequence data from a sample stored in the EHR,” Dr. Carter says. “I don’t think that’s very practical if you have an exome with 100,000 benign variants.” Researchers may want to be able to access raw data, but she believes having all the raw data isn’t the best use of storage space. For example, “a 200-gene panel may be between 150 to 200 gigabytes per sample, so it’s a lot of data. If you shrink the data down to the variants of importance to the patient—meaning anything that’s not benign—then you’re talking about kilobytes, a much more manageable amount of data for patients.”
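The “shrink it down to the variants of importance” approach Dr. Carter describes can be pictured as a simple filter over classified calls before anything leaves the laboratory. A minimal sketch, with hypothetical names and an assumed ACMG-style classification string on each call:

```python
def ehr_payload(variants):
    """Keep only non-benign calls for transfer to the EHR; raw sequence
    data stays in the laboratory. Hypothetical sketch: each call is a
    dict carrying an ACMG-style classification string.
    """
    keep = {"pathogenic", "likely pathogenic", "uncertain significance"}
    return [v for v in variants if v["classification"] in keep]

calls = [
    {"gene": "CFTR", "classification": "pathogenic"},
    {"gene": "TTN", "classification": "benign"},
    {"gene": "MYH7", "classification": "uncertain significance"},
    {"gene": "BRCA2", "classification": "likely benign"},
]
# Of four calls, only the two non-benign ones cross into the EHR.
print(len(ehr_payload(calls)))  # prints 2
```

In real exome data the same filter would drop roughly 100,000 benign calls per sample, which is what turns gigabytes into kilobytes.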
While storage is becoming less costly, it still comes at a cost, Dr. Carter says. “And if you have a really active molecular laboratory and need to maintain backups of all that data, it can get problematic pretty quickly.”
Inclusion of quality control and quality assurance data and other parameters stirs controversy as well. “I think everyone agrees they should be recorded and discretely stored in the lab, and some people would say they want all the data showing up in the EHR,” she says. “But the issue becomes how much ‘build’ are you going to put into that? Are providers going to be able to adequately interpret it, when you get into conversations about, for example, minimal residual disease?”
“A patient with acute myeloid leukemia, for example, might have a PML-RARA translocation that we detect by NGS. They undergo therapy and it knocks down their disease pretty far, but these assays can pick up a very low level of remaining translocation. So even if a patient looks negative under the microscope, molecular evidence of the alteration may persist, and that can indicate that the patient may relapse or recur sooner than others.”
“A laboratory may set quality metrics to only report variants that are greater than five percent of the allele fraction,” Dr. Carter points out. “But when you start looking at a patient who has a history of an alteration and this alteration shows up in a subsequent sample at three percent,” a pathologist may believe, given the patient’s history, that the molecular evidence could be relevant. “Those are things that pathologists and geneticists may decide, and it could be a little confusing for some providers who are looking at it.” She also questions whether including QC and QA metrics for an entire run at the sample level is warranted. “I’m not sure how useful that is to providers,” given that other laboratory accreditation requirements set procedures for addressing QA or QC failures.
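The threshold-with-history scenario Dr. Carter raises can be sketched as a reporting rule with a history-aware override. All names and triage labels below are hypothetical; the point is that a below-threshold call routes to pathologist review rather than being silently filtered or auto-reported.

```python
def triage_variant(vaf, threshold=0.05, prior_alteration=False):
    """Apply a lab's allele-fraction reporting cutoff, with a
    history-aware override. Hypothetical sketch: `vaf` is the variant
    allele fraction as a 0-1 value; labels are illustrative only.
    """
    if vaf >= threshold:
        return "report"
    if prior_alteration:
        # Below threshold, but the patient has a documented history of
        # this alteration: flag for pathologist review, not auto-filter.
        return "review"
    return "filter"

# A 3% call fails the 5% cutoff in isolation, but the same call in a
# patient with a known prior alteration is flagged for review.
print(triage_variant(0.03))                         # prints filter
print(triage_variant(0.03, prior_alteration=True))  # prints review
```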
“There are some niche laboratory systems and there are EHRs that are trying to build genetic modules to handle this kind of data,” and the ones she has looked at are good attempts, Dr. Carter says. “But they’re not where they need to be, given the complexity of genetic data that we have now, in my opinion. It’s a very different place to be in where you are a provider receiving genetic data and not understanding all the complexities of the genomic data that is sort of under the hood.”
[dropcap]S[/dropcap]ome laboratories may wonder if they will be looking at a new generation of coding systems beyond SNOMED CT and LOINC and other standards when talking about how to represent genomic data. HL7 has a clinical genomics working group trying to define genomic data, Dr. Carter says. “But the challenge is that they decided to use LOINC as their only representation of that data. LOINC has limitations in dealing with laboratory complexity and interoperability. I’m not sure how we’re going to do the interoperability piece yet. We’re going to have to figure that out.”
[divider]
The tasks ahead
- Standardize the required data elements in electronic orders for genomic testing.
- Establish standard discrete data elements needed to optimize test utilization.
- Establish standards for transfer of hierarchical and genomic data.
- Develop standardized data structures for individual variants.
- Make display of genomic test results between laboratories sufficient yet usable.
- Standardize display of aggregated and longitudinal data.
- Establish consensus guidelines for requesting and providing reclassification of variants.
- Design how results should be displayed to patients.
- Standardize use of discrete genomic data for clinical decision support, especially drug-genome alerts.
- Integrate international standards for interoperability and data retrieval into EHRs.
- Standardize how genomic results are reported to outside organizations.
[divider]
Currently, just reporting SARS-CoV-2 results requires multiple LOINC codes, multiple SNOMED CT codes, and a Unique Device Identification code, she notes. “Those have to be put on every result and there are between seven and 10 different codes. When you talk about a single genetic variant, you’re talking about 30 to 50 data points on that one result. So you can envision how that gets maintenance-heavy from a coding perspective, very quickly.”
These challenges in how to manage the data, she says, explain why no one has solved this problem yet. “Part of AMP’s project is to engage with some of the vendors to discuss possible plans, and help them develop things in a way that makes sense and that can be reproduced in a database model. Most vendors are looking for standards. It makes it easier for them to build their tools,” Dr. Carter says.
This particular kind of interoperability is a difficult area, she says. “It’s one that many other organizations have been working on.” The large-scale review of the literature that the AMP working group plans will show there have been several efforts to standardize before this one. Dr. Carter cites as leading examples the eMERGE (Electronic Medical Records and Genomics) Network, a consortium under the National Institutes of Health; DIGITizE; the HL7 clinical genomics working group; and the Global Alliance for Genomics and Health.
The AMP working group hopes that a publication will come out of the literature review. “In addition,” she says, “the group is working on potentially joining with multiple other professional societies and vendors to discuss the best ways to represent variants, reclassification, and reinterpretation in an EHR, as well as potentially amalgamated displays across different tests or genetic data. They aspire to develop a consensus variant data standard as to what is the best and safest way to segregate a variant away from its overall report.” One approach she believes should be taken toward the masses of data is to consider keeping some data in the lab and not necessarily transferring it into an EHR where it could be slowing the system down.
What lies ahead for the working group, in addition to the fairly comprehensive literature review, are several other projects the group hopes will lead to consensus around variants, reclassification, and reinterpretation, Dr. Carter says. “We’ll be looking at existing parallel efforts by other organizations to try to establish a single consensus-derived, evidence-based variant data standard for interoperability that will be safe and usable for electronic health records and other purposes.”
Achieving that standard is a lofty goal in itself, she says. But to be successfully applied, the standard will need to be accompanied by an ongoing dialogue with multiple EHR vendors about how the standard, when agreed upon, should be implemented, and genomic medicine must take on the responsibility to ensure the goal is met, Dr. Carter says.
“EHR programmers are experts in information technology and database administration and construction and managing this humongous EHR for a lot of customers, but they’re not experts in genomic medicine,” she noted in a recent webinar on EHR interoperability. “If genomic experts don’t make sure they are available to answer questions, the programmers are going to make their best guess—and sometimes their best guess is not going to be what we need.”
Anne Paxton is a writer and attorney in Seattle.