November 2024—Lab data displays, IT demands that outrun resources, at-home test results, and HL7 are some of what came up on Sept. 20 when CAP TODAY publisher Bob McGonnagle spoke online with pathologists and industry executives about laboratory information systems.
“It’s fire and forget,” said Ulysses G. J. Balis, MD, of the University of Michigan, about the lack of feedback from the EHR that a clinician has seen and understood a complex result. “Loss to follow-up is a real possibility,” he said.
Their conversation follows, and CAP TODAY’s guide to laboratory information systems begins here.
I’ll start with an overview of topics covered in last year’s roundtable on laboratory information systems. One is a need to integrate data between the LIS and EHR. We are still living in a world where dollars and staff are short and demands are diverse. Some clinicians want something concise; others want something complex and don’t want anything left out, which makes it difficult to simplify reporting. Ulysses Balis, can you comment on these dilemmas that everyone who does laboratory informatics faces?

Ulysses G. J. Balis, MD, associate chief medical information officer; A. James French professor of pathology informatics; director, Division of Pathology Informatics; and director, computational pathology laboratory section, Department of Pathology, Michigan Medicine, University of Michigan: The cumulative demands on the system, the lab, and the enterprise generally exceed available resources. We’ve shifted from generating simple lab results to the need for increasingly complex panels, molecular workflows, and derivative data, combined with downstream uses of that data, such as AI, where you need a specific format. Michigan is an exception to the rule in that we have retained ample technical resources to build such compound panels and interfaces. You have a growing queue of asks to the lab and the informatics group supporting the lab for data products, not just for the raw results but also the derivative data products that one generates with such results. These requests can take a long time to be realized or can’t be realized at all.
I see the list of requests growing faster than they’re getting resolved. I don’t know how we’ll solve this other than perhaps having standards for implementation of interfaces. We have HL7, and we know HL7 is almost a standard, but each place implements it slightly differently. Australians and New Zealanders have locked it down; I wish we could do that. The need and opportunity here is having one true interface implementation standard so we don’t have to do a complex dance of unit, load, and integration testing for every new lab or lab panel result requested, recognizing that the overhead needed to do this task has already become unsustainable.
Alexis Carter, can you comment on the dilemmas labs are facing?
Alexis B. Carter, MD, physician informaticist, pathology and laboratory medicine, Children’s Healthcare of Atlanta: I agree that resources are a constant challenge, and everything Ul said about HL7 is true.
As a molecular pathologist, I deal with molecular data a lot, and we have big challenges. Some syntactic standards are not complete. HL7 has two different genomic standards, but both are suboptimal for our needs. Clinicians ask for variant data as discrete data, but they want to use it in a way that divorces it from the interpretation of the pathologist or geneticist for germline testing. This is problematic because variants don’t exist in a vacuum and can mean different things depending on the context, the tissue they come from, and whether the patient has a somatic overgrowth syndrome.
Suren Avunjian, you offer and implement customized reporting in your system. Are you using that utility to help solve some of the problems mentioned here?
Suren Avunjian, co-founder and CEO, LigoLab Information Systems: Yes. We’re addressing these challenges through the LigoLab platform, which offers configurable reporting capabilities. It allows us to break down rich text into ASCII text and map it to HL7 standards. If the EHRs on the receiving end can accept discrete data elements—which is relatively uncommon, especially in pathology—we can provide these elements accordingly. However, many EHRs still prefer to receive data as a single blob of information. For those that can handle discrete elements, such as ModMed, we’ve built this functionality directly into our LIS to facilitate seamless integration.
We’re seeing a growing trend toward cloud-based LISs and EHRs. This shift enables us to connect once through a single channel, and from there the system routes data to the respective customers who use it. This simplifies the process and provides customizable reporting options to clinicians. We’re leveraging the tools and technologies at our disposal to bridge the gap between LIS and EHR systems. Even where budgets are tight and staff resources are limited, we’re finding ways to simplify reporting and data integration.

Joe Nollar, tell us about the diverse needs for displaying lab data in the EHR.
Joe Nollar, associate vice president of product development, XiFin: The lack of a standard is the issue. HL7 is the nonstandard standard. Costs continue to rise for integrations and integration efforts, and the demand for integrations continues to grow. A true standard would help lower costs, increase the value of the data, and improve the quality of the data across platforms.
The ability to provide the discrete data as well as a Base64-encoded PDF should be part of that standard for every interface so that the data is in every system and EHR, as opposed to sending a PDF or a blob of text or more simplified data sets. The sacrifice for us as vendors is we don’t make as much revenue on interfacing, but that’s okay. We’ll do more integrations more successfully and it’s a benefit to patient care and the industry overall.
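The pairing Nollar describes—discrete values plus a Base64-encoded PDF in the same interface—can be sketched in HL7 v2 terms as a series of OBX segments, with the final OBX using the ED (encapsulated data) type to carry the report. This is a minimal sketch, not a mandated layout; as the panelists note, real implementations vary site by site, and the field positions here are illustrative.

```python
import base64

def build_obx_segments(discrete_results, pdf_bytes):
    """Sketch: pair discrete OBX segments with a Base64-encoded PDF OBX.

    `discrete_results` is a list of (code, value, units) tuples. The field
    layout follows a common HL7 v2 OBX pattern, but each interface defines
    its own specifics--the roundtable's point exactly.
    """
    segments = []
    for i, (code, value, units) in enumerate(discrete_results, start=1):
        # OBX|set-id|NM|observation-id||value|units  (NM = numeric)
        segments.append(f"OBX|{i}|NM|{code}||{value}|{units}")
    # Final OBX uses the ED (encapsulated data) type to carry the report PDF,
    # so the same message delivers both discrete data and the rendered report.
    b64 = base64.b64encode(pdf_bytes).decode("ascii")
    segments.append(
        f"OBX|{len(discrete_results) + 1}|ED|REPORT||^application^pdf^Base64^{b64}"
    )
    return "\r".join(segments)
```

A receiving system that can file discrete elements uses the NM segments; one that cannot still gets a renderable PDF from the ED segment.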
Promise Okeke, how do you view standards in the context of the demands we’re outlining?
Promise Okeke, MBA, CEO, NovoPath: While there are HL7 standards, and we evangelize certain best practices, we are still far from a time when every physician wants to see the results the same way. Pathologists do things differently even within the same practice, so we spend more time thinking about how to help pathologists achieve their goals if we assume there is no set-in-stone practice everyone will stick to. In our product, we’ve enabled the client to serve themselves. We give the pathologist the capability in the software to choose the content and how they want to show the data to the clinician instead of having to contact us directly. If our customers are empowered to structure their data themselves, we believe it will lead to faster care delivery.

Chris Malek, what is your experience with customers and potential customers as they face the dilemmas we’re discussing?
Chris Malek, MLS(ASCP), associate director of product management, Orchard Software: All labs have slightly different workflows, even if they’re achieving the same outcome. From the LIS’s perspective, it’s important to be able to parse results into pieces of data that are as discrete as possible so they can be properly coded and passed along in a flexible manner to any upstream system, whether that’s a rich text PDF or discrete in HL7 or even nondiscrete in HL7. It’s important to be flexible so that as those standards converge, as is hoped, they can be supported.
We’re hearing a lot about the use of APIs [application programming interfaces]. A challenge with that is it’s usually specific to a single vendor; it’s not as universal.
There’s yet another intermediary between clinicians and the EHR. Ancillary staff, nurse practitioners, clerks, and others are increasingly viewing clinical results in the EHR, and some clinicians are not viewing the display of either the LIS or EHR. Nick Trentadue, what’s your experience with getting more cooks in the kitchen?
Nick Trentadue, VP, laboratory and diagnostics, Epic: At Epic we always try to support standards, and we continue to support HL7. From the EHR and lab side, we want to be agnostic to support any vendor a customer wants to use or to receive data from any other system. Our labs report that on average an integration costs $50,000 to $100,000 and takes about six months. So when you talk about onboarding new customers and clients, it’s a huge lift.
I agree about standards. We have Beaker customers who have individual case types by pathologist, so there’s a lot of variation. We’ve decided if standards aren’t going to get there, we’ll do it ourselves. We launched our Aura platform a couple of years ago. It has helped Epic customers and labs standardize those connections, sending discrete variants, VCF [variant call format] files, and even PDFs to try to make it easier to see the right things. Until we get to a spot like the Netherlands or Denmark, where we have codified, succinct, standardized reports, all vendors will have to continue to take the same strategy to be as open as possible to take in that data.

Alexis Carter, what are your thoughts as you listen to the discussion so far?
Dr. Carter (Children’s Healthcare of Atlanta): At Children’s, we do a lot of usability testing before we bring things up and live, and we’ve caught important things. Part of the issue is how our laboratory data is displayed in EHRs and getting parsed, potentially moved around, and put on different dashboards or in different amalgamations or integrated reports. Involving pathologists and other professional laboratorians in these discussions is critical because I’ve seen situations where clinicians wanted things displayed a certain way and it made the data look different than it was. We have to be careful about that.
It’s the same as a laboratory information system vendor trying to generate a report. If you’re generating a lot of discrete data in your lab system and it’s being used to generate a PDF report or a PDF report on top of discrete data in HL7 in the background, you want to make sure the data is represented accurately and understood by clinicians, particularly as the data gets more complicated. Artificial intelligence is going to put pressure on our existing systems in ways many of us can’t yet imagine.
Medicine is a village. The way I look at something may be different than how my clinician colleagues look at it. It’s important we start building in a lot of checking with different stakeholders about what things look like and how best to interpret them, and caution EHR vendors that you could create safety issues if you take this data as discrete data and change it in ways you think may be opportune.
Can you estimate the percentage of clinicians at Children’s who adequately understand their laboratory results as presented to them in the EHR or LIS report?
Dr. Carter (Children’s Healthcare of Atlanta): They understand general, regular clinical laboratory results very well. Often they’ll be the first to tell you when a result differs significantly from the patient’s prior results, because although we have delta checks running, it doesn’t go beyond 30 days. Specimen misidentification is still an issue at the point of collection for places that don’t use it.
If you’re talking about more complicated data, it’s tough. Multiple papers in the literature show that some family practice or general pediatrician colleagues, for example, struggle with what to do with the information in a complicated genetics report. It’s just not part of their general wheelhouse. As medicine gets more complicated, and it is, we need to present data in ways that make it easy for them to understand and hard for them to come up with an inaccurate conclusion. That’s not easy to do with complex data.
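The delta check Dr. Carter mentions—comparing a result against the patient’s prior value within a 30-day window—can be sketched simply. This is a minimal illustration, assuming a percent-change threshold; the threshold value and the function name are hypothetical, and real delta-check rules are analyte-specific.

```python
from datetime import datetime, timedelta

def delta_check(current_value, prior_value, prior_time, now,
                max_pct_change=50.0, lookback_days=30):
    """Flag a result whose change from the prior value exceeds a threshold.

    Returns True when the result should be flagged for review. Mirrors the
    30-day limitation described above: priors older than the lookback
    window are ignored entirely, so the check cannot fire on them.
    """
    if prior_value is None or now - prior_time > timedelta(days=lookback_days):
        return False  # no usable prior within the window
    if prior_value == 0:
        return current_value != 0
    pct_change = abs(current_value - prior_value) / abs(prior_value) * 100
    return pct_change > max_pct_change
```

The hard limit on the lookback window is what Dr. Carter points to: a significant change against a 45-day-old prior goes unflagged, and the clinician is often the one who notices.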
Ul Balis, are we putting an unfair burden on informaticians, laboratories and systems, and vendors to solve a problem that is almost in human nature today, given the diversity of clinicians and their knowledge?
Dr. Balis (Michigan Medicine): Yes, but it points to a more fundamental underlying problem. Alexis touched on this in that it takes a village. By virtue of generating more and more complex data, we’re seeing interdependencies where the clinical lab needs to work in partnership with the providers who are ordering these tests to close the loop on several things. First is whether or not the information was delivered. It used to be the case that when you fired it off in your interface, you were done. The real standard is: Has it been reviewed by the clinician and, second, has it been understood? This gets compounded by the reality that we’re now generating derivative data products—AI solutions that take basic atomic data and make extrapolations about prognosis and diagnosis from the primary data.
This is typified by a problem we’re facing in Michigan—dependencies. We generate primary data that go into more and more complex subsequent AI products that use primary data to make a derived extrapolation. If we change the primary format of that data, it breaks downstream rules, best-practice advisories, if you’re talking in Epic parlance, or other types of auto-triggering or autoverification processes in the EHR. We’ve realized there’s a missing element that is an opportunity for improvement—a global enterprise dependency map—so that we know upstream when we change something in the lab in terms of downstream formatting or reporting. Last week, as an example, we changed the testosterone units from deciliters to milliliters or vice versa, and it broke a bunch of downstream rules, not because the result number changed but because the format of the reference interval, and specifically the units, changed. We don’t have a dependency map. There’s an opportunity for closer work between the providers of these primary lab results, the lab sections, and hospital-based IT groups that are stewards of the EHR so we have a global dependency map.
By working as a village, there are opportunities to make the continuity of data interpretation smoother and better, and in tandem create closed-loop systems that tell the LIS when the EHR has closed the loop with the clinician. At present, we don’t get feedback from the EHR that a clinician has viewed and understood a complex result. It’s fire and forget. The next level, going from a transactional model to a relational model, is understanding when clinicians have consumed the data and have acted upon it accordingly. Without that enhancement, loss to follow-up is a real possibility.
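The global dependency map Dr. Balis describes could be modeled as a directed mapping from lab result attributes to the downstream consumers that read them—rules, best-practice advisories, autoverification logic. A minimal sketch, with all names hypothetical:

```python
from collections import defaultdict

class DependencyMap:
    """Sketch of an enterprise dependency map: records which downstream
    consumers (rules, advisories, autoverification processes) read a given
    lab result attribute, so a planned change can be assessed first."""

    def __init__(self):
        self._consumers = defaultdict(set)

    def register(self, result_attribute, consumer):
        # e.g. register("TESTOSTERONE.units", "BPA-endocrine-screen")
        self._consumers[result_attribute].add(consumer)

    def impact_of_change(self, result_attribute):
        """Everything that must be retested before the change goes live."""
        return sorted(self._consumers[result_attribute])
```

In the testosterone-units example, querying `impact_of_change("TESTOSTERONE.units")` before the change would have surfaced the downstream rules that broke when the reference-interval units changed.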

Promise Okeke, what do people do in less complex, less rich environments where they don’t have expertise in informatics or in the EHR they’re using, as compared with the University of Michigan, for example?
Promise Okeke (NovoPath): A dependency map could help streamline data flow across every laboratory. In situations where the lab might be less complex than Michigan’s, the burden falls on LIS vendors to help laboratories troubleshoot and understand the dependency maps and to set them up for success. In a perfect world, I agree you’d have this relational model that is bidirectional, where a change in the application layer also alerts the source layer and makes the necessary changes. AI will play a role in reducing complexity and automating this process, which is why we are investing heavily in creating a platform for our clients to automate data flow and governance. We have a data engine where clients can create rules to achieve certain business outcomes in the application. For labs that aren’t as resource rich as the university hospitals or health systems, we play the role of the LIS and a system administrator. Since they don’t have a big technology team, they lean on us to create the data map, automate certain data flows, and ensure proper data integrity and governance.
Suren Avunjian, there’s a job to be done that vendors didn’t anticipate having to do when they got into this business. Is that right?
Suren Avunjian (LigoLab): Absolutely. We’ve come to embrace that reality in the LIS and ERP-like [enterprise resource planning] software industry. Service has become a significant part of what we do because our platform is deep in functionality and mission critical to the organizations we serve. When we implement our systems for customers, we don’t expect them to have sophisticated informaticians as their LIS managers. It’s more important that they know their operation inside out and feel comfortable explaining their vision on how they want to improve their processes. If someone is adept at using tools like Microsoft Office and understands their lab’s workflow well, they can be an excellent LIS manager. We’ve designed our software to be as intuitive as possible to make this a reality. We also recognize there’s a substantial learning curve. People need to consume information in various ways, through comprehensive, searchable manuals linked in our platform directly to relevant help pages, video trainings, role-based guides, and so on. Training and education are big components of our service.
When a lab wants to introduce something new, like rules, automation, configurations, or additional data fields, it’s crucial not to create these elements haphazardly. We’ve developed a layer we call a configuration descriptor, which allows you to describe and link your configurations explicitly, saying, for instance, “I am configuring a new workflow,” and all the configurations can be tied to this entity. The system helps you relate all the rules and data fields involved. If you need to make a change, you can see all the dependencies in a centralized place.
By focusing on high-level maintenance from the outset, we help ensure that even after 10 years of operation, the back end doesn’t become a tangled mess.
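The configuration descriptor Avunjian describes could be sketched as a registry entity that ties rules and data fields to a named configuration, so dependencies stay visible in one place. All names here are hypothetical illustrations of the idea, not LigoLab’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationDescriptor:
    """Sketch of a descriptor that groups related configuration items
    (rules, data fields) under one named entity for impact review."""
    name: str                          # e.g. "new dermpath workflow"
    rules: set = field(default_factory=set)
    data_fields: set = field(default_factory=set)

    def link_rule(self, rule_id):
        self.rules.add(rule_id)

    def link_field(self, field_id):
        self.data_fields.add(field_id)

    def dependencies(self):
        """Everything tied to this configuration, in one place."""
        return sorted(self.rules | self.data_fields)
```

Before changing anything linked to the descriptor, an administrator can list `dependencies()` and review each item, which is what keeps the back end from becoming a tangled mess over a decade of accretion.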

Nick Trentadue, you’re installing, you’re training, you’ve seen a lot of systems—are these themes and topics familiar to you?
Nick Trentadue (Epic): Yes. We strive from day one to look at the integrated workflow. At Children’s Healthcare of Atlanta, for example, the lab and clinicians are on the same system. When we look for dependency maps, we have utilities and tools within Epic that will say, “Here’s everywhere you’re using this serum sodium value.” If you don’t have the same standards, your dependencies are different. At Michigan, where we’re interfacing lab into Epic, your data definitions are only as good as what is defined in that specification. So when something’s coming into Epic and you change the unit, we don’t always know you did that unless you create a new data element and update the dependencies. Those conversations are interrelated, and we want to make sure we’re looking at the whole picture from the whole patient experience. For ancillary staff looking at those results, how can we make it as easy as possible for it to go where it needs to go?
The other important point Ul Balis brought up is the communication. This is integrated for our Epic groups. The lab can look at a specimen and see every person who’s reviewed or seen the result. That’s another good callout for when you don’t have the luxury of being in an integrated system. How can there be standards such that whether you’re in Orchard or LigoLab or using Epic or a different EHR, we can still have the same transparency? Because that’s important to make sure you’re not going to miss a cancer diagnosis.
Joe Nollar, what are some of the major questions new customers ask you? How have those questions changed in the past three to five years?
Joe Nollar (XiFin): I want to make a point on a comment Alexis made about the complexity and volume of data coming in, especially with the new molecular and AI tests being released. We’ve always had case summary reporting, especially for complex hematology cases where you have FISH, flow, cytogenetics, and molecular results, et cetera, that need to be combined into a single report with a comprehensive assessment for the oncologist of what all the diagnostics mean. We’re seeing more demand for case summary reporting to help make sense of complex data and guide practitioners in their treatments. That’s welcome to see in our business.
The questions new customers ask vary from lab to lab, because often they’re going from a system they’ve been on for 10-plus years and are making the leap from an on-premises system to a cloud-based solution. That can be perceived as a big leap for them, even though they know the benefits. We continually reinforce a cloud-based system’s ability to offer scalability, flexibility, and better security and cost efficiency than an on-prem system. It’s how we can improve their connectivity and user experience. Cloud-based solutions allow you to address unique concerns from lab to lab. It’s one of the things in the sales process that labs like to hear. Then it’s a question of executing and managing that change process.
We know there’s a good number of legacy systems in our field, whether LIS or AP or others. Chris Malek, do people who are leaving the on-premises world for the cloud have some anxiety as they speak to you?
Chris Malek (Orchard): Many clients have been excited about going to the cloud because it’s more secure and reliable and a cost savings on the IT side. We’re seeing a huge growth in cloud deployments. It’s been great for speeding up the implementation as well. We can develop product-specific routines and monitoring to make sure the system runs as efficiently as possible.
Clients are also asking about systems that support growth—an LIS that has many modules they can expand to in the future. If I’m a point-of-care customer, I might add clinical testing to my system in the future and want to know that I can grow with a single system.
Alexis Carter, what particular complexity does point-of-care testing demand of your IT infrastructure?
Dr. Carter (Children’s Healthcare of Atlanta): The biggest complicating IT issue is that point-of-care tests order themselves and can be performed at any time. Billing can sometimes be complicated as to whether you’re going to be able to bill for it at all, especially for inpatients. And also where the devices are, maintaining the devices, being able to do break/fix. There aren’t many IT issues because most of the point-of-care tests we run are fairly simple.
I’m hearing more requests to import results into the EHR for patients who are monitoring themselves at home. It can be exceedingly helpful for clinicians to see this because they know the patient and can generally get a better idea about how the patient is managing their disease at home between the same tests performed in a CLIA-licensed laboratory. How do we effectively label lab results that were not generated in a CLIA-licensed laboratory such that it is clear they were generated by the patient using an at-home device? There could be hundreds of different devices for the same thing—imagine glucose meters you get at a pharmacy. How do we label those in a way that’s clear to the clinicians where the result came from?
Ul Balis, not only are we talking about patient self-testing, but the diagnostics industry is in love with waived testing. And it’s likely we’ll see a tsunami of waived testing approvals. How will you deal with those at Michigan?
Dr. Balis (Michigan Medicine): I suspect it will involve a use of flags on the results so we can categorize waived testing versus low- and high-complexity testing. The elephant in the room is what we’re going to do with the recent FDA ruling for LDTs, which compels an entirely new layer of documentation in terms of test development provenance. It’s being contested, but for the time being it remains active in the Code of Federal Regulations. There will probably be a layer of increased granularity in the HL7 message, or at least in a Z segment, or some other type of mechanism for communicating the class of test so it can be fully represented in the EHR. More importantly, for longitudinal reporting you wouldn’t necessarily want to mix one class of result, which could be a CLIA-validated result—for example, long-term A1c tracking—versus unreliable or less reliable home testing results. Labs and LIS implementation teams are beginning to wrestle with this. How do you create the needed granularity so that a clinician can know the provenance and trustworthiness of an individual result?
We’re talking about a caste system for lab tests in some sense of the word, right?
Dr. Balis (Michigan Medicine): Yes, we’re looking at a caste system and provenance. A CLIA-certified lab carries with it the imprimatur that the result is trustworthy. When you have no idea about the level of rigor that comes with self-testing, a concern for trust is created. All through time the lab has assiduously followed the mantra as espoused by the FDA: safe and effective. If those underpinnings as our main edifice of quality are eroded, that’s a disservice. When you throw AI tests into the mix, additional questions arise: Who developed them? What’s the level of validation? Do we have the same need for flagging them? Are these waived-complexity AI tests from the outside, which were consumed by the EHR simply because the patient put them in their portal? Or are these FDA-grade analytics, AI and computational pipelines, and machine-learning–based tests? We’re going to need a way to represent the level of quality and trust we have.
For example, we’ve had isolated requests for patients to upload their 23andMe data into the portal so they’re available as a convenience for genetic counselors and the team carrying out genetic testing. Some outside sources of data have a lower level of quality than somatic and germline testing as performed by a CLIA-certified lab. Yet we’re probably going to arrive at a point where the EHR is a mix of self-generated tests on the part of patients from the direct testing industry, blended with results from CLIA labs. We’re going to need a way to co-locate both data sets and keep it clear which are high fidelity and which are potentially suspect.
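One mechanism Dr. Balis mentions for carrying test class and provenance is a custom HL7 Z segment. A minimal sketch follows; the segment name `ZPV`, the field layout, and the class codes are all assumptions, since Z segments are by definition site-defined rather than standardized.

```python
# Test-class codes are illustrative, not a published standard.
TEST_CLASSES = {
    "HC": "CLIA high complexity",
    "MC": "CLIA moderate complexity",
    "WV": "CLIA waived",
    "PT": "patient-generated (home device)",
}

def build_zpv_segment(test_class, performing_site):
    """Sketch: a site-defined Z segment carrying result provenance so the
    EHR can keep CLIA-grade results distinct from home-testing results in
    longitudinal views (e.g. long-term A1c tracking)."""
    if test_class not in TEST_CLASSES:
        raise ValueError(f"unknown test class: {test_class}")
    return f"ZPV|1|{test_class}|{TEST_CLASSES[test_class]}|{performing_site}"
```

With provenance carried on every result, a longitudinal display can co-locate both data sets while keeping it clear which results are high fidelity and which are potentially suspect.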
Nick Trentadue, it sounds like we’re going to need the equivalent of a Rosetta stone within our EHR systems. An experienced clinician working within his or her domain of expertise does this with some ease, but for many other people it would not be easy to filter and tier these things.
Nick Trentadue (Epic): I agree. As we’ve started to embed large language models and AI into the clinical workflows in the EHR, there are clear visualizations—what was derived by a doctor or a nurse versus what was surfaced to the clinician by AI. People view things differently and there needs to be documentation in terms of what this was.

Suren Avunjian, what’s your view on getting lab results from different sources—waived tests, patient self-testing—as we see it coming down?
Suren Avunjian (LigoLab): To effectively handle this influx of diverse data, we’ve adopted a multifaceted approach. Capturing the methodology behind each test is crucial. By documenting the techniques, instruments, and procedures used, we provide context that helps clinicians interpret the results accurately. This includes noting whether a test was performed in a certified lab, at a clinic using point-of-care devices, or by a patient using home testing kits. We implement a robust flagging system that assigns levels of acceptance or validation to the results based on their source and reliability. For example, a result from a high-complexity laboratory might be automatically accepted while one from a patient self-test might require additional verification or be flagged for clinician review.
Building individual integrations for each new device or testing platform is neither efficient nor scalable. Aggregators can collect data from multiple sources—point-of-care devices, home testing kits, wearables, or lab information systems—and consolidate it into a single, standardized format. By integrating with these aggregators, we, as vendors, don’t have to develop custom interfaces for each data source. Instead, we focus on a single integration point that provides access to a wide array of data. This aggregated information can be seamlessly integrated into tools that are accessible directly from the EHR or LIS. Clinicians can launch these tools within their workflows, allowing them to view and interpret the consolidated data without needing to navigate multiple systems.
Ul Balis and Alexis Carter, are your jobs easier or more difficult than they were five years ago?
Dr. Balis (Michigan Medicine): It’s richer, and it’s a good thing. The overall environment is becoming more complex because we’re operating at a much higher level. The good news is that the tools available to us to operate in that more complex environment are also maturing.
What I do at work is just as stimulating, if not more. I’ve managed through 35 years in this field not to become cynical. It’s a hard but rewarding job. What we do as pathologists and pathology informaticists evolves. To say AI will replace us—that’s silly. However, what we do in our day-to-day activities will change because the practice of pathology and lab medicine itself changes.
Dr. Carter (Children’s Healthcare of Atlanta): The job is in some ways more complicated. It’s a lot easier to get your data with the tools. I’m excited about the possibility of AI being able to do a level of quality control and quality assurance on the information in the record. I’ve heard from colleagues and seen for myself at some organizations how someone puts one wrong piece of data in a note and it gets copied forward 100 times into other notes until people don’t realize it’s not true. There’s a lot of possibility with AI in detecting these anomalies and intervening.
I’m worried because there’s a push for interoperability but without adequate standards for it. HL7 is a syntactic interoperability standard. It’s getting a piece of data moved from point A to point B into the right field. But there’s semantic interoperability, which means that the clinician understood it the way it was intended to be understood when it left the lab system and entered the EHR. There are prominent standards, for example, that the federal government is pushing. There have been several papers, including one from colleagues through the CAP, that showed that some existing standards are not to be used by themselves to chart a laboratory result into a result spreadsheet [Luu HS, et al. JAMIA Open. 2024;7(2):ooae032]. There’s misunderstanding about that. I’m concerned about any move forward that requires us to use standards that aren’t safe for that purpose.