
Autoverification: lessons drawn from a core lab

The influence of hemolysis on troponin T was the exact opposite but just as impressive. Hemolysis causes troponin T measurements to be falsely decreased, and under its previous H-index thresholds the lab was seeing false decreases. The lab derived new thresholds that allow a slightly higher degree of hemolysis when results are low. “And then a fairly stringent H-index limit around the decision points where the reference interval lies,” she said, “and where we expect to see fairly sensitive changing patterns in early MI.”
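
Rules like these are typically expressed as simple result-dependent limits in the middleware. The sketch below is a minimal illustration of that logic; the cutoff and H-index values are hypothetical placeholders, not the lab’s actual thresholds.

```python
# Minimal sketch of a result-dependent hemolysis (H-index) rule for troponin T.
# The cutoff and H-index limits below are hypothetical placeholders, not the
# lab's actual values.

TROPONIN_T_DECISION_POINT = 19    # ng/L; hypothetical 99th-percentile cutoff
H_INDEX_LIMIT_LOW_RESULT = 100    # slightly more hemolysis tolerated when the result is clearly low
H_INDEX_LIMIT_NEAR_CUTOFF = 30    # stringent limit around the decision point

def troponin_t_autoverifies(result_ng_per_l: float, h_index: float) -> bool:
    """Return True if the result can be released without manual review."""
    if result_ng_per_l < TROPONIN_T_DECISION_POINT:
        # Result is clearly low: a slightly higher H-index is acceptable.
        return h_index <= H_INDEX_LIMIT_LOW_RESULT
    # Result is at or above the decision point: apply the stringent limit.
    return h_index <= H_INDEX_LIMIT_NEAR_CUTOFF
```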

What was the impact of these changes? For AST, rejection/re-collection rates dropped from 8.1 percent to 1.5 percent (an 82 percent reduction). That eliminated 708 specimen re-collections over roughly one year. For direct bilirubin, the lab calculated that rejection/re-collection rates dropped from 7 percent to 2.7 percent—a 62 percent reduction—with the new H-index threshold, which eliminated re-collection of 306 specimens.
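
The reductions are straightforward before-and-after arithmetic; a quick check, using the rounded rates quoted above (small differences from the published percentages presumably reflect rounding of the underlying rates):

```python
# Percent reduction = (old rate - new rate) / old rate * 100.
# Computed from the rounded rates quoted above, so the results differ slightly
# from the published 82 and 62 percent figures.
def percent_reduction(old_rate: float, new_rate: float) -> float:
    return (old_rate - new_rate) / old_rate * 100

print(f"AST: {percent_reduction(8.1, 1.5):.0f}%")               # ~81%
print(f"Direct bilirubin: {percent_reduction(7.0, 2.7):.0f}%")  # ~61%
```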

Autoverification testing strategies, like traveling with teenagers, have their low points as well as their rewards.

A good place to start, Dr. Block told her audience, is with CAP accreditation program checklist requirement GEN.43875, which addresses validation of the autoverification process initially and whenever there’s a change to the system that could affect the autoverification logic.

“So it’s important to have in mind what the middleware support model is for the laboratory or labs that are using this system,” she said. “We usually think in terms of IT, which supports the application from a technical perspective, versus the lab expert [who] not only uses the system but can then help assist in testing and verifying that it’s functioning properly.”

Beware of gray areas, she said. “In our case, we have a lab information system technical specialist [who] helps with configuration.” The specialist also “serves as an arbiter between IT and the lab to help support configuration build and communicate other change requests.”

Mayo Clinic’s central clinical lab “has a long history of using middleware software,” she said. A medical technologist built the first version in about 2000, and it evolved as instruments changed and automation software became more commonplace. The lab recently modified its current middleware rule build to accommodate a new automation and instrument upgrade; Dr. Block and colleagues rely on Infinity to help support the automation, and this added an extra layer of complexity to the workflow. Moreover, since a multiphase project was needed to replace instruments and automation in existing space, the project had many moving parts and interim states of partial automation. “We also layered on top of that a personnel reorganization and this other thing called COVID-19,” she continued, “which meant there was much less on-site support and subsequent staff turnover. One or all of these challenges may have distracted us from making a robust testing plan.”

Nevertheless, she reported, “We did it—we did well, considering all these obstacles. But eventually a very savvy tech in the laboratory noted that a test that should hold when hemolysis is present didn’t hold as expected. After some forensic investigation, ultimately we discovered the cause was a different hemolysis message sent from a new instrument that wasn’t updated within the middleware to hold.” The lab was relying on existing rules that under normal circumstances would work. “However, the new instrument didn’t utilize the same message, so it went unrecognized for three tests on the test menu. Ultimately we did perform root-cause analysis to determine what we could do to prevent this moving forward.”
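
The failure mode she describes is easy to picture when middleware hold rules are keyed to the literal flag strings an instrument sends. A hypothetical sketch (the flag names and rule structure here are assumptions, not the lab’s configuration):

```python
# Hypothetical sketch: a hold rule keyed to the exact hemolysis flags the old
# instruments send. A new instrument that reports hemolysis with a different
# flag goes unrecognized, so the result autoverifies instead of holding.

KNOWN_HEMOLYSIS_FLAGS = {"HEM", "H_INDEX_HIGH"}   # flags the existing rules were built against

def should_hold(instrument_flags: set[str]) -> bool:
    """Hold the result for review if any recognized hemolysis flag is present."""
    return bool(instrument_flags & KNOWN_HEMOLYSIS_FLAGS)

print(should_hold({"HEM"}))           # True  -- old instrument, held as expected
print(should_hold({"HEMOLYSIS_3+"}))  # False -- new instrument's flag goes unrecognized
```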

Having a testing scheme is crucial, said Dr. Block, who noted there are two basic ways that middleware logic gets tested.

One, the so-called dry-testing approach, involves a simulated environment that lets the lab test virtually all scenarios in a fairly robust way, she said. This approach can be fast and comprehensive, and it saves on the cost of reagents and of testing specimens. On the flip side, this method generates massive amounts of files and paperwork. “And you’re blind to the manual inputs that you’re assuming are happening in these various scenarios,” Dr. Block said.
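
A dry test of that kind might look like the sketch below: simulated scenarios run through the rule logic and checked against expected outcomes. The rule, flags, and scenarios are illustrative assumptions, not an actual test plan.

```python
# Minimal dry-test harness: run simulated instrument messages through the
# autoverification logic and compare against expected outcomes.
# The rule, flags, and scenarios are illustrative only.

KNOWN_HEMOLYSIS_FLAGS = {"HEM", "HEMOLYSIS_3+"}   # now includes the new instrument's flag

def should_hold(instrument_flags: set[str]) -> bool:
    return bool(instrument_flags & KNOWN_HEMOLYSIS_FLAGS)

scenarios = [
    # (description,                     flags,             expected hold?)
    ("normal specimen",                 set(),             False),
    ("hemolysis flag, old instrument",  {"HEM"},           True),
    ("hemolysis flag, new instrument",  {"HEMOLYSIS_3+"},  True),
]

for description, flags, expected in scenarios:
    actual = should_hold(flags)
    print(f"{'PASS' if actual == expected else 'FAIL'}: {description}")
```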

So-called wet testing, on the other hand, is a real-world scenario that involves ordering a test on a test patient, then following the flow of information through the whole system to check the behavior and confirm that the expected outcome has indeed occurred. The instrument flags and nuances are visible with this approach (and thus would have helped the central clinical lab identify its problem, she said), and it “provides a high amount of confidence that you’ve tested that algorithm well.”

But this approach has its own disadvantages. “You can’t test every single scenario, and some can’t even be replicated,” Dr. Block said. “And the cost of reagent and specimens is going to add up tremendously. It could actually be prohibitive.”

In Mayo Clinic’s case, Dr. Block said, “We decided we were missing a very explicit SOP,” one that the technical specialists who support the middleware, as well as the testing lab, IT, and other stakeholders, could follow to produce a testing plan. The plan needed to describe, at a high level, what types of changes might need to be implemented in the middleware and whether it would be amenable to simulation testing, wet testing, or both.

“That was the first step,” Dr. Block said, one that allowed all stakeholders to define their roles and responsibilities more clearly. With this plan, “We usually use the analytical SOP as a source of truth, and then that would be what we use during our downtime scenarios.” The testing is done by the laboratory technical specialist, who signs off along with the medical director. “So the pearl is to make a middleware support plan that identifies the who, what, and which scenarios to apply—wet testing versus dry testing” (Fig. 2).
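
A support plan along those lines can be captured in a simple matrix of change types, responsible roles, and test modes. The entries below are hypothetical examples, not the lab’s actual plan.

```python
# Hypothetical sketch of a middleware support plan: for each type of change,
# who tests it, who signs off, and whether dry (simulated) or wet (patient
# specimen) testing applies. Entries are illustrative examples only.

testing_plan = {
    "new instrument or automation line": {"tester": "lab technical specialist",
                                          "sign_off": "medical director",
                                          "modes": ["dry", "wet"]},
    "edit to an existing rule":          {"tester": "lab technical specialist",
                                          "sign_off": "medical director",
                                          "modes": ["dry"]},
    "middleware version upgrade":        {"tester": "IT and lab technical specialist",
                                          "sign_off": "medical director",
                                          "modes": ["dry", "wet"]},
}

for change, plan in testing_plan.items():
    print(f"{change}: tested by {plan['tester']} ({' + '.join(plan['modes'])}), "
          f"sign-off by {plan['sign_off']}")
```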

This well-defined plan was not the starting place for the central clinical lab, given that new software and instrumentation (including automation) were already in the works. Another hiccup occurred with turnover of some key staff who had built and maintained the existing middleware rules.

Still, neither the new instrumentation nor the data innovations were radically different. The lab also had plenty of vendor support, including training for new staff in their roles. In retrospect, that lulled the lab into a false sense of security, Dr. Block said. “We ultimately did not do as much of the full path wet testing with patient samples as we really needed and could have used.”

That experience brought Dr. Block to her second pearl of wisdom. “Once you have that plan, it’s important to engage the plan, sticking to it. At least don’t abandon it,” she said. If the project runs into hurdles—such as staff departures—the plan can make the handoff a little smoother, with less risk.

Finally, she said, lab leaders need to keep asking their team (and themselves) why they do what they do. The answers—or lack of good ones—can keep labs on the right path. There’s always room for labs to evolve and improve, she said. Autoverification, in other words, should not fall into the trap of running on autopilot.

Karen Titus is CAP TODAY contributing editor and co-managing editor.
