Critical value repeats: redundancy or necessity?

 

CAP Today
December 2010
Feature Story

Anne Paxton

With patient lives in the balance, constant tradeoffs between ensuring accuracy and producing results quickly are a fact of life in the laboratory. But a new Q-Probes study suggests that many laboratories could safely shave turnaround time where it counts by eliminating at least one common QA practice: repeat testing of critical results before they are reported to clinicians.

Given the sophistication of the chemistry and hematology instruments now in use, repeat testing “may be an unnecessary step that delays the reporting of critical test results” without adding value to their quality or accuracy, according to the authors of the study, titled “Utility of Repeat Testing of Critical Values.” Laboratories “should assess their own instruments’ reproducibility and their institution’s tolerance for degree and frequency of repeat discrepancies to decide whether routine repeat analysis of samples with critical values can be discontinued,” the authors conclude.

“When the practice of reporting critical values began in 1979, laboratories always did the test in duplicate. That was because lab precision wasn’t as good as it is today,” says Peter J. Howanitz, MD, director of clinical laboratories at State University Hospital, Brooklyn, NY, vice chair of the CAP Quality Practices Committee, and one of the study’s authors. “The rationale of repeating critical results is that if this is something that’s life-threatening for the patient, then the lab has to be absolutely sure that result is accurate. We did a lot of tests in the 1970s and 1980s in duplicate, and since then, the practice of repeating has carried over.”

Today, sharply improved laboratory information systems, as well as ultra-sensitive level sensors and clot detectors on instruments, have helped make repeat testing more of a redundancy than a necessity. “Over the last 15 years, we began to stop doing the duplicate testing on some tests such as blood gases in my lab. We don’t repeat tests in chemistry, and we don’t repeat them in hematology—except if it is someone who has leukemia or lymphoma, and it’s the first time we’ve seen them, so we want to be absolutely sure of the result,” Dr. Howanitz says. “At some point, you become so sure of the value, you have to ask: Am I in fact harming the patient by not immediately getting these results back to the clinician?”

In this Q-Probes study, the authors examined whether laboratories repeat the test when they obtain a critical value for potassium, glucose, white blood cell count, or platelet count. The 86 participants prospectively reviewed critical test results until, for each of the four test types, they had identified 40 initial critical results that were subsequently repeated. They also collected data on the number of minutes that repeat testing delayed the reporting of results.

“Any delay in reporting of critical values can be significant in management of any patient,” says study co-author Donald S. Karcher, MD, chair of pathology at George Washington University Medical Center, Washington, DC, and a member of the Quality Practices Committee. “Given that—combined with the fact that analyzers now have very high levels of reproducibility—our hypothesis was if a value comes back as critical, it’s highly unlikely it’s going to change significantly. And the data really turned out to show what we would have predicted.”

Certainly with chemistry analyzers, Dr. Karcher says, “the precision has been very high for a long time. Hematology instruments are very good, but may be a little behind the curve in comparison to chemistry. It’s the nature of the analytes and the technology being used, but the current generation of instruments is very precise.” The reproducibility of results did not vary significantly by analyzer, this study found; therefore, the data collected “should be widely applicable,” the authors wrote in their commentary.

Despite the importance of precision in the clinical laboratory, Dr. Howanitz says, “the real issue is that now our determinations are far more precise than what’s required for good medical care.” Calcium is the one exception to that general rule. “Someone might argue that that’s one test we should repeat because it’s not as precise as it should be. And that’s one reason we didn’t include calcium in this study.” Potassium, on the other hand, was considered important to include because a patient with a very high critical potassium would be put into the ICU and given four or five different therapies to reduce the level.

Study co-author Christopher M. Lehman, MD, director of hospital laboratories at the University of Utah Health Sciences Center, Salt Lake City, and a member of the Quality Practices Committee, got involved in assessing the value of repeat testing when he did a survey in association with ARUP Laboratories in 2006. “We asked their clients how many routinely repeated critical values. And 70 percent to 75 percent said they ‘always’ repeated for hematology, chemistry, and coagulation, and 23 to 29 percent said ‘sometimes.’”

At George Washington University, Dr. Karcher says, “we discontinued the practice of repeating sometime in the mid ’90s.” After Kaiser Foundation Hospital in San Francisco retired its policy of repeating critical results in 2008, the lab improved its efficiency and turnaround time, and a six-month monitoring period showed no reportable patient care issues as a result of the change, Dr. Lehman notes, citing a recent article (Chima HS, et al. Lab Medicine. 2009;40(8):453–457).

But this Q-Probes study showed that 60.8 percent of laboratories still always repeat chemistry critical values, and 52.6 percent always repeat critical values in hematology.

A laboratory’s size should have no effect on whether such a policy is followed, Dr. Karcher says. George Washington University is an urban teaching hospital with a level I trauma center. “Our situation shouldn’t be different,” he says. “There should be good reproducibility as well in smaller labs and smaller hospitals. These are not esoteric analytes but very basic bread and butter assays, with values that would be considered critical in any laboratory.”

In this Q-Probes study, participants who said critical values were “sometimes” repeated gave a range of answers to the question of what would rule out a repeat analysis. For chemistry, 80.6 percent said they would not repeat if a prior critical result had already been reported. For hematology, 86.5 percent said they would not repeat in such a case. Roughly a third said repeats would not be performed if there was “not a delta check result for patient,” and roughly a quarter said they might rule out repeats “due to patient type, clinical service, or ward location.”
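Reduced to logic, those rule-out criteria amount to a short checklist. The sketch below is a hypothetical illustration, not a procedure from the study; the field names, the ward list, and the reading of the delta-check criterion are assumptions of the sketch.

```python
# Hypothetical sketch of the "sometimes repeat" rule-outs described above.
# Field names, the ward list, and the flow are illustrative assumptions;
# the study reports survey percentages, not a prescribed algorithm.
from dataclasses import dataclass

@dataclass
class CriticalResult:
    analyte: str                    # e.g. "potassium"
    value: float
    prior_critical_reported: bool   # a prior critical result was already reported
    delta_check_flagged: bool       # one reading: value is inconsistent with history
    location: str                   # ward or clinical service

EXPECTED_CRITICAL_LOCATIONS = {"ICU", "dialysis"}  # invented examples

def should_repeat(r: CriticalResult) -> bool:
    """Return False when one of the survey's rule-out criteria applies."""
    if r.prior_critical_reported:
        return False  # ~81% (chemistry) / ~87% (hematology) skip the repeat here
    if not r.delta_check_flagged:
        return False  # roughly a third skip when the value fits the patient's history
    if r.location in EXPECTED_CRITICAL_LOCATIONS:
        return False  # roughly a quarter rule out repeats by patient type or ward
    return True

print(should_repeat(CriticalResult("potassium", 6.8, False, True, "ED")))  # True
```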

Many laboratories that reported routinely repeating all chemistry values have instruments programmed to repeat automatically once a critical value is detected. “In 72.3 percent of the laboratories, these are automated instruments,” Dr. Karcher points out. “The analyte is still loaded on the instrument and they can go back and aspirate another aliquot of that specimen and repeat it automatically.” In chemistry, if the initial analysis was performed on an aliquot, 37 percent of labs said they always performed repeat analysis on the primary sample, while in hematology, 52.2 percent said they always did so.

The repeat analyses happen fast, Dr. Karcher notes, and in the study those labs did not report significant delays in obtaining their repeat values. Theoretically, then, “one could argue we should repeat them all and let the instrument tell you what the value is because there’s almost no delay.” With hematology instruments, however, it’s uncommon to have automatic repeats, Dr. Karcher notes. “They often have high-end track systems, but not as many people use autoverification, so somebody’s looking at the numbers. There’s going to be a built-in delay because the operator of the instrument has to notice the value and make a decision to repeat it.”

Most laboratories (91.1 percent) have a backup analyzer for chemistry, and 93.6 percent have one for hematology, the study found, but the majority perform repeat testing on the same analyzer that produced the initial results (79.2 percent in chemistry and 68.6 percent in hematology). Says Dr. Howanitz: “One of the reasons to repeat the test on another analyzer is that with the first, for whatever reason, there may be an error in recalibration or the reagents may be wrong or out of date. But if they aren’t using the backup, it says to me that they are really sure the analyzer is doing the measurement correctly. Because if they thought they had a problem with the analyzer, they would have chosen the other one.”

It may be that in some cases the backup instrument is not convenient, Dr. Karcher conjectures. “If they have a core lab separate from the rest of the lab, as many labs still do, there’s a good chance the backup instrument is not in the core area, so there would be a built-in delay, and they just use the backup instrument only when the primary instrument is down.”

Of the four analytes targeted, this Q-Probes study’s data show higher variation in repeat results for platelet counts, which do have more room for error, Dr. Karcher says. The study’s authors believe that operator error and unrealistically stringent precision expectations for this analyte account for many of the significant differences in the repeat testing results versus the initial results. “But as best we could tell,” Dr. Karcher says, “there were two labs that accounted for most of the outliers and a lot of these really drastic changes were from these two participants, who had counts that were much higher—by a factor of 10 in some cases. And the theory is that they were not mixing their samples properly. They probably did not have their CBC specimens on a rocker and specimens weren’t fully mixed when they put them on the analyzer, and by the time they repeated the test, the platelets were more evenly distributed.”

“Samples should be mixed before they are run,” Dr. Lehman points out, “because blood cells can settle to the bottom of the tube. You could also have platelet clumps in the specimen, and that would decrease the count initially. Their analyzer might give them a clumped platelet flag, and there are some laboratories that will actually vortex the samples to disaggregate them, and will get a higher count the second time,” though he and his University of Utah colleagues do not recommend this.

Laboratories may be expecting unrealistic precision from their hematology analyzers, the Q-Probes authors suggest. “In general, I think you’ll find it’s a really low percent of samples that have discrepant results,” Dr. Lehman says. “The problem with platelet counting is when you get down to really low levels, debris in the sample becomes a significant problem. There are ruptured cells, membranes, and other particulates floating around in the specimen.”

“So the interference is highly variable depending on the patient and quality of the samples. If you’re counting 100,000 platelets, it doesn’t amount to anything. But if you’re counting 10, then it becomes a significant proportion of the background. And the analyzers are only so good at sorting out real platelets from the debris.”
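A back-of-the-envelope calculation makes the point. In the sketch below, the fixed debris figure is invented for illustration:

```python
# Back-of-the-envelope illustration of the debris problem Dr. Lehman
# describes. The fixed debris count of 2 (in the same x 10^3/uL units
# as the platelet count) is invented; real interference varies by
# patient and specimen quality.
def debris_error(true_count: float, debris: float = 2.0) -> float:
    """Fraction of the reported count contributed by miscounted debris."""
    return debris / (true_count + debris)

print(f"{debris_error(100):.1%}")  # ~2% of a count of 100 -- negligible
print(f"{debris_error(10):.1%}")   # ~17% of a count of 10 -- significant
```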

In this study, laboratories reported a variety of practices after the second result is generated. If the second result is not significantly different, a majority of labs (60.5 percent for chemistry and 62.2 percent for hematology) report the first result, while most of the rest report the second. None said they reported an average of the two results.

However, it appears that most labs give more weight to the repeat test. If the repeat value is significantly different but no longer classified as a critical value, a large majority (86.1 percent for chemistry, 84.5 percent for hematology) do not report a critical value to the clinician.

If the repeat value differs significantly from the initial value, a majority of labs (74.3 percent for chemistry, 66.7 percent for hematology) run the test again. If that is not their practice, several labs said they would re-collect the specimen, or “troubleshoot,” while a few said they would run QC and repeat the test on a second specimen, review previous results, or check the sample for fibrin or clots.
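Strung together, those majority practices form a rough decision tree. The sketch below is a hypothetical illustration; the significance criterion is a placeholder, since, as discussed later, most laboratories leave that judgment to the person verifying results.

```python
# Hypothetical sketch of the majority reporting flow after a repeat,
# per the percentages above. The significance test is a placeholder;
# its definition varies by lab (see the 10 percent criterion below).
def differs_significantly(first: float, second: float) -> bool:
    return abs(second - first) > 0.10 * abs(first)  # placeholder criterion

def after_repeat(first: float, second: float, is_critical) -> str:
    if not differs_significantly(first, second):
        # 60.5% (chemistry) / 62.2% (hematology) report the first result;
        # most of the rest report the second; none report an average.
        return f"report {first}; critical call stands"
    if not is_critical(second):
        # 86.1% / 84.5% no longer report a critical value in this case.
        return f"report {second}; no critical call"
    # 74.3% / 66.7% run the test again; others re-collect, troubleshoot,
    # run QC, review previous results, or check the sample for clots.
    return "run the test a third time"

# Illustrative potassium critical threshold of >= 6.0 mmol/L (hypothetical)
print(after_repeat(6.4, 6.3, lambda v: v >= 6.0))  # report 6.4; critical call stands
```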

“Our recommendation is that people look at their repeat values and see what the incidence is of clear discrepancies, then investigate what the reasons were,” Dr. Lehman says. “Then you have to decide: Can those be fixed, and can they find other process improvements to address them? Were there cues they were missing that would have eliminated the discrepancies? Then the medical director would have to decide what’s the clinical significance of the difference. Would a clinician have acted on the result, or would it have been obvious something was wrong?”

If laboratories are running the test twice, they’re usually reporting one number or the other, Dr. Lehman says, “since it’s easier on the LIS rather than reporting an average, which is probably better as an estimate. Most, if they get a difference they consider significant, are running the test a third time. Then they’re probably picking the two results that are closest together.”

For about 14 percent to 15 percent of the study’s participants, critical values would still be reported to clinicians even if the result of the repeat test was not critical. Is that what should happen? In Dr. Karcher’s view, it’s hard to make a generalized statement. “It’s not necessarily supported by the data, but if I were in that situation I might still report a potential critical value to the clinician, but indicate that on repeat testing the value fell outside the critical range.” Dr. Karcher compares the situation to delta checks: “Our policy there is to communicate directly with clinicians as soon as possible, and to see if the significantly different value we found matches their clinical impression, or if there’s a possibility of mislabeling or inappropriate collection.”

If a test has a very stringent critical value, then it becomes less important to quickly get that result back to the clinician, Dr. Howanitz points out. “If you set a critical value of 6.0 for potassium and do it on a patient and get 6.1, then 5.9, what’s the significance there? You can say 6.0 is critical. And it certainly can be life-threatening, but it’s not as critical as a value of 7 or 8 or 9.”

Laboratories did report some delayed results, which can have a potentially significant impact on clinical practice. For 25 percent of laboratories, the median reporting delay was at least 10 to 14 minutes, and for 10 percent of laboratories, the median delay was at least 17 to 21 minutes. In the past year, 20 percent of laboratories said there had been incidents in which repeat testing resulted in a reporting delay that the hospital’s clinicians believed adversely affected patient care.

Overall, the study authors believe the delay that repeat testing causes is relatively minor, Dr. Karcher says. “However, in 25 percent of labs it’s not so minor. It may be approaching a quarter of an hour, and in 10 percent of labs the median was approximating 20 minutes. That can become pretty significant when you have a patient in the ICU who is having electrolyte issues and a critically low or high potassium, and you’re delaying that information by close to half an hour.”

The study did not ask labs what degree of difference between test results they would call “significant,” and laboratories tend not to have a written policy defining a significant difference. Only 22.1 percent of chemistry sections and 26.7 percent of hematology sections reported a written policy.

More than 90 percent of laboratories without a written policy say they leave the interpretation of “significant difference” to the person verifying results. “For the minority of laboratories that defined a significantly different repeat result, a difference of 10 percent was the most common criterion used for both hematology and chemistry measurements,” the authors wrote.
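As a minimal sketch of that most common criterion (the zero-value guard is an assumption of the sketch, not something the study specifies):

```python
# Minimal sketch of the most common written criterion the study found:
# a repeat differs "significantly" when it is more than 10 percent away
# from the initial result. The zero-value guard is an assumption of this
# sketch, not from the study.
def significantly_different(initial: float, repeat: float,
                            threshold: float = 0.10) -> bool:
    if initial == 0:
        return repeat != 0
    return abs(repeat - initial) / abs(initial) > threshold

print(significantly_different(5.0, 5.4))  # False: an 8% difference
print(significantly_different(5.0, 5.6))  # True: a 12% difference
```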

There doesn’t appear to be much standardization in terms of what constitutes a significant difference, Dr. Lehman says. “It’s not defined in a procedure manual in most labs; most of them left it to the technologists, who would be deciding based on their judgment of how the analyzers work and what the critical values of the test are. But if you’re not giving direction, then you’re going to get variations in practice. So then, if the technologists are making different decisions about what’s significant, the question is, What are you really providing clinicians?”

If this study is repeated, the authors will ask next time what the participants consider a significant change to be, Dr. Howanitz says. “But I think laboratorians would really be shocked if we asked clinicians what they think. Because the same changes we think are statistically significant are really clinically insignificant.”

Clinicians may not know about their laboratory’s policy on repeat testing. Says Dr. Karcher, “I don’t know if many clinicians, frankly, are aware that a lab is repeating critical values, unless the laboratory director has chosen to specifically say, ‘Oh, by the way, whenever you get a critical value it’s already been repeated and therefore you can rely on it.’”

It’s probably rare that the delay from repeating a test that produces a critical result causes harm to patients, Dr. Lehman says. “But our emergency department has been quizzing us on this issue and actually would like us to not repeat routinely, because it slows down their decision-making.”

Caution is advised in interpreting the data from this Q-Probes study, Dr. Lehman says. “On any of these studies where we asked questions, we don’t know who answered them. The person filling out the survey probably didn’t sit down with the whole lab and say ‘What are we doing?’ My impression is it’s the opinion of that person filling out the survey. So whether or not it’s a completely accurate picture of everything that’s happening in the laboratory, we don’t know.”

Nevertheless, the study has a clear take-home message, Dr. Karcher says: Laboratories probably don’t have to repeat critical values. “You’re potentially delaying significantly the reporting of results, wasting resources, and not adding value to the proposition.” As a secondary issue, he advises labs to take a critical look at platelet counts. “Make sure you’re using proper technique and that you assess significant change based on a realistic reproducibility of the platelet count because it does not have the same precision as most chemistry analytes.”

Perhaps not every laboratory will want to end its practice of repeat testing, says Dr. Lehman. “It’s up to each institution to decide its tolerance for variance between repeat results and to assess the clinical impact. It’s a discussion to have with your clinicians, whether this is a significant issue for them.” In the case of Dr. Lehman’s laboratory, “It has certainly improved our turnaround times in hematology. And we’re on the verge of implementing it for chemistry, where we’re not doing it globally; we’re doing it based on the more common tests that we have critical values on.”

It might take more than one study like this Q-Probes to sway a lot of laboratorians, Dr. Howanitz says, but it’s important that laboratories begin thinking about doing tests a single time and relying on that result. “We have such wonderful instruments these days, we are really very, very sure of their accuracy. If labs chose not to do the test in duplicate, it would improve patient care—because time is of the essence.”


Anne Paxton is a writer in Seattle.