
Delta checks as safety net: how used, how useful?


Anne Ford

September 2015—There was a time when Michael L. Talbert, MD, didn’t spend much time thinking about delta checks in his laboratory.

“I would periodically look at them, but I didn’t put a whole lot of thought into ranges or into which analytes were most efficient or effective,” says Dr. Talbert, who is chair of pathology at the University of Oklahoma Health Sciences Center and chief of service and medical director of pathology and laboratory services at OU Medical System, Oklahoma City.

Dr. Talbert

That’s changed, thanks to the Q-Probes study “Use and Effectiveness of Delta Checks,” the first Q-Probes study to examine laboratory practices around delta checks and to provide related benchmarks and recommendations. Dr. Talbert, a member of the CAP Quality Practices Committee, is one of the authors of the study.

“There have been some studies on delta checks,” says coauthor Ron B. Schifman, MD, “but not with this large a group, and they focused mainly on how good delta checks are at detecting testing problems.” Dr. Schifman is chief of diagnostics at the Southern Arizona VA Healthcare System and associate professor in the Department of Pathology at the College of Medicine, University of Arizona, Tucson.

“There hadn’t been any work done on trying to benchmark which analytes are used for delta checks, what the parameters are, what laboratories are doing with regard to how they investigate delta checks, what the outcome was,” Dr. Schifman, vice chair of the Quality Practices Committee, points out. “The goal of this Q-Probes study was to help laboratories look at their delta check practices and use this information for making adjustments—for example, adding or removing analytes, or looking at the maximum number of days used in delta check calculations.”

What the study’s authors found, in a nutshell: First, some analytes are more effective than others at identifying problems when a delta check is triggered. Second, laboratories’ practices surrounding delta checks vary widely. Says Dr. Talbert: “Delta checks are performed on an unexpectedly wide variety of analytes, triggered by a wide range of values.”

Why do these findings matter? Because, the study’s authors say, delta checks are often one of the mainstays of the autoverification process—and because the efficiency and effectiveness with which they’re performed directly affect a laboratory’s workload.

“Most delta checks,” Dr. Schifman says, “require extra work in the laboratory to investigate. That could be as simple as just checking whether the patient had a transfusion or some other clinical reason to have a physiologic change, or it could go all the way to pulling out the previous specimen, retesting it, and checking for mislabeling. You could spend quite a bit of time reexamining specimens. So you want to try to adjust delta check parameters to reduce excess work, but not so much as to affect their effectiveness for identifying testing problems.”

The study examined 6,541 delta checks from 49 laboratories. “This applies primarily to high-volume tests that are done frequently in inpatient settings,” Dr. Schifman notes. “Most delta check calculations, as we suspected, had a maximum time set of five to seven days. You’re probably not going to encounter many outpatients who are tested that frequently.”

Laboratories were asked to collect consecutive delta check alerts for up to 60 days or until 100 events were collected, whichever came first. About 70 percent of all testing episodes in which there was a delta check triggered only one delta check.

The most frequent analyte used for delta checks was mean corpuscular volume, which was used at 45 of the 49 participating laboratories. Also commonly used for delta checks: sodium, hemoglobin, total calcium, potassium, and creatinine. The median laboratory reported using 15 different analytes for delta checks.

As for their method of calculating delta checks, laboratories were nearly evenly split between absolute (52 percent) and percentage (48 percent) differences. That said, laboratories reported using absolute differences more often than percentage for certain analytes, such as MCV, hemoglobin, potassium, and chloride, and percentage more often for platelets and enzymes.
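To make the two calculation methods concrete, here is a minimal sketch in Python of how a laboratory system might apply delta check rules against a patient’s prior result. The analyte names, thresholds, and seven-day lookback window are illustrative assumptions for the sketch only; they are not parameters reported by the study.

from datetime import datetime, timedelta

# Illustrative delta check rules: analyte -> (mode, threshold, max lookback days).
# These values are assumptions for the sketch, not figures from the Q-Probes study.
RULES = {
    "MCV": ("absolute", 5.0, 7),        # femtoliters
    "potassium": ("absolute", 1.0, 7),  # mmol/L
    "platelets": ("percent", 50.0, 7),  # percent change
}

def delta_check(analyte, current_value, current_time, history):
    """Return True if the current result should trigger a delta check alert.

    history is a list of (timestamp, value) tuples for the same patient and
    analyte, most recent last.
    """
    if analyte not in RULES or not history:
        return False
    mode, threshold, max_days = RULES[analyte]
    prev_time, prev_value = history[-1]
    # Ignore prior results older than the maximum lookback window.
    if current_time - prev_time > timedelta(days=max_days):
        return False
    if mode == "absolute":
        delta = abs(current_value - prev_value)
    else:  # percentage difference relative to the previous result
        if prev_value == 0:
            return False
        delta = abs(current_value - prev_value) / abs(prev_value) * 100.0
    return delta > threshold

# Example: a potassium change of 1.5 mmol/L within two days exceeds the
# (assumed) absolute threshold of 1.0 mmol/L and triggers an alert.
history = [(datetime(2015, 9, 1, 8, 0), 4.0)]
print(delta_check("potassium", 5.5, datetime(2015, 9, 3, 8, 0), history))  # True

Whether a given analyte is better served by an absolute or a percentage rule, and how long the lookback window should be, are exactly the kinds of parameters the study suggests laboratory directors revisit.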

“The median laboratory had three delta checks triggered per 1,000 tests,” Dr. Schifman reports. “This ranged from eight per 1,000 tests for laboratories at the 90th percentile to only 0.3 at the 10th percentile. Ideally, the best system would produce the fewest number of delta checks per number of tests that need to be investigated, and the largest number of identified problems.”

What happens once a delta check is triggered? Slightly more than 75 percent of the testing episodes that involved delta checks triggered one or more evaluations, usually entailing clinical review or repeat testing. Again, this varied by analyte. “There were some analytes, like uric acid—although it’s not a frequently used delta check analyte—for which delta checks were investigated only about 58 percent of the time,” Dr. Schifman says. “This suggests that these analytes are probably not useful.”

Then, too, laboratories varied in how aggressively they investigated delta checks in general. “For example, there was no action taken in response to a delta check 36 percent or more of the time in facilities ranked above the 75th percentile,” Dr. Schifman says. “In 10 percent of the laboratories, delta checks were not investigated in almost 80 percent or more of cases. Ideally, if you have a delta check, you should always do something.”

Of the delta checks that were investigated, what percent identified a testing problem such as a mislabeled specimen or contamination by fluids? Again, the study reveals variation among analytes. Sodium, potassium, and magnesium had fairly high rates of problem detection (11.9 percent, 14 percent, and 10.4 percent respectively), while creatinine was associated with a much lower rate of 2.5 percent. Some of the hematology parameters, such as hemoglobin and platelet count, came in around the middle, with 4.2 percent and 6.5 percent respectively. “I think for the first time, we have evidence to show which analytes are relatively better or relatively worse at detecting problems,” Dr. Schifman says.

Among the 240 delta check testing problems identified, the most common were interference due to hemolysis, lipemia, or icterus, with IV fluid contamination somewhat less common. Analytical error and other processing or handling errors represented very few of the testing problems. “This gives you an idea of how useful delta checks are for picking up these different types of problems,” Dr. Schifman says. “For example, they’re not as good for picking up mislabeling,” which represented only five percent of testing problems and 0.3 percent of all testing episodes.

Dr. Talbert says: “The notable thing for me is, I thought of delta checks as a way of picking up misidentified specimens, but even with the large number of specimens in the study, there were only 11 instances of misidentification.”

In 94 percent of the testing episodes involving delta checks, there was no change in test results and no suspected problem. Results were not reported as a consequence of delta check(s) in just over half of cases with reported testing problems. Specimens were re-collected in about 48 percent of actions involving suspected problems, and a delta check persisted in 69.4 percent of the re-collected specimens.

One finding Dr. Schifman would like to stress: “When you have a delta check, a problem could involve either the current sample or the previous one. Nevertheless, we found that previous samples were investigated much less often compared with the current sample. This was a striking difference,” with current specimens checked 36 times more often than previous specimens even though 23.8 percent of testing problems involved the previous specimen.

“That was an unexpected finding,” he adds. “We’re not sure why that happened. We speculated that there was some concern about the stability of the previous sample. But I’m not sure that’s the reason because most of these delta checks are done on patients who get frequently tested, and while we don’t have the time differential between the previous and current specimen, I think in most cases, especially for inpatients, it would be less than 48 hours, which shouldn’t affect stability of the specimen. This is speculation, but it’s possible that one of the reasons previous specimens weren’t evaluated is that it’s just a lot more trouble to go back and pull out a specimen. It could just be the inconvenience of going back to check. That’s an opportunity for improvement.”

As for recommendations stemming from the study findings, Dr. Schifman would like to see all laboratory directors closely involved with, and responsible for, approval of delta check rules and their implementation. Eighty-two percent of the institutions that participated in the study follow this practice.
