Tilting at perfect timing for QC

October 2007
Feature Story

Anne Paxton

Having everything in the laboratory running like clockwork: That's a good thing, right? Not so fast. When it comes to quality control, "clockwork" might not be the best model.

That was the theory of Sanford Robbins III, MD, a few years ago when he started looking at optimum strategies for timing routine external QC. He wondered whether a strict schedule of QC run at 8 AM, 4 PM, and midnight was doing the best job of keeping laboratory results accurate.

"At a lot of laboratories, the decisions about when to run QC are usually somewhat arbitrary, and sometimes made for reasons of convenience more than science," says Dr. Robbins, associate chief of pathology at Anne Arundel Medical Center, Annapolis, Md.

Much has been written about QC rules and how to interpret when the instruments are out of control, but little has been done on QC testing frequency and very little on scheduling, he points out. "As we move into continuous-flow mode, which you see in most modern hospitals, the more important point really is when to run QC."

Under the Clinical Laboratory Improvement Amendments of 1988, laboratories must run QC tests at least once during "each day of testing," or more frequently if the instrument manufacturer recommends it, and use the same method they use to test patients. "But if you are running patients at noon and you never run QC at noon, you're really not testing the same way as you test patients," Dr. Robbins says.

So he started looking at the implications of randomly distributing QC tests, to make it likely that over the course of a year you would test all points of the day. The results that he and his coauthor, Curtis A. Parvin, PhD, obtained were published recently in a Clinical Chemistry article, "Evaluation of the Performance of Randomized Versus Fixed Time Schedules for Quality Control Procedures" (Clin Chem. 2007;53[4]:575-580).

For Dr. Robbins, the seeds of the idea were planted years before. "My father was in the canning business, and they had talked about a system for randomly doing QC on cans, so instead of every 10th or 100th can, they would randomly pick a can off the line for testing. Because you may have an error that is not random, and if it's in a pattern that's out of phase with the QC, you may never detect it."

In the laboratory the situation is somewhat analogous, whether QC is moved around over time or moved around in position. "Let's say you have a plate and 40 test wells, and you always put QC in positions one and two. If you have a defective batch of plates where problems are in wells 39 and 40, you're not going to detect that if you never move around on different wells."

A standard, fixed QC schedule could create the same problem. "You want to run QC at the point where it's most likely that the system is out of control or you're most likely to detect an error, so it does make some sense to run QC at the beginning of the shifts, but you should also be running QC at other times when you are testing patients."

His first attempt was a simple computer program that randomly picked a time of day to run QC. But it didn't get very far. "It requires hundreds of thousands of data points to test a theory like this. In our work environment in a community hospital, there's no way to set up a scientifically valid study using real QC data to test random versus fixed scheduling and to see whether one method picks up errors more quickly."

Dr. Robbins decided instead to set up a statistical analysis to test his theory, and for help he turned to Dr. Parvin, a biostatistician and associate professor in the Department of Pathology and Immunology at Washington University School of Medicine, St. Louis, Mo.

Though he is an expert in information systems and statistics, Dr. Parvin has chosen laboratory QC to be the focus of his research. "It's really a very natural fit," Dr. Parvin says, "because laboratory QC is inherently a statistical decision-making problem. You're repeatedly making decisions based on tests of the hypothesis that 'My system is in control.'"

Dr. Robbins initially hoped to show that a randomized schedule would be better—that if you are primarily interested in the interval of time between when an out-of-control error condition might occur and the next scheduled QC event, on average it would be shorter if the schedule were randomized rather than fixed.

But Dr. Parvin told him a fixed schedule would always be optimal. "I worked out the math to show him that, and in the process we gained some interesting insights about characteristics of random scheduling. There were some positive characteristics relative to fixed scheduling. But they weren't due to average waiting time. They were due to the fact that you could, by random scheduling, make the expected wait time from a given time of day to the next scheduled QC event approximately the same throughout the day."

The two of them started with two assumptions: First, the system could go out of control with equal likelihood at any time of day. And second, once out of control, the system would stay out of control until the out-of-control condition was detected by a QC event.

"So basically we picked a random time during a hypothetical 24-hour period when the system goes out of control, and we measured the time interval until the next QC event occurs, to see how long that is," Dr. Robbins says. "Then we just looked statistically at a bunch of different QC scheduling strategies to see if any were better than the others."

Of course, life in the laboratory is not that simple. The assumption that you can go out of control at any time with equal probability does not reflect the realities of laboratory testing, Dr. Robbins notes. "Things occur in non-random ways; it's not strictly a roll of the dice. You might have one shift with personnel who are stronger or more focused on QC than another shift, and human errors may be occurring because of different people running the tests."

In fact, some out-of-control events are cyclical, he points out. "Let's say, for example, that there's an electrical situation that occurs at certain times of day—a power surge or a power test, and it's causing a change in laboratory results but it's not a persisting problem."

Or perhaps it's an instrument with a part that's gradually failing, or that's sensitive to certain cyclical changes in the environment, such as humidity or the temperature rising as the day goes on. "Say the event occurs at noon every day and only lasts for one hour. If you're always running QC at 4 PM, you have zero chance of detecting it."
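
A toy calculation makes that phase-locking point concrete. This sketch is illustrative only and goes beyond the published model, which, as noted next, assumed persistent rather than transient errors; the glitch window and function name are hypothetical:

    import random

    def detected(qc_time, glitch_start=12.0, glitch_len=1.0):
        """True if a QC run at qc_time lands inside the one-hour noon glitch."""
        return glitch_start <= qc_time < glitch_start + glitch_len

    days = 100_000
    fixed_hits = sum(detected(16.0) for _ in range(days))  # QC always at 4 PM
    random_hits = sum(detected(random.uniform(0.0, 24.0)) for _ in range(days))

    print(fixed_hits / days)   # 0.0: a 4 PM run can never land in the noon window
    print(random_hits / days)  # about 1/24, roughly 0.04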

However, it was impractical to take into account all of the various possibilities, so for purposes of their study Drs. Robbins and Parvin assumed that out-of-control events would persist until at least the next QC event.

If you are conducting QC testing three times a day at fixed intervals, and the system goes out of control at a single moment picked at random within a 24-hour period, basic probability theory tells you that, on average, it will be four hours before that problem is picked up at the next QC event, Dr. Robbins explains.
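
To make the arithmetic explicit: with QC every T = 8 hours and an error time uniform over the day, the wait W until the next QC event is uniformly distributed on [0, T], so

    \mathrm{E}[W] = \frac{T}{2} = \frac{8\ \text{hours}}{2} = 4\ \text{hours}.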

"The question is when you are looking at a bunch of different schedules for randomly assigning QC, what happens to that 'time to detect' when you're out of control? Is it better or worse?"

Four different strategies for QC testing schedules were examined:

  • Strategy 1: QC events scheduled at fixed time intervals.
  • Strategy 2: QC events randomly scheduled within fixed time intervals.
  • Strategy 3: QC events scheduled at random intervals.
  • Strategy 4: QC events scheduled at a random interval, followed by a series of n QC events scheduled at fixed intervals.

The average interval between QC events was set at eight hours for all of the evaluated scheduling strategies.
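
For readers who want to experiment, here is a hedged simulation sketch of all four strategies. The specifics are illustrative assumptions of mine, not necessarily the paper's: exponential gaps for strategy 3, uniform random gaps with the correct eight-hour mean for strategies 2 and 4, and n = 2 for strategy 4.

    import bisect
    import random

    HORIZON = 24.0 * 2000      # simulate roughly 2,000 days of QC events
    MEAN_GAP = 8.0             # average interval between QC events, per the study

    def fixed_schedule():
        """Strategy 1: QC every 8 hours on the dot."""
        return [i * MEAN_GAP for i in range(int(HORIZON / MEAN_GAP))]

    def random_in_window():
        """Strategy 2: one QC at a uniform random time in each 8-hour window."""
        return [w * MEAN_GAP + random.uniform(0.0, MEAN_GAP)
                for w in range(int(HORIZON / MEAN_GAP))]

    def random_gaps():
        """Strategy 3: fully random intervals (exponential gaps, mean 8 h)."""
        t, events = 0.0, []
        while t < HORIZON:
            events.append(t)
            t += random.expovariate(1.0 / MEAN_GAP)
        return events

    def random_then_fixed(n=2):
        """Strategy 4: one random gap (uniform, mean 8 h), then n fixed gaps."""
        t, events = 0.0, []
        while t < HORIZON:
            events.append(t)
            t += random.uniform(0.0, 2.0 * MEAN_GAP)
            for _ in range(n):
                if t >= HORIZON:
                    break
                events.append(t)
                t += MEAN_GAP
        return events

    def mean_wait(events, trials=200_000):
        """Average time from a uniformly random error to the next QC event."""
        waits = []
        for _ in range(trials):
            err = random.uniform(0.0, events[-1])
            i = bisect.bisect_right(events, err)
            if i < len(events):
                waits.append(events[i] - err)
        return sum(waits) / len(waits)

    for name, events in [("fixed", fixed_schedule()),
                         ("window", random_in_window()),
                         ("random", random_gaps()),
                         ("hybrid", random_then_fixed())]:
        print(f"{name:>6}: mean wait to next QC = {mean_wait(events):.2f} h")

With these choices, the run prints roughly 4.0 hours for the fixed schedule, about 4.7 for strategy 2, about 8.0 for fully random gaps, and about 4.4 for the hybrid, previewing the analytic results described next: randomizing never beats fixed scheduling on average wait, and the hybrid stays close to optimal.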

Dr. Parvin mathematically derived the expected length of time from the occurrence of an out-of-control error condition to the next scheduled QC event for each of the scheduling strategies. The conclusion was that random intervals did not improve on fixed intervals. "You'll never beat the average time of fixed intervals," he says, "and random scheduling in some instances tends to do way worse than fixed intervals." As Dr. Parvin had predicted, "If the only outcome measure of interest is how long on average between an error condition that can occur at any point in time until the next QC event, the best that you're going to do is your interval divided by two—that is, for an eight-hour interval, four hours."
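
There is a textbook way to see why fixed intervals win on this measure (a standard renewal-theory identity, offered here as my gloss rather than the paper's derivation). If the gaps X between QC events have mean E[X], an error striking at a random moment waits, on average,

    \mathrm{E}[W] \;=\; \frac{\mathrm{E}[X^2]}{2\,\mathrm{E}[X]} \;=\; \frac{\mathrm{E}[X]}{2} + \frac{\mathrm{Var}(X)}{2\,\mathrm{E}[X]} \;\ge\; \frac{\mathrm{E}[X]}{2},

with equality exactly when Var(X) = 0, that is, when every gap is identical. Any randomness in the gap lengths adds a variance penalty to the average detection delay; for exponential gaps the penalty doubles it, since then E[W] = E[X], or eight hours instead of four.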

But if you also want the expected waiting time to the next QC event to be more uniform across the time of day a patient's sample is tested, the study found that randomizing the schedule could actually add value, Dr. Parvin says. "What our paper showed was that you can come up with a strategy for randomly scheduling QC events in which the expected length of time between an error condition and the next scheduled QC event isn't optimal, but is close to optimal." And what's lost, he says, is more than made up for by the waiting time to the next scheduled QC event (or since the last scheduled QC event) becoming fairly independent of the time of day.

The authors concluded that there is a way to meet both of these outcome measures, a strategy that compromises by combining a fixed and random schedule.

From Dr. Parvin's point of view, the research collaboration didn't fit into the standard pattern. "The original thinking came from a pathologist, but the paper evolved when he brought in someone with a statistical background. So it made a really nice story showing how statistical thinking adds value."

It would be a fascinating study to see if different QC schedules detected more errors or detected errors more quickly than others in an actual clinical laboratory setting, Dr. Robbins suggests. "It would be a difficult test to run, because to do it in the fairest way you'd have to run both systems in parallel, so you'd probably be doubling the QC. It would require a tremendous number of data points, because the instrumentation nowadays is so good that going out of control is just not that common."

Dr. Robbins envisions a system that would take decisions on when to run QC out of the hands of the technologist and turn them over to an unbiased, statistically valid program. He owns a patent on some aspects of random QC, and there has been some interest from manufacturers, he reports.

"Obviously you can't run a control on every patient in most testing environments. But in my mind an optimal QC system would, without increasing QC resources, do a better job of monitoring the entire system over time than conventional external QC scheduling that is taking place in most laboratories. This might be a combination of using patient data as surrogate QC, using scientifically valid methods for determining optimal QC frequency, and using a QC scheduling strategy that is unbiased while testing the entire system over time."

Not everyone will want to move to random scheduling, Dr. Parvin acknowledges. But the research collaboration gave him a new perspective on laboratory QC. "The outcome measure I've always focused on is to minimize the risk of reporting bad results, so with scheduling what I was looking for was to minimize the length of time between the out-of-control error condition occurring and the next scheduled QC."

"Our research helped me realize that while that is important and probably the primary thing to be interested in, the one Sandy Robbins raised—trying to ensure that the time of day is somewhat independent of when QC events occur—has inherent value, because it ensures the whole process is being examined."


Anne Paxton is a writer in Seattle.