
In lab QC, how much room for improvement?


Anne Paxton

October 2014—The debut of the CMS’ new quality control option, IQCP, has sharpened the focus on QC in the laboratory and raised hopes that risk management concepts can make QC more robust. But one of the most highly regarded quality control experts in the U.S. voices skepticism about the impact of IQCP—and indeed, about U.S. quality control standards in general.

As a voluntary, customizable QC option under CLIA, IQCP, or Individualized Quality Control Plan, is expected to give labs greater flexibility in achieving QC compliance. However, the CLIA QC standards, unchanged since 2003, will remain the same—and that’s a problem, says James Westgard, PhD, who spoke about QC weaknesses at the 2013 Lab Quality Confab presented by The Dark Report. He believes that CLIA’s sluggish evolution on QC has nurtured a nationwide attention deficit on the subject of meaningful quality management.

An interview with James Westgard, PhD

From his standpoint, the state of the practice in QC falls far short of perfect, and even well short of sufficient. “My opinion is that CLIA has kind of frozen quality management practices to the early 1990s,” Dr. Westgard says.

A co-founder of Westgard QC, in Madison, Wis., Dr. Westgard is author of several books on laboratory quality management, including Six Sigma Risk Analysis: Designing Analytic QC Plans for the Medical Laboratory (2011). He has more than 40 years of experience in laboratory quality management.

Dr. Westgard, an emeritus professor at the University of Wisconsin in Madison, was the first chairman of the Evaluation Protocols Area Committee in CLSI, the Clinical and Laboratory Standards Institute (then known as NCCLS). Trained as an analytical chemist, he says, “I got started in the laboratory about the time automation was just making big inroads, in 1968.” He spent considerable time dealing with methods validation protocols and the best statistical analyses to use. For him, the big question has always been: How do you decide whether a method of testing is actually acceptable or not?

Clinical laboratory professionals used to depend on their own analytic skills to answer that question. “It was really with the introduction of automation in the late ’60s and early ’70s that QC got a strong push,” he says. “That’s because you may have a lot of confidence in your individual skills for performing a test, but once you have a machine doing the test, how do you know it’s right?”

The standard practice of setting control limits at the mean, plus or minus two standard deviations, worked fine for QC—until multi-test instruments came on the scene. “Everyone knows that about one out of 20 results is expected to be outside two SD limits; that’s the false rejection rate inherent with these two-standard-deviation limits. But that’s based on one test. Once you start doing six, 12, or 20 tests with a multi-test system, you have a multiplier effect on that false rejection rate,” Dr. Westgard says.
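The arithmetic behind that multiplier effect is straightforward. The short sketch below, which assumes the tests flag independently and uses the roughly five percent per-test false rejection rate cited above, shows how quickly the chance of at least one false flag grows with the number of tests in a run.

# Chance of at least one false rejection per run when every test uses
# mean +/- 2 SD control limits. Assumes the tests flag independently,
# each with the ~5 percent false rejection rate cited above.
p_single = 0.05

for n_tests in (1, 6, 12, 20):
    p_any = 1 - (1 - p_single) ** n_tests
    print(f"{n_tests:2d} tests: {p_any:.0%} chance of at least one false flag")

# Prints roughly 5%, 26%, 46%, and 64%, which is why a run of a dozen or
# more tests ends up near the 50 percent figure Dr. Westgard describes below.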

In those days, with the simultaneous-batch-type analyzer, you couldn’t just pick one test to run, he explains. “You used up the capacity of the system every time you had to do a repeat test. So we soon got to the point where, because of the number of tests being run, you had about a 50 percent chance with each run that at least one test was out of control.” That problem stimulated his work on QC and led to the development of a multi-rule QC procedure, commonly known as the Westgard Rules, that became a standard of practice in laboratories in the 1980s.
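For readers who want a concrete picture of how a multirule procedure works, here is a minimal sketch of two of the commonly cited checks, the 1-3s and 2-2s rules, applied to a handful of control values. It is an illustration only, not the full published rule set, and the mean, SD, and control results are hypothetical.

def z_scores(values, mean, sd):
    # Convert raw control results to SD units relative to the target mean.
    return [(v - mean) / sd for v in values]

def rule_1_3s(z):
    # 1-3s: reject the run if any single control exceeds 3 SD from the mean.
    return any(abs(x) > 3 for x in z)

def rule_2_2s(z):
    # 2-2s: reject if two consecutive controls exceed 2 SD on the same side.
    return any((z[i] > 2 and z[i + 1] > 2) or (z[i] < -2 and z[i + 1] < -2)
               for i in range(len(z) - 1))

# Hypothetical control results for a test with target mean 100.0 and SD 2.5.
controls = [101.0, 103.5, 96.8, 105.2, 105.9]
z = z_scores(controls, mean=100.0, sd=2.5)

print("1-3s violated:", rule_1_3s(z))  # False: no single value is beyond 3 SD
print("2-2s violated:", rule_2_2s(z))  # True: the last two values both exceed +2 SD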

The Clinical Laboratory Improvement Amendments of 1988 were supposed to extend the practices of QC to all laboratories in 1992, when CLIA ’88 regulations took effect. “The regulations themselves described what your QC procedures are supposed to be able to do: monitor precision and accuracy of the system and detect medically important errors.” The promise was that “manufacturers would make claims for their precision and accuracy and what a lab should do as far as QC, and if the FDA approved that claim, then all the lab had to do was follow the manufacturer’s directions.”

But that part of CLIA—“QC clearance”—was never implemented. “First there was resistance by the manufacturers. Then later on, the Food and Drug Administration decided they had enough on their hands, and they didn’t want to deal with it either.” Every two years, from 1992 on, there would be a new final rule putting off the effective date of the FDA clearance of QC—until 2003, Dr. Westgard says. “Then they declared they didn’t need the FDA clearance of QC anymore because the analytic systems had gotten so much better.”

At that time, many manufacturers were arguing that the new test systems and point-of-care devices had built-in controls and labs didn’t need to be running external controls. Under CLIA’s final rule in 2003, the CMS compromised by establishing the EQC, or Equivalent QC procedures. “That allowed labs, instead of doing two levels of controls per day, to go to two levels per week or even two levels per month, if they provided certain validation data. One protocol stipulated that the lab should run controls for a 10-day period. If everything was okay, then you were qualified to reduce QC to two levels a month. So if you were stable for 10 days, then you wouldn’t have to test until 30 days went by. Obviously, that’s not how stability testing should work.”

The problem with the whole approach was that the validation protocols that the CMS prescribed were not scientifically valid, Dr. Westgard says. But in spite of that argument having been made up front with CMS, “I think they were just stuck with having to adopt EQC to accommodate the POC test systems that were being widely used.”

In the absence of FDA clearance of QC, the CMS’ default minimum—two levels of control per day—became the standard practice in labs. It was the least amount of QC that needed to be done. “There’s no scientific basis for that practice, but because CLIA said so, the minimum became a maximum over time. That’s the nature of regulation.” As a result, most labs fall back on what the regulations require, he says. “You are only required to run controls. The regulation doesn’t require that you run the right QC.”

Despite this low bar, many laboratories are very good, he notes. “But as you increase the workload in labs, and people have less and less time to think about what they’re doing and they’re just trying to keep up with the workload, then there are certain things that fall by the wayside. And that, unfortunately, is what happens with quality practices.”

IQCP was devised, in part, to resolve the EQC problem by offering a risk assessment approach. “In theory, it’s a good approach,” Dr. Westgard says. “It’s just that in practice, laboratories have never done formal risk analysis, and formal risk analysis is not a trivial undertaking.” Laboratories face a steep learning curve in understanding and correctly applying quality practice plans based on risk, in his view.

It’s true that people make judgments of risk all the time. “You look to see what the weather is and decide whether you need an umbrella or not. That’s risk assessment. But that’s not the same thing as looking at the potential for harm in a lab test result and figuring out, ‘Can I risk this or not?’” Even though CLSI has a guideline called EP23A, it is qualitative, it is subjective, and most people don’t understand it, he says.

Unfortunately, no one actually knows how confident we can be in laboratory test results, he says. “We all hope for the best.”

QC should always begin with one question, Dr. Westgard emphasizes: How good does this test need to be? “Then we look at how good we are. And if the test is much better than it needs to be, then it doesn’t take much QC. If it’s only ‘close to’ as good as needed, then you have to monitor the test much more carefully.”

He compares the process to budgeting. “When you set a financial budget, you know that if you spend too much in one place, you run over your budget. The analogy for lab tests is an ‘error budget.’ We know we have certain sources of error like precision and inaccuracy. How much of the budget gets spent by different error sources, and is it possible that we will overrun the budget if a problem occurs?”

“Well, we have information on that, in the form of what quality is required for the test. If you define that in your budget, then you can measure the errors for your methods in the laboratory to be sure they fit within the budget. That should be an ongoing part of quality management: keeping track of how big these errors are and how they relate to the amount of error that is allowable.”
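One widely used way to express such an error budget, drawn from the Six Sigma approach Dr. Westgard has written about, is the sigma metric, which counts how many analytical standard deviations fit between observed performance and the allowable total error. The sketch below illustrates the calculation; the function name and all of the figures are hypothetical, not drawn from any specific assay.

def sigma_metric(allowable_total_error_pct, bias_pct, cv_pct):
    # Sigma metric: how many analytical SDs fit inside the allowable total
    # error once observed bias is subtracted; all inputs in percent.
    return (allowable_total_error_pct - abs(bias_pct)) / cv_pct

# Hypothetical assay: 10% allowable total error, 1.5% bias, 2.0% CV.
sigma = sigma_metric(allowable_total_error_pct=10.0, bias_pct=1.5, cv_pct=2.0)
print(f"Sigma metric: {sigma:.2f}")  # 4.25: usable, but needs careful QC monitoring

The higher the sigma, the more room a method has before an error becomes medically important—which is another way of answering the question of how good the test needs to be.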

Dr. Westgard concurs that laboratory test quality has improved because of the technology of the diagnostics industry. But the quality demands for any one test change when the use of the test changes. He cites HbA1c as a good example. In the past 10 or 15 years, it has gone from being a test used to monitor diabetes to becoming the basis for diagnosing diabetes and monitoring treatment.
