
Q&A column, 11/17

Editor: Frederick L. Kiechle, MD, PhD

Submit your pathology-related question for reply by appropriate medical consultants. CAP TODAY will make every effort to answer all relevant questions. However, those questions that are not of general interest may not receive a reply. For your question to be considered, you must include your name and address; this information will be omitted if your question is published in CAP TODAY.


We devote the full column this month to a question about verification of the analytical measurement range (linearity) as part of test method verification.

Q. A laboratory owns chemistry analyzers from company X. Company X recommends that its customers use company X’s calibration material to perform their linearity studies, starting with the highest concentration and using the chemistry analyzer’s autodilution feature to provide a total of four measurable concentrations and a final zero point. Does this protocol fulfill CAP checklist requirements?

A. Limiting this discussion to FDA-approved methods, the primary requirement is to verify the reportable range of the method to ensure the manufacturer’s claims can be met by the installed equipment. This is addressed in CLIA by the standard 493.1253(b)(1)(i)(C). The corresponding CAP requirements that need to be met to fulfill verification of the reportable range are in the discipline-specific checklists. For chemistry they are CHM.13600, “AMR Verification,” and CHM.13710, “Diluted or Concentrated Samples.”
The reportable range includes the primary measurement range for an unmodified sample plus any sample modification that can be performed to allow a valid measurement to be made, typically dilution or concentration. Dilution of high samples, with correction of the measured concentration for the dilution factor, is the most common means of extending the reportable range. The procedure is so routine that manufacturers often give specific dilution criteria, such as diluent and sample-to-diluent ratios, for their methods, and they often build automated dilution protocols into their analyzers to implement these validated protocols for the laboratory’s convenience. Verification of a method’s reportable range therefore requires evidence of a linear response across analyte concentrations in the primary measurement range and documentation of appropriate dilution protocols that may be used to extend this range for samples with elevated concentrations.
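As a simple illustration of the dilution arithmetic (a minimal sketch in Python; the function name and the 1-in-10 ratio are hypothetical and not taken from any manufacturer’s protocol), the reported result is just the concentration measured on the diluted sample multiplied by the dilution factor:

```python
def corrected_result(measured_diluted, sample_volume, diluent_volume):
    """Correct a concentration measured on a diluted sample for the dilution factor.

    measured_diluted : concentration measured on the diluted sample
    sample_volume    : volume of patient sample used in the dilution
    diluent_volume   : volume of diluent added
    """
    dilution_factor = (sample_volume + diluent_volume) / sample_volume
    return measured_diluted * dilution_factor

# Example: a 1-in-10 dilution (1 part sample plus 9 parts diluent) that measures
# 150 U/L is reported as 1,500 U/L after correction.
print(corrected_result(150.0, sample_volume=1.0, diluent_volume=9.0))  # 1500.0
```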
Neither the CLIA regulation nor the CAP checklist sets explicit requirements for verifying method linearity, but the requirement for periodic calibration verification can be taken as the minimum: two points near the extremes of the linear range plus an intermediate point. Method verification at implementation, however, should be more rigorous while still meeting this minimum, which in practice means using more intermediate points between the limits of the method’s primary measurement range than the single point required for calibration verification. Again, the actual number of intermediate points is not specified in the CLIA regulation or the CAP checklist.
Verification studies require both a target value and acceptability criteria for each sample. The target values for the samples used to verify the primary measurement range depend on how those samples are obtained. If they come as part of a kit (linearity or calibration), the target values are supplied with the samples. For studies using in-house samples, either fixed-ratio mixtures of the high and low samples or dilutions of the high sample may be used; the target values are calculated from the sample-to-sample or sample-to-diluent ratios that were used. However, if dilution of the high sample is the method chosen, this raises another concern for verification of the reportable range: establishing that the dilution protocol for the method is valid in the laboratory itself. This is part of verifying the reportable range, since the dilution protocol is what allows measurement of high samples beyond the limit of the primary measurement range.
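One way to organize these target value calculations is sketched below (Python; the pool values and mixing ratios are hypothetical). Target values for fixed-ratio mixtures of a high and a low pool, or for dilutions of the high pool with an analyte-free diluent, follow directly from the volume ratios used:

```python
def mixture_target(high_value, low_value, parts_high, parts_low):
    """Target value for a fixed-ratio mixture of a high and a low pool."""
    return (high_value * parts_high + low_value * parts_low) / (parts_high + parts_low)

def dilution_target(high_value, parts_sample, parts_diluent):
    """Target value when the high pool is diluted with an analyte-free diluent."""
    return high_value * parts_sample / (parts_sample + parts_diluent)

# Hypothetical pools assayed at 900 and 10 U/L, mixed to give five evenly spaced levels.
high, low = 900.0, 10.0
for parts_high in range(0, 5):
    print(mixture_target(high, low, parts_high, 4 - parts_high))  # 10, 232.5, 455, 677.5, 900

# A 1-in-4 dilution of the high pool alone would target 225 U/L.
print(dilution_target(high, parts_sample=1, parts_diluent=3))
```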
Although it is common and good practice to verify the manufacturer’s dilution protocol (including autodilution), neither the CAP checklist requirement nor the CLIA standard for reportable range mandates experimental verification of that protocol. There are reasons the manufacturer’s autodilution protocol might not work in practice, such as inaccurate small-volume pipetting when autodilution is accomplished by reducing the sample volume, or poor sample mixing with either onboard automated physical dilution or reduced-sample-volume autodilution. Experimental verification of the autodilution process would document that the instrument performs as the manufacturer described, but for CAP-accredited laboratories the checklist requirement can be read to allow the laboratory director to accept the manufacturer’s validation as sufficient evidence that the autodilution process will properly dilute a sample.
However, not only is this less than ideal practice, but the laboratory may be found not to be in compliance with the CLIA reportable range requirement by state department of health inspectors performing CLIA validation or complaint inspections. Even though the regulation itself does not discuss dilution protocols, the Centers for Medicare and Medicaid Services’ State Operations Manual, appendix C, provides guidance to state inspectors on interpreting the regulations. Under the designation D5421, the guidance states: “Verification of reportable range may be accomplished by: . . . [e]valuating known samples of abnormal high and abnormal low values”; the associated inspector’s probe states: “How does the laboratory verify and document the accuracy of the results for diluted specimens?” The inspector may interpret this to require experimental proof that diluted specimens, including autodiluted ones, produce accurate results in the test system. In that case, the laboratory director’s approval of the manufacturer’s dilution protocol as “validated” may not be considered in compliance with the CLIA reportable range standard by a state department of health CLIA inspector applying this guidance and probe.
The laboratory cannot use the dilution protocol to make targeted intermediate samples for verification of the primary measurement range and then, from the linear response of those samples, assume that the protocol itself has been verified. This is a circular argument that proves only internal consistency, not that the instrument is generating valid and accurate results. Fully verifying the reportable range requires obtaining appropriate samples to demonstrate the linearity of the instrument’s primary measurement range and to show that the dilution protocol provides appropriate results from high samples, as the manufacturer claims. If the laboratory director decides simply to accept the manufacturer’s validation of the stated dilution protocol as “verification,” this would be acceptable under the current CAP checklist requirement but may well be cited as not meeting the CLIA standard if reviewed by a state inspector using the CLIA guidance and probe.
A separate consideration is where in the primary measurement range these intermediate sample values should fall. Again, neither the CAP checklist nor CLIA specifies this for either method verification or calibration verification. Although selecting at least one sample with a value near a decision point, such as the upper limit of the reference range, may appear to be a good approach, there are two issues with it. The first is strictly a matter of convenience: Finding a sample with a known concentration near a specific point in the linear range, or making such a sample (by dilution or admixture), requires considerable manipulation and effort, and verifying multiple tests requires separate calculations and manipulations for each method based on its specific primary measurement range and the location of the relevant decision point within that range. Second, consideration must be given to where in the range the decision point falls and to the criteria used to verify the linearity of the method. Least-squares statistics (such as slope and correlation coefficient) are commonly used. When the decision point is close to one end of the primary measurement range (as with aspartate aminotransferase, where the decision point is around 40 U/L and the upper end of the range is often above 1,000 U/L), the statistical tool may hide nonlinearity around the decision point because of its closeness to the lowest-level sample in the study (presumably close to zero). If the laboratory director wants to verify performance at decision points, fixed criteria (such as ± five percent or two units) would be more effective than a statistical tool that merges all sample points together. If least-squares statistics are used, evenly spaced samples cover the range more effectively than trying to position a sample at one end of the range, near one of the samples used to assess the extremes of the linear range.
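The sketch below (Python; all values are hypothetical and only loosely modeled on an AST-like analyte) illustrates the difference. An overall least-squares fit can return an excellent slope and correlation coefficient even when a fixed per-point criterion of ± five percent or two units flags the sample near the 40 U/L decision point:

```python
# Hypothetical verification data: assigned target values vs. measured results (U/L).
targets  = [0.0, 40.0, 250.0, 500.0, 1000.0]
measured = [1.0, 36.0, 252.0, 498.0, 1005.0]

# Overall least-squares fit: slope, intercept, and correlation coefficient.
n = len(targets)
mx = sum(targets) / n
my = sum(measured) / n
sxx = sum((x - mx) ** 2 for x in targets)
syy = sum((y - my) ** 2 for y in measured)
sxy = sum((x - mx) * (y - my) for x, y in zip(targets, measured))
slope = sxy / sxx
intercept = my - slope * mx
r = sxy / (sxx * syy) ** 0.5
print(f"slope={slope:.3f}  intercept={intercept:.1f}  r={r:.5f}")

# Fixed per-point criterion: within +/- 5 percent or 2 units of target, whichever is greater.
for x, y in zip(targets, measured):
    allowance = max(0.05 * x, 2.0)
    status = "PASS" if abs(y - x) <= allowance else "FAIL"
    print(f"target {x:7.1f}: measured {y:7.1f}  deviation {y - x:+6.1f}  {status}")
```

In this contrived data set the regression statistics look excellent (slope about 1.005, r about 0.99998), yet the sample near the 40 U/L decision point recovers at 36 U/L and fails the fixed criterion, which is exactly the situation a pooled statistical summary can obscure.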

William J. Castellani, MD
Adjunct Professor, Department of Biochemistry and Molecular Pathology
Penn State Hershey College of Medicine, Hershey, Pa.
CAP Interregional Commissioner and Member, CAP Council on Accreditation


Dr. Kiechle is a consultant, clinical pathology, Cooper City, Fla. Use the reader service card to submit your inquiries, or address them to Sherrie Rice, CAP TODAY, 325 Waukegan Road, Northfield, IL 60093; srice@cap.org. Those questions that are of general interest will be answered.
