
Q&A column, 2/18


Update

It is good practice to set acceptance thresholds for these parameters prior to analysis. An acceptable R² is typically > 0.95, whereas an acceptable y-intercept is typically < 0.2 log10 copies/mL, the commonly accepted variation in HIV viral load from assay to assay.2 Results from the two assays can additionally be visualized using a Bland-Altman plot (for each pair of results, the average is plotted on the x-axis and the difference is plotted on the y-axis). This allows for better visualization of assay agreement at different viral load values. Using the accepted assay-to-assay variation of 0.2 log10 copies/mL, one can determine the accuracy along the reportable range and make adjustments to the reportable range if necessary. Examples of Deming regression and Bland-Altman plots are shown in Fig. 1.3
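As a minimal sketch of the Bland-Altman calculation described above, the following Python snippet computes the pairwise averages and differences for hypothetical paired log10 results from two assays, estimates the bias and 95 percent limits of agreement, and flags pairs that differ by more than 0.2 log10 copies/mL. The paired values are illustrative only, not data from any real comparison.

```python
import numpy as np

# Hypothetical paired log10 viral load results (copies/mL) from the
# reference assay and the candidate assay for the same specimens.
reference = np.array([2.1, 2.8, 3.5, 4.2, 4.9, 5.6, 6.3])
candidate = np.array([2.2, 2.7, 3.6, 4.1, 5.1, 5.5, 6.4])

# Bland-Altman statistics: for each pair, the average (x-axis) is
# compared against the difference (y-axis).
averages = (candidate + reference) / 2.0
differences = candidate - reference

bias = differences.mean()                  # mean difference between assays
sd_diff = differences.std(ddof=1)          # SD of the differences
limits = (bias - 1.96 * sd_diff, bias + 1.96 * sd_diff)  # 95% limits of agreement

# Flag pairs whose difference exceeds the commonly cited 0.2 log10
# copies/mL assay-to-assay variation.
flagged = np.abs(differences) > 0.2

print(f"Mean bias: {bias:.3f} log10 copies/mL")
print(f"95% limits of agreement: {limits[0]:.3f} to {limits[1]:.3f}")
print(f"Pairs exceeding 0.2 log10: {int(flagged.sum())} of {len(differences)}")
```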

Precision should be assessed at different levels throughout the assay range by repeatedly testing at least two pools of quantitated quality control material or patient specimens. Replicates should be tested within the same run as well as across different runs (ideally performed on different days and prepared by different personnel). This allows for the evaluation of intra-assay and interassay precision, respectively. The mean, standard deviation (SD), and coefficient of variation (SD/mean) should be calculated for each specimen pool. Acceptance criteria for these parameters should be determined before validation.
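A brief sketch of these precision calculations, using hypothetical replicate data for a single QC pool. Pooling all replicates for the between-run summary is a simplification; a full treatment would separate within-run and between-run variance components (for example, by ANOVA).

```python
import numpy as np

# Hypothetical replicate results (log10 copies/mL) for one QC pool:
# rows are separate runs (different days/operators), columns are
# within-run replicates.
replicates = np.array([
    [3.02, 3.05, 2.98],   # run 1
    [3.10, 3.07, 3.12],   # run 2
    [2.95, 3.00, 2.97],   # run 3
])

def summarize(values):
    """Return mean, SD, and CV (SD/mean) for a set of replicate results."""
    mean = values.mean()
    sd = values.std(ddof=1)
    return mean, sd, sd / mean

# Intra-assay (within-run) precision: summarize each run separately.
for i, run in enumerate(replicates, start=1):
    mean, sd, cv = summarize(run)
    print(f"Run {i}: mean={mean:.2f}, SD={sd:.3f}, CV={cv:.1%}")

# Interassay (between-run) precision: summarize all replicates together.
mean, sd, cv = summarize(replicates.ravel())
print(f"All runs: mean={mean:.2f}, SD={sd:.3f}, CV={cv:.1%}")
```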

Finally, prior to test implementation the laboratory should define the reportable range of the assay. This entails characterizing the linearity and accuracy of the assay along the proposed reportable range as well as determining the limit of detection. Linearity and accuracy can be assessed using different levels of quantitated quality control material that span the proposed reportable range. This can be analyzed in a fashion similar to the data obtained in the method correlation study (linear least squares regression and Bland-Altman analysis). The information obtained from both of these studies should adequately characterize the assay’s performance along the reportable range and allow for determination of the upper and lower limits of quantitation. The final step in verifying the reportable range is to determine the limit of detection, which is defined as the level at which analyte is detected 95 percent of the time. If a candidate limit of detection is known, this can be determined simply by preparing a pool of analyte with the candidate concentration and running it 20 times to demonstrate a detection rate of ≥ 95 percent. Alternatively, if a candidate LOD is not known, this could be determined empirically by testing different concentrations in replicate or determined mathematically using probit analysis.2
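The following sketch illustrates both approaches on hypothetical dilution-series data: a simplified probit fit (least-squares regression of probit-transformed hit rates on log10 concentration, rather than a full maximum-likelihood probit model) to estimate the concentration detected 95 percent of the time, and the 20-replicate check of a candidate limit of detection.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical dilution series: nominal concentration (copies/mL),
# replicates tested, and replicates detected at each level.
conc     = np.array([10, 20, 30, 40, 60, 80], dtype=float)
tested   = np.array([20, 20, 20, 20, 20, 20])
detected = np.array([ 8, 13, 16, 18, 19, 20])

# Simplified probit fit: regress the probit (inverse-normal) transform of
# the observed hit rate against log10 concentration, then invert the line
# to find the concentration detected 95 percent of the time.
hit_rate = np.clip(detected / tested, 0.01, 0.99)   # avoid infinite probits
x = np.log10(conc)
slope, intercept = np.polyfit(x, norm.ppf(hit_rate), 1)

log10_lod_95 = (norm.ppf(0.95) - intercept) / slope
print(f"Estimated 95% LOD: {10 ** log10_lod_95:.0f} copies/mL")

# Verification of a known candidate LOD: 20 replicates with >= 95 percent
# detected (at least 19 of 20). Here the highest level stands in for the
# candidate concentration.
print("Candidate LOD verified:", detected[-1] / tested[-1] >= 0.95)
```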

The reference interval stated by the manufacturer may be transferred if the stated reference interval is thought to be applicable to the patient population the clinical laboratory serves.1

While the studies outlined above are adequate for verifying FDA-cleared HIV viral load assays, additional studies are needed if a non-FDA-cleared assay or specimen type is being evaluated.1 This entails the evaluation of potentially cross-reacting targets, such as related viruses, and is best assessed by testing specimens positive for these targets. Additionally, the effects of potential assay inhibitors should be examined. This can be accomplished by spiking previously positive specimens with a potential inhibitor (e.g., bilirubin) and examining the effect on viral load.

Quality control. Once a platform has been adequately verified, it is still incumbent on the laboratory to ensure the continuous accuracy of reported results. This demands a detailed quality control plan.

To prevent incremental changes in assay quantitation, quantitative molecular assays require calibration with control material on a semiannual basis. Even if the manufacturer performs the assay calibration for each reagent lot, as seen with some commercially available assays, the laboratory is still required to validate the assay analytical measurement range (AMR) semiannually or whenever there is a significant change that could affect assay performance (e.g., major maintenance). This can be accomplished using a panel of quantitated control material in the appropriate specimen matrix. Acceptance criteria should be set, often requiring that obtained values be within 0.2 log10 copies/mL of expected values.
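A minimal sketch of such an AMR check, assuming hypothetical expected and obtained panel values and the 0.2 log10 copies/mL acceptance criterion mentioned above.

```python
import numpy as np

# Hypothetical AMR verification panel: expected and obtained values
# (log10 copies/mL) spanning the analytical measurement range.
expected = np.array([1.7, 2.5, 3.5, 4.5, 5.5, 6.5, 7.0])
obtained = np.array([1.8, 2.4, 3.6, 4.4, 5.6, 6.7, 7.0])

# Acceptance criterion assumed here: each obtained value must fall within
# 0.2 log10 copies/mL of its expected value.
deviations = np.abs(obtained - expected)
for exp, dev in zip(expected, deviations):
    status = "PASS" if dev <= 0.2 else "FAIL"
    print(f"Expected {exp:.1f} log10: deviation {dev:.2f} -> {status}")

print("AMR verification", "accepted" if np.all(deviations <= 0.2) else "rejected")
```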

Though recalibration and reverification of the AMR help safeguard against assay variation over time, day-to-day accuracy is best monitored using appropriate controls. Internal quality controls are included in all of the currently marketed FDA-cleared HIV viral load assays. The use of internal controls in each patient specimen helps detect the presence of any inhibitors of PCR that could potentially cause false negatives. In addition to internal controls, a minimum of three levels of external quality controls should be run with each batch of tested specimens: a negative, low positive, and high positive control.4 These are typically provided by the assay manufacturers and have ranges within which the values for the low positive and high positive controls must fall in order for results from a run to be accepted. Laboratories should track these values (including means, SDs, and CVs), since even incremental changes within the manufacturer’s acceptance criteria may herald a change in analytical performance. Laboratories may choose to investigate based on certain threshold values, such as a control value greater than 2 SD from the mean. Levey-Jennings charts and Westgard rules are also excellent tools for control tracking. Although certain Westgard rule violations may occur due to readily explainable circumstances (such as a change in control lot material), any unexplained violations should prompt investigation. A laboratory may consider the additional use of control material beyond what the manufacturer provides. This could entail a patient sample pool that is included in each batch of specimens and monitored in a fashion similar to the manufacturer-provided controls. This practice allows for the detection of assay issues that may remain undetected by the manufacturer-provided controls. The overall rate of internal and external control failure should be monitored and reviewed over time.
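As an illustration of this control tracking, the following sketch applies a few common Westgard rules (1-2s, 1-3s, 2-2s) to hypothetical low-positive control values with an assumed established mean and SD; it is not a complete Westgard implementation, and the values and limits are illustrative.

```python
import numpy as np

def westgard_flags(values, mean, sd):
    """Flag common Westgard rule violations for a series of QC results.

    Checks the 1-2s warning, 1-3s rejection, and 2-2s rejection rules
    against the established mean and SD for the control material.
    """
    z = (np.asarray(values) - mean) / sd
    flags = []
    for i, zi in enumerate(z):
        if abs(zi) > 3:
            flags.append((i, "1-3s: result beyond 3 SD (reject run)"))
        elif abs(zi) > 2:
            flags.append((i, "1-2s: result beyond 2 SD (warning)"))
        if i >= 1 and z[i] > 2 and z[i - 1] > 2:
            flags.append((i, "2-2s: two consecutive results beyond +2 SD (reject)"))
        if i >= 1 and z[i] < -2 and z[i - 1] < -2:
            flags.append((i, "2-2s: two consecutive results beyond -2 SD (reject)"))
    return flags

# Hypothetical low-positive control values (log10 copies/mL) with an
# established mean of 2.30 and SD of 0.08 from prior runs.
low_positive = [2.31, 2.28, 2.35, 2.47, 2.49, 2.25, 2.10]
for index, message in westgard_flags(low_positive, mean=2.30, sd=0.08):
    print(f"Run {index + 1}: {message}")
```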

While one purpose of a negative control is to safeguard against contamination, often the only indication of low-level contamination in a high-volume test is a change in positivity rate. As such, it is also important for the laboratory to monitor overall positivity rates. This can be done in a stratified manner, tracking not simply the total number of positives but also the numbers of very low, low, and high positives. This approach can be useful because issues with low-level contamination may present specifically as a rise in low-level positive values.
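A small sketch of stratified positivity monitoring, using hypothetical monthly results and assumed, purely illustrative cutoffs for the very low, low, and high strata.

```python
import numpy as np

# Hypothetical month of results: log10 viral loads for detected specimens
# plus a count of not-detected specimens.
detected_log10 = np.array([1.4, 1.5, 1.6, 1.7, 2.9, 3.4, 4.8, 5.2, 1.5, 1.6])
not_detected = 240

# Assumed stratification: very low (< 1.7), low (1.7-3.0), high (> 3.0).
very_low = np.sum(detected_log10 < 1.7)
low = np.sum((detected_log10 >= 1.7) & (detected_log10 <= 3.0))
high = np.sum(detected_log10 > 3.0)
total = len(detected_log10) + not_detected

print(f"Very low positives: {very_low / total:.1%}")
print(f"Low positives:      {low / total:.1%}")
print(f"High positives:     {high / total:.1%}")
# A rise in the very-low-positive rate over successive months, with stable
# low and high rates, may point to low-level contamination.
```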

Another important safeguard against assay contamination is environmental monitoring. This should be done at a frequency proportional to the testing volume. Multiple surfaces at high risk for contamination should be sampled and tested. Any positivity should prompt intensive decontamination and should prevent the release of patient results if there is a potential they could be affected. In addition, the percentage carryover for automated systems should be determined by testing alternating positive and negative specimens (checkerboard layout) during the initial verification to document that there is no carryover of sample or amplicon during the entire process.
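A minimal sketch of calculating percentage carryover from a hypothetical checkerboard run: negative specimens placed immediately after high positives should remain not detected, and any positivity in those positions contributes to the carryover rate.

```python
# Hypothetical checkerboard carryover study: high-positive and negative
# specimens tested in alternating positions on the instrument.
results = [
    ("high_pos", "detected"), ("negative", "not detected"),
    ("high_pos", "detected"), ("negative", "not detected"),
    ("high_pos", "detected"), ("negative", "not detected"),
]

negatives = [result for spec, result in results if spec == "negative"]
false_positives = sum(1 for result in negatives if result == "detected")
carryover_pct = 100.0 * false_positives / len(negatives)

print(f"Negative positions following a high positive: {len(negatives)}")
print(f"Carryover rate: {carryover_pct:.1f}%")  # should be 0% to accept
```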

Offering HIV viral load testing is an excellent way for laboratories to potentially enhance the care of HIV patients in the hospitals they serve. However, the responsibility of providing this testing should be carefully weighed. If improperly managed, HIV viral load testing has the potential to cause more harm than good. Rigorous verification, quality assurance, and quality control are absolutely essential to ensure patient care benefits are truly realized.

  1. Burd EM. Validation of laboratory-developed molecular assays for infectious diseases. Clin Microbiol Rev. 2010;23(3):550–576.
  2. Wolk DM, Marlow EM. Molecular method verification. In: Persing DH, Tenover FC, Hayden RT, et al., eds. Molecular Microbiology: Diagnostic Principles and Practices, 3rd ed. Washington, DC: ASM Press; 2016:721–744.
  3. Sahoo MK, Varghese V, White E, et al. Evaluation of the Aptima HIV-1 Quant Dx assay using plasma and dried blood spots. J Clin Microbiol. 2016;54(10):2597–2601.
  4. Bankowski MJ. Molecular microbiology test quality assurance and monitoring. In: Persing DH, Tenover FC, Hayden RT, et al., eds. Molecular Microbiology: Diagnostic Principles and Practices, 3rd ed. Washington, DC: ASM Press; 2016:745–753.

Neil Anderson, MD, D(ABMM), Assistant Professor, Pathology and Immunology
Assistant Medical Director, Clinical Microbiology Laboratory, Washington University School of Medicine in St. Louis
Member, CAP Microbiology Resource Committee

Dr. Kiechle is a consultant, clinical pathology, Cooper City, Fla. Use the reader service card to submit your inquiries, or address them to Sherrie Rice, CAP TODAY, 325 Waukegan Road, Northfield, IL 60093; srice@cap.org. Those questions that are of general interest will be answered.
