
Addressing the shortcomings of ANA testing by IFA


“It’s almost as though you can’t have lupus by classification criteria if you don’t have a positive ANA of at least 1:80.”

The 2019 criteria, Dr. Wener said, give ANA positivity “a central role—not just a participating role—in lupus classification. In turn, that puts more burden on laboratories to know their ANA method’s 95th percentile reference range cutoff and encourages laboratories to convey that information to clinicians.”

With 1:80 now the entry criterion for lupus, the question becomes: What is a 1:80?

Recommendations published in 2014 said “a proper [screening] ANA by IFA is dependent on reagents, equipment, and other local factors; thus, the screening dilution should be defined locally” (Agmon-Levin N, et al. Ann Rheum Dis. 2014;73[1]:17–23). And “an abnormal ANA should be the titer above the 95th percentile of a healthy control population. In general, a screening dilution of 1:160 on conventional HEp-2 substrates is often suitable for ANA detection,” Dr. Wener said, citing the 2014 ANA assessment guidelines.
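As an illustration of how a laboratory might derive that method-specific 95th percentile cutoff, the sketch below (in Python, with invented healthy-donor titers rather than data from the article or the guidelines) computes the percentile over a local cohort and rounds it up to the nearest serial-dilution step:

    import numpy as np

    # Hypothetical reciprocal endpoint titers from a local healthy-donor cohort
    # (0 = no reactivity at the lowest dilution tested). Each laboratory must
    # collect its own cohort; these values are illustrative only.
    healthy = np.array([0] * 120 + [40] * 40 + [80] * 25 + [160] * 10 + [320] * 5)

    # 95th percentile of the healthy distribution.
    p95 = np.percentile(healthy, 95)

    # Round up to the next serial-dilution step so the cutoff is a titer the
    # assay can actually report (1:40, 1:80, 1:160, ...).
    dilution_steps = [40, 80, 160, 320, 640, 1280]
    cutoff = next(d for d in dilution_steps if d >= p95)

    print(f"95th percentile: {p95:.0f} -> screening cutoff 1:{cutoff}")

With this invented cohort the cutoff lands at 1:160, the screening dilution the 2014 guidelines call often suitable; a different local population or method could just as easily land at 1:80.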

That recommendation is in conflict with the newly established entry criteria for lupus diagnosis, he said. “On the one hand, we’re saying labs should pick 1:160 for the screening titer. Oh, but by the way, 1:80 is the entry criteria for lupus. Clearly, as a profession we need to clarify what we mean by a positive ANA.”

The ACR and EULAR proposed the 1:80 ratio based on a systematic literature review and meta-regression of diagnostic data on the performance of ANA for classifying lupus, Dr. Wener said (Leuchten N, et al. Arthritis Care Res. 2018;70[3]:428–438). “But the implicit assumption with this analysis is that all IFA assays give the same result.”

Given the evidence, he said, “I just don’t think that’s likely to be true. So there’s a heightened need to standardize if we’re going to be supporting the clinical groups and epidemiologic groups that are using this titer.”

The good news, he said, is that a number of approaches are underway to improve consistency of ANA reporting. For example, the CAP Diagnostic Immunology and Flow Cytometry Committee, to which he is AACC liaison, is “looking into [this] in a more formal way.”

Organizations and industry would need to coordinate efforts, he said, adding, “I would think organizations like ACR, EULAR, AACC, and CAP might work with the FDA to do this.” He noted a couple of examples—INR for prothrombin time normalization, efforts to standardize tests like cholesterol—and said, “I think it’s time for us to think about how to do this for ANA.”

Laboratory directors and staff can improve consistency of reporting at individual labs by “knowing the ANA population prevalence using your lab’s method,” he said. Laboratories should also report the ANA method used for screening—in fact, in 2019 the CAP added a new Laboratory Accreditation Program checklist requirement that says laboratories should include on the ANA report a description of the method used for ANA screening (if the method is not explicit in the test name) (IMM.39700).

Automation is another path to improved consistency among laboratories, Dr. Wener said. “Automated instruments set thresholds for positivity based on fluorescence intensity. Essentially, a single point calibration above or below the cutoff is what’s considered positive. This is coordinated with fluorescence light intensity.”

But individual labs can do nearly the same thing by having an endpoint calibrator or single point calibration above or below a threshold, he said. “The current positive and negative controls are rarely used at this threshold level. But labs can develop or purchase endpoint calibrators that would serve this role.”
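A minimal sketch of how such single-point (endpoint) calibration might work in software, assuming each sample yields a fluorescence light intensity reading; the calibrator value, ratio logic, and review band below are hypothetical, not any vendor’s actual algorithm:

    # Measured intensity of a hypothetical endpoint calibrator run alongside
    # patient samples; readings above it are called positive, below it negative.
    CALIBRATOR_FLI = 48.0

    def interpret(sample_fli: float, review_band: float = 0.10) -> str:
        """Call a sample relative to the calibrator cutoff.

        Samples within a narrow band around the cutoff are flagged for
        technologist review, where single-point calibration is least reliable.
        """
        ratio = sample_fli / CALIBRATOR_FLI
        if ratio >= 1.0 + review_band:
            return "positive"
        if ratio <= 1.0 - review_band:
            return "negative"
        return "borderline - refer for review"

    for fli in (12.5, 47.0, 95.3):
        print(f"FLI {fli}: {interpret(fli)}")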

The pivotal question is whether advanced automation can be used to address ANA by IFA testing’s shortcomings, said Melissa Snyder, PhD, co-director of Mayo Clinic’s antibody immunology laboratory, who presented during the same AACC session on whether automation can bring ANA testing “out of the dark room and into the modern laboratory.”

A 2014 study by Bizzaro, et al., comparing six automated platforms (Aklides, EuroPattern, Nova View, Helios, Zenit G-Sight, and Image Navigator) found about 90 percent agreement on positivity (Autoimmun Rev. 2014;13[3]:292–298). The authors sent 144 ANA sera to six laboratories for manual ANA IFA testing, identified a consensus result for each sample (excluding 17 positive and six negative samples for which no consensus could be reached), and then repeated testing on the six automated platforms. There was more variability among the systems on the negative samples, with agreement ranging from 79 to 94.1 percent.

“So good consensus on the positive agreement, a little less on the negative agreement,” Dr. Snyder said.

Bizzaro, et al., also compared estimated titer to manual titer and automated pattern interpretation to manual interpretation, Dr. Snyder said. Titer agreement (among five platforms only), which the authors measured using Spearman’s rho, ranged from 0.627 to 0.839. Pattern agreement (four platforms were compared) showed greater variability, ranging from 50 to 80 percent.
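For a sense of how such a titer comparison is computed, here is a short sketch using SciPy’s spearmanr; the manual/estimated titer pairs are invented for illustration and are not the study’s data:

    from scipy.stats import spearmanr

    # Paired endpoint titers: manual serial dilution vs. platform estimate
    # (hypothetical values).
    manual    = [80, 160, 320, 80, 640, 160, 1280, 320]
    estimated = [80, 320, 320, 160, 640, 80, 640, 320]

    rho, p_value = spearmanr(manual, estimated)
    print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")

Because Spearman’s rho is rank based, a platform that runs one dilution step high or low across the board can still correlate well, which is one reason titer “agreement” and titer identity are not the same thing.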

The authors also examined the use of a light intensity unit as the positive/negative cutoff, and how varying that cutoff affected sensitivity and specificity compared with the consensus result. There was “fairly decent standardization there as well,” Dr. Snyder said.
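A brief sketch of that exercise: vary a light intensity cutoff and recompute sensitivity and specificity against consensus calls. The intensities and consensus labels below are invented for illustration:

    # Hypothetical light intensity readings and consensus calls (1 = positive).
    intensities = [5, 12, 20, 35, 48, 52, 60, 75, 90, 110]
    consensus   = [0,  0,  0,  0,  1,  0,  1,  1,  1,   1]

    for cutoff in (30, 50, 70):
        calls = [1 if x >= cutoff else 0 for x in intensities]
        tp = sum(c == 1 and k == 1 for c, k in zip(calls, consensus))
        tn = sum(c == 0 and k == 0 for c, k in zip(calls, consensus))
        fp = sum(c == 1 and k == 0 for c, k in zip(calls, consensus))
        fn = sum(c == 0 and k == 1 for c, k in zip(calls, consensus))
        print(f"cutoff {cutoff}: sensitivity {tp / (tp + fn):.2f}, "
              f"specificity {tn / (tn + fp):.2f}")

Raising the cutoff trades sensitivity for specificity, which is why where a platform sets its light intensity threshold matters for cross-platform consistency.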

“What I take away from this is that these systems can give us a bit of help with standardization on positive/negative agreement,” Dr. Snyder said. Pattern and titer agreement, on the other hand, “is something we could still work on.”

Dr. Snyder’s laboratory uses an advanced automation platform to perform ANA testing. The system automates not only slide and sample processing but also slide interpretation, pattern identification, and titer estimation, “based on the fluorescence intensity read from the digital image” rather than serial dilution.

While “we certainly have reduced our technologists’ time in terms of reading, it’s critical to note that you still need technologist expertise with these systems,” she said. Mayo Clinic technologists review results from the automated readers, focusing on positive/negative interpretation and pattern. “They might agree with what the computer calls, or they might disagree, at which point they would make a change.”

To assess the performance of the automated system, her laboratory collected data on 1,559 ANA samples submitted for IFA testing and compared the automated slide reader’s interpretations with the results that were eventually released to the clinical record.

The laboratory’s technologists and the automated slide reader agreed on almost 100 percent of the negative samples. (The slide reader identified 909 samples as negative; technologists didn’t identify any of those samples as positive, though they repeated testing on two.) However, 26 percent of the samples the automated reader called positive were ultimately released as negative.

“The cutoff for positive/negative on the computer may be set a little on the low side,” Dr. Snyder speculated. Overall, positive/negative agreement between the manual and automated interpretation was 86.6 percent.
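As an aside on how an overall agreement figure like that is computed, a minimal sketch follows; the counts are hypothetical, chosen only to resemble the scale of the Mayo dataset, and do not reproduce the reported 86.6 percent:

    def percent_agreement(pairs):
        """pairs: iterable of (automated_call, final_call) tuples."""
        pairs = list(pairs)
        agree = sum(1 for auto, final in pairs if auto == final)
        return 100.0 * agree / len(pairs)

    # Hypothetical confusion-matrix counts expanded into call pairs.
    pairs = ([("neg", "neg")] * 900      # reader negative, released negative
             + [("pos", "pos")] * 480    # reader positive, released positive
             + [("pos", "neg")] * 170    # reader positive, overturned
             + [("neg", "pos")] * 2)     # reader negative, overturned

    print(f"{percent_agreement(pairs):.1f}% agreement")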

Pattern agreement was 45 percent. “This is very much in line with what was studied by Bizzaro’s group,” Dr. Snyder said. In the majority of cases in which technologists disagreed with the automated slide reader on pattern, the sample was ultimately determined to be negative.

Overall, automated systems can lead to improved qualitative agreement, she said, while improvements in pattern and titer agreement “have not yet been realized.”

“We are seeing a little bit more objectivity in our interpretation, particularly in our positive/negative agreement.” But the expertise of the technologists is still a critical component to performing ANA by IFA testing, she said, even with automation of slide reading.

Charna Albert is CAP TODAY associate contributing editor.
