
Paths to validating, using urine sediment analyzers

Dr. Skelton

“One big issue was that a patient would get a positive screening dipstick [at] point-of-care or in the community,” he says. Then when the person’s urine specimen arrived at the central lab, staff would get a negative dipstick result and therefore skip the microscopic analysis. Some of those patients were subsequently diagnosed with bladder cancer.

“So we did a whole validation and switched over to the Arkray [Aution Hybrid] AU-4050,” which integrates Arkray’s dipstick reader and the Sysmex UF-1000i flow cytometer, Dr. Skelton says.

Like the others with whom CAP TODAY talked who have validated their sediment analyzer using manual microscopy as the reference method, Dr. Skelton has plenty of pointers that can help keep the correlation process and clinical care on track.

How the Lahey laboratory correlated the Arkray’s urine microscopy results with its manual ones depended on the test. Dr. Skelton notes that the new flow cytometry analyzers report quantitative numbers for five parameters: white blood cells, red blood cells, bacteria, squamous epithelial cells, and hyaline casts. “For these, we converted the [Arkray’s] quantitative data to semiquantitative ranges that matched our existing semiquantitative reporting scheme.”

At first, there was a disconcerting mismatch between the automated and manual bacteria counts. The concordance was “way off,” Dr. Skelton says, “because we simply converted bacteria per microliter [which the AU-4050 reported] to bacteria per high-power field using a factor of 0.18.” While that approach works for the other quantifiable elements, it does not for bacteria. “The flow cytometer has a dye in it that picks up all kinds of bacteria that you don’t see under the microscope,” Dr. Skelton says. Also, when looking at urine under the microscope, bacteria are seen in only one plane unless you focus up and down. “For those reasons, we ended up with many-fold more bacteria on the absolute count on the flow cytometry than under the microscope.”

The laboratory “resolved this dilemma by using Arkray’s recommended conversion of the quantitative bacteria count to semiquantitative ‘negative, rare, few, moderate, many’ categories. Once we did that, the concordance between flow cytometry and manual microscopy was very consistent from specimen to specimen,” Dr. Skelton says.
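
Below is a minimal sketch, in Python, of the two conversions described above: applying the per-high-power-field factor of 0.18 that the article cites for the other formed elements, and mapping the quantitative bacteria count directly into semiquantitative categories instead. The bin boundaries are hypothetical placeholders, not Arkray's or Lahey's actual ranges.

```python
# Sketch: converting flow cytometer counts (per microliter) to a semiquantitative
# reporting scheme. The 0.18 per-HPF conversion factor is from the article; the
# bacteria bin boundaries below are hypothetical examples.

PER_HPF_FACTOR = 0.18  # cells/uL -> cells/high-power field (from the article)

# Hypothetical semiquantitative bins for bacteria, per uL (not the vendor's table)
BACTERIA_BINS = [
    (50, "negative"),
    (250, "rare"),
    (1000, "few"),
    (5000, "moderate"),
    (float("inf"), "many"),
]

def cells_per_hpf(count_per_ul: float) -> float:
    """Convert a per-microliter count to an approximate per-HPF count."""
    return count_per_ul * PER_HPF_FACTOR

def bacteria_semiquant(count_per_ul: float) -> str:
    """Map a quantitative bacteria count directly to a semiquantitative category,
    rather than forcing it through the per-HPF factor."""
    for upper, label in BACTERIA_BINS:
        if count_per_ul < upper:
            return label
    return "many"

if __name__ == "__main__":
    print(cells_per_hpf(100))        # e.g. 100 RBC/uL -> ~18 per HPF
    print(bacteria_semiquant(3200))  # -> "moderate" with these placeholder bins
```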

Even so, the quality of the urine collection still limited the bacteria count’s clinical value, he says. To tackle that issue, Dr. Skelton says he may in the future use flow cytometry’s “powerful quantitation of squamous epithelial cells, WBCs, and bacterial counts to flag improperly collected specimens.” This would then electronically trigger an immediate request for a new specimen to the provider responsible for instructing the patient in proper urine collection.
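
A specimen-quality rule of the kind Dr. Skelton envisions might look like the sketch below: flag a likely contaminated collection from the quantitative squamous epithelial cell, WBC, and bacteria counts. The function name and every threshold here are hypothetical placeholders a lab would have to set for its own population.

```python
# Sketch: flag a possibly improperly collected specimen from quantitative counts.
# All thresholds are hypothetical placeholders, not validated cutoffs.

def flag_poor_collection(squamous_per_ul: float,
                         wbc_per_ul: float,
                         bacteria_per_ul: float,
                         squamous_cutoff: float = 30.0,
                         contamination_ratio: float = 5.0) -> bool:
    """Return True if the pattern suggests contamination rather than infection:
    many squamous cells plus bacteria out of proportion to the WBC response."""
    if squamous_per_ul < squamous_cutoff:
        return False
    return bacteria_per_ul > contamination_ratio * max(wbc_per_ul, 1.0)

# A True result could trigger an electronic request for a recollection.
print(flag_poor_collection(squamous_per_ul=80, wbc_per_ul=5, bacteria_per_ul=400))
```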

The red blood cells also correlated poorly. “We did a concordance table using the semiquantitative approach and obtained 53 percent agreement including a few samples with significant discrepancy,” Dr. Skelton says. “I did a medical record review and suspect that the flow cytometer matched the clinical picture better than the microscopic. So now we are relying on the numeric value from the flow cytometer.”
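
For readers who want to reproduce that kind of concordance check, here is a minimal sketch of computing overall percent agreement, and counting large discrepancies, from a table of semiquantitative categories. The counts are made-up illustration data, not the Lahey results.

```python
# Sketch: percent agreement and major discrepancies from a concordance table
# of semiquantitative categories (automated vs. manual). Counts are hypothetical.

CATEGORIES = ["negative", "rare", "few", "moderate", "many"]

# (automated category, manual category) -> number of specimens (hypothetical)
table = {
    ("negative", "negative"): 40,
    ("rare", "negative"): 6,
    ("rare", "rare"): 18,
    ("few", "rare"): 9,
    ("few", "few"): 12,
    ("moderate", "few"): 5,
    ("moderate", "moderate"): 7,
    ("many", "many"): 3,
    ("many", "negative"): 1,   # an example of a significant discrepancy
}

total = sum(table.values())
agree = sum(n for (auto, man), n in table.items() if auto == man)
major = sum(n for (auto, man), n in table.items()
            if abs(CATEGORIES.index(auto) - CATEGORIES.index(man)) >= 2)

print(f"Overall agreement: {agree / total:.0%} of {total} specimens")
print(f"Discrepant by two or more categories: {major}")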

Dr. Skelton attributes some of the discordance to “tech-to-tech variation,” which the laboratory addressed with education.

For pathological casts, small round cells, yeast, and crystals, the Arkray analyzer generates a value used to flag a sample that needs manual microscopic review. This is “analogous,” says Dr. Skelton, “to a hematology analyzer ‘blast’ or ‘variant lymphocyte’ flag.”

Although the instrument has a flag for sperm, the lab doesn’t report sperm “because of the history of misdiagnoses in the past using manual microscopy. We just shut off the sperm flag and never evaluated it,” he says.

“The cutoff values to trigger the review flags are user definable,” Dr. Skelton says. And, he stresses, “It is absolutely essential to get these cutoff values right for a lab’s specific population so one doesn’t end up with way too many flags or miss clinically important findings.” In fact, he adds, “Getting these flags optimized using nonparametric statistical tools is probably the single most important factor [for] successful implementation in terms of return on investment and clinical effectiveness.”

To optimize the flag cutoffs, Dr. Skelton categorized samples as true positive if they had clinically significant manual microscopic findings, or as true negative if there were no such findings. He says this allowed him to use the Wilcoxon signed-rank test to demonstrate a statistically significantly higher value in the quantitative measure from those samples that had microscopic findings.
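
A rough sketch of that comparison is below. Dr. Skelton names the Wilcoxon signed-rank test; for two independent groups of specimens (findings present versus absent), the rank-sum (Mann-Whitney U) form shown here is the usual variant, and the values are entirely made up.

```python
# Sketch: nonparametric comparison of the analyzer's quantitative values between
# specimens with and without clinically significant microscopic findings.
# Uses the Mann-Whitney U (rank-sum) form for independent groups; data are made up.

from scipy.stats import mannwhitneyu

true_positive = [180, 95, 240, 310, 150, 88, 400]   # analyzer value, findings present
true_negative = [12, 30, 8, 45, 22, 5, 60, 18]      # analyzer value, no findings

stat, p = mannwhitneyu(true_positive, true_negative, alternative="greater")
print(f"U = {stat:.1f}, one-sided p = {p:.4f}")
```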

Dr. Skelton says he next performed receiver operating characteristic curve analysis to determine the sensitivity and specificity at all possible cutoffs to set an optimal value for the user-defined review flag. He also reviewed the medical records of patients whose specimens were close to the optimal cutoff. The goal there was to “weight the relative clinical and workflow impact of false-negative and false-positive review flags.”
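
The sketch below shows one common way to run that kind of ROC analysis with scikit-learn and pick a candidate cutoff by Youden's index; the labels and analyzer values are illustrative, not Lahey data, and the final cutoff would still be weighted by the chart review Dr. Skelton describes.

```python
# Sketch: choosing a user-defined review-flag cutoff by ROC analysis.
# Labels (1 = significant microscopic finding) and values are illustrative.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
values = np.array([5, 12, 8, 30, 22, 45, 60, 110, 88, 95, 150, 180, 240, 310])

fpr, tpr, thresholds = roc_curve(y_true, values)
youden_j = tpr - fpr                      # sensitivity + specificity - 1
best = np.argmax(youden_j)

print(f"AUC = {roc_auc_score(y_true, values):.2f}")
print(f"Candidate cutoff = {thresholds[best]:.0f} "
      f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")
# In practice, cases near this value also get medical record review to weight
# the clinical and workflow impact of false-negative vs. false-positive flags.
```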

Dr. Skelton set the crystal flag sensitivity to avoid missing clinically important crystals, which he notes actually constitute “a small subset of all the crystals.”

“The small round cells correlated very well [to manual microscopy]. These include renal epithelial cells and oval fat bodies. The [latter] indicate degenerating cells which can be clinically important,” Dr. Skelton says.

“By using clinically important findings on microscopy as the gold standard and then optimizing the number at which the instrument will flag you to go look for those things under the microscope, we obtained an autoverification rate of 80 percent of our urine samples,” Dr. Skelton sums up.

Dr. Skelton’s laboratory does use the AU-4050’s cross-check function that compares the instrument’s dipstick results to the sediment analysis. “It is helpful,” he says. “We initially had them set too high and had too many cross-check flags [until] we figured out which ones really mattered and backed off on it.”

One example of what he views as important is a cross-check that flags for trichomoniasis. The cross-check rule fires, he says, if the sample is negative for leukocyte esterase but has greater than 10 WBCs per high-power field. “The leukocyte esterase could be negative because the urine has lymphocytes instead of granulocytes but it could also be Trichomonas. You look under the microscope and can see the Trichomonas swimming in the urine with movement of the flagella.”
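
Expressed as code, that cross-check is a simple rule; the sketch below follows the logic Dr. Skelton describes, though the function name and input format are hypothetical.

```python
# Sketch of the leukocyte esterase / WBC cross-check: fire a review flag when
# the dipstick leukocyte esterase is negative but the sediment shows more than
# 10 WBCs per high-power field, a pattern that can mean lymphocytes (no
# esterase activity) or Trichomonas.

def trichomonas_crosscheck(leuk_esterase: str, wbc_per_hpf: float) -> bool:
    """Return True if the discordant dipstick/sediment pattern warrants
    a manual microscopic review."""
    return leuk_esterase.lower() == "negative" and wbc_per_hpf > 10

print(trichomonas_crosscheck("negative", 25))  # True -> send for manual review
print(trichomonas_crosscheck("2+", 25))        # False -> no cross-check flag
```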

To perform urine sediment analysis, the renal laboratory at Mayo Clinic in Rochester, Minn., uses Beckman Coulter’s Iris iQ200, which also includes an automated dipstick reader. And, as the result of a three-way correlation of that instrument, the Sysmex UF-1000i, and manual microscopy, the lab uses the Sysmex for bacteria quantification, says nephrologist John C. Lieske, MD, professor of medicine and medical director of the Mayo renal testing laboratory.

The correlation study showed that “the Sysmex UF-1000i was excellent for quantifying bacteria, and for predicting a positive culture, it was quite good,” Dr. Lieske says. He and colleagues also conducted a study that established bacteria cutoffs for a positive urine culture. In screening urines, the laboratory now uses the cutoffs to determine whether to do a culture. According to an abstract of the article reporting the study, the Sysmex UF-1000i “could reduce unnecessary reflex urine cultures by 55 percent” in the population studied (Giesen CD, et al. Clin Biochem. 2013; 46[9]:810–813).
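
In practice, a screening cutoff like that drives a simple reflex decision, as in the sketch below; the numeric cutoff is a placeholder, not the population-specific value the Mayo study established.

```python
# Sketch: reflex a screening urine to culture when the bacteria count exceeds
# a validated cutoff. The value below is a hypothetical placeholder.

REFLEX_CUTOFF_PER_UL = 100.0  # placeholder, not the published cutoff

def reflex_to_culture(bacteria_per_ul: float) -> bool:
    """Return True if the specimen should be reflexed to urine culture."""
    return bacteria_per_ul >= REFLEX_CUTOFF_PER_UL

print(reflex_to_culture(40))    # below cutoff -> no culture
print(reflex_to_culture(850))   # above cutoff -> reflex to culture
```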

The laboratory uses the Iris iQ200 to “basically screen out fairly normal urines,” which Dr. Lieske says represent about two-thirds of the samples, given the patient mix. The laboratory first tests the urine specimens for protein on a Roche chemistry analyzer. Those containing abnormal amounts of total protein “get shunted to the manual microscopy,” Dr. Lieske says, as they are “quite likely to have pathological elements such as dysmorphic red cells or casts.”

The rest of the samples go to the iQ200. The instrument “takes pictures of urine as it goes by and flags” what it finds, Dr. Lieske says. “It saves the pictures, and the technician takes a look and verifies” whether he or she sees anything pathologic. They manually review anything except hyaline casts or common urinary crystals. “The technician can also reclassify elements if needed.”
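
The routing Dr. Lieske describes could be summarized as in the sketch below: protein-abnormal specimens go straight to manual microscopy, everything else goes to the imager, and any flagged element other than hyaline casts or common crystals triggers a technologist review of the stored images. The element names and rules here are illustrative, not Mayo's actual configuration.

```python
# Sketch of the routing logic: chemistry protein screen first, then automated
# imaging with technologist review of flagged elements. Illustrative only.

AUTO_RELEASE_ELEMENTS = {"hyaline cast", "calcium oxalate crystal", "amorphous"}

def route_specimen(total_protein_abnormal: bool, flagged_elements: set) -> str:
    if total_protein_abnormal:
        return "manual microscopy"          # likely dysmorphic RBCs or casts
    if flagged_elements - AUTO_RELEASE_ELEMENTS:
        return "technologist image review"  # verify or reclassify on screen
    return "autoverify"

print(route_specimen(False, {"hyaline cast"}))              # autoverify
print(route_specimen(False, {"hyaline cast", "RBC cast"}))  # technologist image review
print(route_specimen(True, set()))                          # manual microscopy
```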

“Hyaline casts are not considered pathologic in and of themselves,” he says. “But you have to differentiate those from ones that, say, have red cells in them. If they did, then they would be red cell casts,” which are seen in people with glomerulonephritis. As another example, “You can have fatty casts with nephrotic syndrome.”

Virginia Mason Medical Center in Seattle emphasizes looking for dysmorphic red cells in urine, says Marshall Rafferty, MT, CLT(HHS), urinalysis lead for the clinical laboratory. And the AU-4050, which the laboratory validated in August 2014, does a good job flagging them, he says. Rafferty thinks that a significant number of cases of chronic microhematuria in people roughly under age 40 or 50 are actually dysmorphic red cells often coming from the kidney. The dysmorphic red cells, he says, are sometimes found with red cell casts. He finds that “much of the time, the patients have a mild IgA nephropathy” or similar condition—and unnecessary repeat cystoscopies.

The Arkray’s RBCs, WBCs, and bacteria count were consistent and accurate when compared with manual microscopy, Rafferty reports. However, “It’s hopelessly inconsistent for hyaline casts,” though he’s never really worried about those, he says. And judging from physicians’ notes, he thinks physicians don’t pay attention to them. “In fact, I doubt we are even going to report those, although we make a high count to be one of our flags to review microscopically, just to see what’s going on.”

Rafferty predicts that once the lab begins using the Arkray analyzer, which he expects to be this spring, the instrument will eliminate about 30 percent, and possibly more, of the manual microscopies. He calls the 30 percent figure “conservative.”

Evan Sylvester, MPH, MT(ASCP), clinical supervisor for microbiology at Virginia Mason Medical Center, says that as part of the validation, they compared the bacterial cells per microliter on the AU-4050 to positive urine cultures to find a threshold indicating “a true pathogen or true UTI.” As a result, they predict the laboratory can probably do 37.5 percent fewer reflex urine cultures.
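
As a rough illustration of that kind of analysis, the sketch below checks candidate bacteria-per-microliter cutoffs against culture results and estimates the fraction of reflex cultures each cutoff would avoid. All of the numbers are made up, not Virginia Mason's data.

```python
# Sketch: evaluating candidate bacteria cutoffs against urine culture outcomes
# and estimating avoided reflex cultures. All values are illustrative.

import numpy as np

bacteria_per_ul = np.array([5, 40, 12, 300, 900, 60, 2500, 8, 150, 4200, 30, 700])
culture_positive = np.array([0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1], dtype=bool)

for cutoff in (50, 100, 250):
    predicted_pos = bacteria_per_ul >= cutoff
    sens = (predicted_pos & culture_positive).sum() / culture_positive.sum()
    spec = (~predicted_pos & ~culture_positive).sum() / (~culture_positive).sum()
    avoided = (~predicted_pos).mean()  # fraction of specimens not reflexed to culture
    print(f"cutoff {cutoff:>4}/uL: sensitivity {sens:.0%}, "
          f"specificity {spec:.0%}, cultures avoided {avoided:.0%}")
```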

“Hopefully the nice thing about this validation we have done,” Sylvester says, “is that we will reduce the number of cultures and also reduce the possibility of misleading the clinician into interpreting nonactionable results” as actionable. “You don’t want the doctor to act on something that truly isn’t the cause of the problem. It’s a fine line to balance in microbiology.”

Speaking of balance, labs that validate their automated sediment counters without running a sufficient number of samples, including abnormal ones, can skew their correlation results. Anthony Butch, PhD, chief of clinical chemistry and toxicology at UCLA, points out that laboratories will see, for example, “more agreement or concordance for red cell counts if the urine samples analyzed contain large numbers of red cells that are five times or more higher than the cutoff for normal versus abnormal cell counts.”

Conversely, suppose your cutoff is 10 and you select urines that mostly have red cell counts ranging from five to 15. “Then the concordance will probably not be as good if compared to urine samples that have 50 to 100 red cells in them,” he says. “So the selection of urine samples is critical when interpreting whether the two methods agree based on the criteria of normal versus abnormal.”

His advice: “Try to select samples that cover a wide range of red cell concentrations based on the result that was obtained by the current method.” You can use dipstick readings to help select samples, he says.
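
One way to operationalize that advice is a stratified pull from the current method's results, as in the sketch below; the bins and per-bin target are hypothetical.

```python
# Sketch: select correlation specimens that span a wide range of red cell
# concentrations based on the current method's results. Bins are hypothetical.

import random

def select_validation_samples(results, bins=(0, 5, 15, 50, 100, float("inf")),
                              per_bin=10, seed=0):
    """results: list of (specimen_id, rbc_per_hpf) from the current method.
    Returns a stratified selection with up to `per_bin` specimens per range."""
    rng = random.Random(seed)
    selected = []
    for lo, hi in zip(bins, bins[1:]):
        stratum = [sid for sid, rbc in results if lo <= rbc < hi]
        rng.shuffle(stratum)
        selected.extend(stratum[:per_bin])
    return selected

# Example with made-up worklist data
worklist = [(f"S{i:03d}", rbc) for i, rbc in enumerate([0, 2, 7, 12, 9, 40, 75, 150, 3, 22])]
print(select_validation_samples(worklist, per_bin=2))
```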

Diane Gaspari, SH(ASCP), core laboratory manager for WellSpan York Hospital, York, Pa., which has a Sysmex UF-1000i, advises using about 60 abnormal urine samples for a correlation study. “The samples with pathological casts, yeasts, crystals, etc., would be included in the 60 abnormal urine samples.”

Dr. Butch says when his laboratory validated the Iris iQ200 over a decade ago, he and colleagues had difficulty finding urines with casts other than hyaline casts. “In fact, we really did not fully evaluate the ability of the iQ200 to correctly identify cellular casts since only a limited number of urine samples with [those] were available at the time of the study.”

One way to ensure getting a sufficient number and variety of abnormal samples is to extend the validation, which is what Dr. Skelton’s laboratory did. “Once we had the analyzer in and the technologists were still doing everything with manual microscopy, if they saw an interesting sample, they’d run it on the analyzer,” Dr. Skelton says. For example, they’d run any sample that had pathological casts in it. As a result, “we got a real good sense of how it behaved on all the positives. So there was actually a period of validation that went on for a few months.”

Dr. Skelton says he’d recommend that approach but doesn’t think most labs validate that way due to financial pressure from their institutions to go live right away after buying a new sediment analyzer. By contrast, he says, his lab “had a little more luxury of having more time to optimize [its] process.” That extra leeway may be a consequence of Information Technology having paid for the automated sediment analyzer “with long-term clinical efficiency goals in mind,” he says. “Operations didn’t support the switch because they didn’t see enough financial return in the lab budget.”

Rafferty of Virginia Mason says he had two reasons for supporting his lab acquiring an automated sediment analyzer and neither had anything to do with cutting workload. One was “to create a little more consistency with microscopy. Visual urine microscopy is notoriously subjective.” The other reason, he finds, is that people new to performing manual microscopy sometimes search too long for abnormalities in normal urines. “Especially with bacteria. A lot of people mistake amorphous material for bacteria and report out a 3+ for bacteria and nothing grows in the culture. If the Arkray can expedite [analyzing those negative urines] without any flags, then laboratory staff can spend more time on the difficult specimens.”

Some might say the manual reviews are “where the gold is,” says Edward P. Fody, MD, president of Western Michigan Pathology Associates in Holland, Mich. “In blood smears, those are going to be the leukemias and things like that. In urinalysis, those are going to be the ones with the abnormal casts and things that really tell you something about what is going on with the patient’s urinary tract.”

Karen Lusky is a writer in Brentwood, Tenn.

